tilman victor wrote: But can you tell me why seemingly everyone suggests 224 heads and 56 sectors?
Cylinder/head/sector (CHS) values used to be meaningful (in the 1980s). By the early-to-mid-1990s, they became a convenient fiction. By 2000, they were an inconvenient fiction kept alive like some mad scientist's experiment in order to placate old software. Today CHS values are finally being allowed to perish; the new EFI firmware and GPT partitioning system don't use CHS values at all. In fact, even on MBR disks, they're largely irrelevant, especially on disks over about 8GB in size, since the CHS fields in MBR top out at about 8GB. (Larger disks' partitions must
be described by LBA values, not CHS values.)
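For the curious, that ~8GB ceiling falls straight out of the widths of the CHS fields. Here's a quick back-of-the-envelope check, assuming the traditional 512-byte sector:

```python
# The MBR's CHS fields give 10 bits of cylinder, 8 bits of head, and
# 6 bits of sector number (sectors are counted from 1, so 63 max).
cylinders = 1024     # 10-bit field
heads = 255          # 8-bit field, conventionally capped at 255 by BIOSes
sectors = 63         # 6-bit field, numbered 1..63
sector_size = 512    # bytes per logical sector

max_bytes = cylinders * heads * sectors * sector_size
print(max_bytes)              # 8422686720
print(max_bytes / 10**9)      # ~8.42 -- the "about 8GB" limit
```

Anything past that 8,422,686,720th byte simply can't be expressed in CHS form, which is why larger partitions must be described by their LBA fields instead.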
If nowadays everything is virtual, why use exactly THESE parameters---why are multiples of 7 so important?
The numbers you quote above are not
common. Most large disks use values of 255 heads and 63 sectors/track. Those values are used because they're the maximums for the respective fields. The data structures are pretty weird; you can read the details on the Wikipedia page on CHS.
The 224/56 values you quote were sometimes recommended for use in older versions of fdisk because those values are both multiples of 8, not because they're multiples of 7; using multiples of 8 guarantees proper alignment on Advanced Format disks when fdisk aligns on cylinder boundaries. (I don't know why whoever originated this tip used 224 rather than 248. When I wrote the IBM developerWorks article to which I linked in my earlier post, I simply quoted advice I'd read elsewhere rather than look into it in more depth.)
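To see why multiples of 8 do the trick: old fdisk started partitions on cylinder boundaries, i.e., at LBAs that are multiples of heads × sectors-per-track. An Advanced Format (512e) disk has 4096-byte physical sectors, which span 8 logical 512-byte sectors, so any start LBA divisible by 8 is properly aligned. A small sketch:

```python
# With a given CHS geometry, a cylinder boundary falls at
# LBA = cylinder_number * heads * sectors_per_track.
# On a 512e Advanced Format disk, one physical 4096-byte sector
# covers 8 logical 512-byte sectors, so divisibility by 8 = aligned.
def cylinder_starts_aligned(heads, sectors_per_track, logical_per_physical=8):
    sectors_per_cylinder = heads * sectors_per_track
    return sectors_per_cylinder % logical_per_physical == 0

print(cylinder_starts_aligned(224, 56))   # True:  224 * 56 = 12544 = 8 * 1568
print(cylinder_starts_aligned(255, 63))   # False: 255 * 63 = 16065, an odd number
```

With the default 255/63 geometry, by contrast, most cylinder boundaries land mid-way through a physical sector, which is exactly the misalignment the 224/56 tip was meant to avoid.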
And why does everyone want to waste space at the beginning?
It's not wasted. On a typical MBR disk, sector 0 is the MBR, which holds the primary partition table and the first stage of the BIOS boot loader. Space following the MBR is officially unallocated, but is often occupied by the boot loader's second stage, or sometimes it's used by disk encryption software or the like. (This is a major problem with MBR; various tools think they can just dump stuff in this "unallocated" space, and the result can be conflicts over who gets to use it.) On GPT disks, partitioning data extends beyond the first sector. It's possible to start a partition on sector 34 with GPT, although that's inadvisable on Advanced Format disks.
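Incidentally, the number 34 isn't arbitrary: a stock GPT layout reserves a protective MBR, a GPT header, and (at minimum) a partition-entry array of 128 entries of 128 bytes each. Assuming 512-byte sectors, the arithmetic works out like this:

```python
# Default GPT overhead at the start of the disk (512-byte sectors):
#   LBA 0       protective MBR
#   LBA 1       GPT header
#   LBA 2..33   partition-entry array: 128 entries * 128 bytes each
entries, entry_size, sector_size = 128, 128, 512

array_sectors = (entries * entry_size) // sector_size   # 16384 / 512 = 32
first_usable = 1 + 1 + array_sectors                    # MBR + header + array
print(first_usable)   # 34
```

(A backup copy of the header and entry array also occupies the last 33 sectors of the disk, which is why GPT's overhead appears at both ends.)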
There are also the alignment issues to which I alluded. Unfortunately, the system calls that should
tell software what type of partition alignment to use for optimum performance aren't 100% reliable. Therefore, the safest approach today is to align all partitions on 1MiB boundaries, which are safe for the vast majority of disks currently in use. Since 1MiB is a tiny amount of space by modern disk standards, throwing away even most of that space is a small price to pay to get optimum disk performance.
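If you want to verify that a partition follows this rule, the check is simple. Assuming 512-byte logical sectors, 1MiB corresponds to 2048 sectors, so an aligned partition's start LBA is a multiple of 2048:

```python
# 1 MiB = 1048576 bytes = 2048 logical sectors of 512 bytes.
# A start LBA that's a multiple of 2048 is aligned for 4096-byte
# physical sectors and for the larger internal block sizes used by
# most SSDs and RAID arrays.
def is_1mib_aligned(start_lba, logical_sector_size=512):
    return (start_lba * logical_sector_size) % (1024 * 1024) == 0

print(is_1mib_aligned(2048))   # True  -- the modern fdisk/parted default
print(is_1mib_aligned(63))     # False -- the old cylinder-boundary start
```

You can read a partition's start LBA from `fdisk -l` output and apply the same test by hand; parted also offers an `align-check` command for this purpose.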
In former times (about 6 weeks ago) everyone (including fdisk) seemed to be happy with partitions starting at 1.
To the best of my knowledge, Linux's fdisk has never
begun partitions starting at sector
1, although it did at one time begin them at cylinder
1. The Linux fdisk code changed from cylinder 1 (usually sector 63) to sector 2048 a year or more ago, but it's plausible that you only got an upgrade 6 weeks ago because of delays in feeding the software down to whatever you were using. The libparted library (behind GParted, parted, and most other Linux partitioning software) changed at about the same time. Likewise for GPT fdisk, although it may have been a couple of months ahead of the others. IMHO, it's bad that it took this long; it should have been changed years ago, since the coming of Advanced Format disks was known years ago, and the use of cylinder boundaries became pointless over a decade ago.
The drive is for data only---no OS, no booting, and only 1 big storage area is needed.
Divisions by one don't make much sense anyway, do they?
Could I avoid all that hassle if I simply write the filesystem to the whole device?
Will I ever, anyhow, run into trouble that way?
It's possible to use a hard disk unpartitioned, as you suggest. This is inflexible, though. What happens if you want to split the drive for use by two OSes? (Splitting and resizing partitions is possible with tools like GParted, but few or no tools enable you to take a partitionless disk and add a partition table to it while preserving the filesystem that the disk contains.) What if you need a separate partition for a firmware or boot loader need, such as the EFI System Partition (ESP)? What if you need to control where on the disk something goes, as in placing the kernel below a boundary where the firmware can read it (as has been a need with BIOSes of various ages over the years)? What if you want to use different filesystems or mount options for different directories (say, to mount /usr read-only or to separate user data in /home from the system data)? These and many other reasons are why partitions exist, and collectively they're compelling enough that partitions have become pretty much standard on hard disks. Even computers that ship with Windows pre-installed use multiple partitions, although this fact is hidden from casual users.
Add all of this together and you get another reason for partitioning: software expects disks to be partitioned. Software that assumes all disks are partitioned might misbehave when shown a disk that's unpartitioned and used "raw." In a worst-case scenario, this could result in data loss. I've heard of cases of software that gets confused, but I don't recall the details offhand, and I don't recall how dangerous those specific cases were. In any event, given the huge number of disk utilities and the potential for damage should one of them be poorly designed or buggy, playing it safe is prudent.
At worst, partitioning a disk uses about 1MiB of storage space. On a 1TB drive, that's 0.0001% of the disk's capacity, if I've done the math right. That's not worth worrying about compared to even a small risk that some errant utility will flake out and cause damage, or the chance that you'll develop a need for partitioning in the future.
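(For the record, the math does check out. Assuming a decimal-units "1 TB" drive:)

```python
# Roughly 1 MiB is reserved ahead of the first partition on a
# modern-aligned disk; compare that to a "1 TB" (decimal) drive.
overhead_bytes = 1024 * 1024        # 1 MiB
disk_bytes = 10**12                 # 1 TB in the decimal units drives are sold in

percent_lost = 100 * overhead_bytes / disk_bytes
print(percent_lost)                 # ~0.0001048576, i.e., about 0.0001%
```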