I don't know the details of NTFS, but under FAT32 (and the older Windows file systems), the reason for defragging has nothing to do with recovering space. Yes, hard drive space was allocated in clusters of several sectors, and if a file's size wasn't an even multiple of the cluster size, the tail end of its last cluster was wasted. Defragging did not fix this.
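To put rough numbers on that slack (the 32 KB cluster size below is just an example; real cluster sizes depend on how the partition was formatted):

```python
import math

def slack_bytes(file_size: int, cluster_size: int = 32 * 1024) -> int:
    """Bytes lost to rounding the file up to whole clusters."""
    clusters = math.ceil(file_size / cluster_size)   # clusters actually allocated
    return clusters * cluster_size - file_size       # allocated minus actual size

# A 10 KB file stored in 32 KB clusters wastes 22 KB of slack,
# and moving the file around (defragging) doesn't change that at all.
print(slack_bytes(10 * 1024))   # 22528 bytes
```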
The reason for defragging was performance. The FAT in FAT32 stands for "file allocation table": a block of data at the beginning of the partition that records, for each cluster, which cluster of the file comes next. When reading a large, fragmented file, the drive had to seek back to the FAT for the next entry every time it reached the end of a fragment, and seeking (moving the read head to a different track) is much slower than reading contiguous data.
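Here's a toy model of that chain-walking, assuming no caching of the table (all the cluster numbers and the end-of-chain value are invented for illustration; the real FAT is an array near the start of the partition and real drivers cache parts of it, but the back-and-forth access pattern is the same):

```python
# Toy model of a FAT chain: entry N holds the number of the cluster that
# follows cluster N in whatever file owns it. Numbers are made up.
EOC = -1   # stand-in for the real end-of-chain marker

fat = {100: 101, 101: 102, 102: 517, 517: 518, 518: EOC}   # a file in two fragments

def clusters_of(first_cluster):
    """Walk the chain the way the OS has to: one FAT lookup per cluster."""
    chain = []
    cluster = first_cluster
    while cluster != EOC:
        chain.append(cluster)    # read the data in this cluster...
        cluster = fat[cluster]   # ...then consult the FAT again for the next one
    return chain

print(clusters_of(100))   # [100, 101, 102, 517, 518] -- note the jump at 102 -> 517
```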
I don't know Linux file systems very well, but I know they don't use a FAT; as I understand it, each file's inode keeps its own map of the blocks (or extents) the file occupies.
Regardless, the point is that Linux wastes much less time navigating from one fragment to the next because it doesn't have to keep jumping back to a FAT at the front of the disk. Fragmentation carries a smaller penalty, so there's less to be gained by fixing it.
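For contrast, here's an equally rough sketch of an extent-style layout (loosely modelled on how ext4 keeps the block map with the inode; the block numbers are hypothetical and match the toy FAT example above):

```python
# Rough sketch of an extent-based layout: the inode itself stores
# (start_block, length) runs, so following a fragmented file never requires
# a trip back to a global table at the front of the disk.
from typing import NamedTuple

class Extent(NamedTuple):
    start_block: int
    length: int

# Hypothetical inode for the same two-fragment file as above.
inode_extents = [Extent(start_block=100, length=3), Extent(start_block=517, length=2)]

def blocks_of(extents):
    for ext in extents:
        for block in range(ext.start_block, ext.start_block + ext.length):
            yield block   # the location of the next fragment is already in hand

print(list(blocks_of(inode_extents)))   # [100, 101, 102, 517, 518]
```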
IDE drives also threw a new variable into the mix. The cylinder/head/sector geometry reported to the OS may not correspond to the physical layout. Do contiguous logical sectors necessarily map to contiguous physical sectors? Not anymore. Of course, the drive vendors care about performance, so they aren't going to just scatter sectors randomly.