gabriel rosenkoetter on Tue, 5 Jun 2001 20:30:05 -0400
On Tue, Jun 05, 2001 at 07:53:18PM -0400, Tim Peeler wrote:
> 8k sectors??? And I thought 4k was wasteful.

Um, yeah, but the actual, physical sector size is defined by the hardware, and it *is* almost always 512 bytes, both for SCSI and IDE. And yes, I really did do a little research this time. See:

http://www.dorsai.org/~dcl/publications/NTLDR_Hacking/
http://www.netbsd.org/cgi-bin/query-pr-single.pl?number=7652

(I'm actually not trying to be BSD-centric this time around; those are the first two hits on Google for "physical sector size"... in reverse order, even.)

Other storage media have other sector sizes (a CD sector is 2048 bytes; floppies come in various and sundry sizes, usually smaller than 512 bytes).

Slight branch in topic: do solid-state storage media have sector sizes? I'd think not, or that the writing software could at least demand exactly what it wanted, but I don't know.

Getting back to the point: the block size, as defined in the disklabel, just defines how large a chunk of disk a single block in the file system code refers to. Looked at that way, 8k seems like a pretty reasonable number (think how much more memory the fs code would occupy if it had to track a buffer for every physical disk sector). This is definitely a tradeoff that anyone tuning a file system for a specific workload would want to consider (finer-grained blocks for lots of small files; larger-grained blocks for nothing but huge files or uniform sequential access; etc.), but I've got very little feel for what "right" would be, or for how one would measure whether one was improving or worsening the situation.

A search for "file system tune" turns up plenty of hits on Google, including a Bell Labs paper or two, which I'll maybe take the time to read once I'm finished with the last little (late) bit of my course work for this past semester. (Oh, and I'd hoped to come to tonight's meeting--it would have been my first--but I can't, for the same reason. Nuts.)
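The small-files-versus-big-files tradeoff above is easy to put rough numbers on. Here's a minimal sketch; the file sizes are invented for illustration, not measured from any real disk:

```python
def allocated(size, block_size):
    """Bytes a file actually consumes on disk: its size rounded up
    to a whole number of blocks."""
    return ((size + block_size - 1) // block_size) * block_size

# hypothetical file sizes, in bytes
files = [100, 1500, 6000, 200_000]

for bs in (512, 4096, 8192):
    used = sum(allocated(s, bs) for s in files)
    waste = used - sum(files)
    print(f"{bs:5d}-byte blocks: {used} bytes allocated, {waste} wasted")
```

For mostly-small files the wasted slack grows quickly with block size, while for one big sequentially-read file the overhead of a large block is negligible -- which is exactly the tuning tradeoff described above.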
Would practical file system tuning be an interesting meeting topic? Has it already been done?

> That seems reasonable, but I don't know. I know that du does default
> to report usage in certain block sizes, but as to why I don't know.
> I know RedHat had used 4096 byte block sizes to create the filesystem
> and I remember seeing du report in 4k blocks, perhaps it's as simple
> as what the person likes.

Hrm. So perhaps GNU du does query the fs for what block size it likes? BSD du pretty clearly does not. Neither man page discusses this even slightly, and I don't really feel like digging through the code at the moment.

Cheers...

~ g r @ eclipsed.net
______________________________________________________________________
Philadelphia Linux Users Group - http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mail/listinfo/plug-announce
General Discussion - http://lists.phillylinux.org/mail/listinfo/plug
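On the du question, here's a hedged sketch of what du-like tools actually report. It assumes the common convention that st_blocks is counted in 512-byte units (true on Linux and the BSDs, though POSIX leaves the unit unspecified), and uses statvfs to show the block size the filesystem itself advertises -- the value a du could in principle query:

```python
import os
import tempfile

# Write a small file whose apparent size won't match a whole block.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1000)   # a 1000-byte file
    path = f.name

st = os.stat(path)
usage = st.st_blocks * 512     # assumes 512-byte st_blocks units
vfs = os.statvfs(path)

print("apparent size:", st.st_size)   # 1000
print("disk usage   :", usage)        # rounded up to whole blocks
print("fs block size:", vfs.f_frsize) # what the fs itself prefers

os.unlink(path)
```

The gap between "apparent size" and "disk usage" is the same rounding-to-blocks effect as above; whether a given du prints its answer in 512-byte, 1k, or the filesystem's own units is then just a presentation choice.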