Mark M. Hoffman on 26 Feb 2009 16:28:32 -0800
Hi Gabe:

* gabriel rosenkoetter <gr@eclipsed.net> [2009-02-26 16:23:36 -0500]:
> At 2009-02-26 13:23 -0500, Mark M. Hoffman <mhoffman@lightlink.com> wrote:
> > Hi Walt, Gabe:
> >
> > > On Thu, Feb 26, 2009 at 12:08:14PM -0500, gabriel rosenkoetter wrote:
> > > > I actually question the sanity of doing the flac->mp3 processing on
> > > > the fly all the time, which entails a non-trivial memory and CPU hit.
> >
> > My home file server is not CPU bound. I guess this is not uncommon. Also,
> > at 128 kb/s encoding, 8 minutes of mp3 is less than 8MB of memory. No big
> > deal.
>
> No big deal... if only one file within a given mp3fs logical volume
> is being read at a time.
>
> > 25 streams of FLAC without mp3fs would be equally hard on the disk, no?
>
> Sure, it would, but that's why I was discussing the CPU and memory
> as scaling issues, not the disk I/O.
>
> > Depends on the read/write speed of the device, too. They're not usually
> > all that fast. I ran the following on a few different mp3fs files:
> >
> > $ time dd if=blah.mp3 of=/dev/null
> >
> > I get about 340 kB/s throughput with an Intel Core2 6600. I'll have to
> > compare that against the speed of my wife's ipod later.
>
> What's your CPU utilization while doing that?

I would have assumed 100%. In fact it used only 50%, i.e. all of one core
in a two-core system. I fired off two instances of the command above on two
different files, and that hit 100% (both cores) as expected.

> How many of those can you have running at the same time?

I fired off three instances and it worked as expected: 100% CPU on both
cores, and the throughput for each individual file was reduced. Overall
throughput stayed the same. I know that doesn't answer your exact question,
but you may extrapolate from there.

> (To be clear, these are purely academic questions. I think it sounds
> pretty cool, and, although I don't have any quantity of FLACs
> floating around, I do have a stack of oggs, and something like this
> is a more complete solution than, for example, plugins for iTunes.)
>
> > One downside of this filesystem involves the id3 tags... a lot of
> > software will just open every file in a directory and read just enough
> > to process all of the tags at once. It looks like older versions of
> > mp3fs didn't handle this well, but the current version does much better.
>
> Huh. Yeah, that's a classic cache miss issue. I wonder if they
> solved it in a generalized sort of way (largely "make the cache
> bigger") or something smarter, given that they only really care
> about a specific target file format.

I'm willing to bet that the earlier versions simply fired off the encoding
thread as soon as the file was opened and ran it to completion whether or
not the output was needed, so the hypothetical 25-file case was a real
problem. ;) It might be even worse than that if the file is opened by two
different processes: a directory-tree/tag viewer and then the decoder
itself. Now it looks like the filesystem will hold off on the encoding
until it's actually needed.

Regards,

--
Mark M. Hoffman
mhoffman@lightlink.com

___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug
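
For anyone who wants to repeat the multi-stream test described above, a
shell loop roughly like the following starts one background reader per
file; the path and file names are only placeholders for whatever sits on
the mp3fs mount, and CPU usage can be watched from a second terminal with
top:

  $ cd /mnt/mp3fs/some/album
  $ for f in track1.mp3 track2.mp3 track3.mp3; do dd if="$f" of=/dev/null & done; wait

With two or more readers running on a dual-core box, both cores should sit
at 100% while per-file throughput drops and total throughput stays roughly
constant, matching the behaviour reported above.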