gabriel rosenkoetter on Mon, 10 Jun 2002 18:00:17 +0200
[Deleting the In-Reply-To: line on purpose so that this breaks into a separate thread in threaded mail readers.]

On Mon, Jun 10, 2002 at 10:00:21AM -0400, Chris Beggy wrote:
> Gabriel makes a really good point, namely that Linux's nfs
> implementation compares poorly with others. (In my case, it
> compares poorly with Sun's nfs. In addition, Linux nfs doesn't
> seem to play well with others, like some versions of FreeBSD
> nfs.)

Well, cheer up: it sucks a lot less than it used to. It mostly works as a client these days (which is, thankfully, all *I* ever need it to do with Solaris), though its speed is still significantly off. I don't know where it stands on the security issues (predictable temp file handles; giving anyone access to a file system via UDP NFS gives everyone access to it), but I'd hazard a guess: not too well. It's also unclear how well exporting certain file system types will work, especially across a reboot of the server. (FAT, for instance, has no way to store a lot of the information that NFS presumes the backing FS implementation will be storing; cf. the file handle problems.)

> Why is it bad?

It's a complicated protocol.

> Why hasn't it gotten much better?

Nobody's had that itch (or, at least, nobody's scratched it sufficiently yet). NFS support is a real dog, too, because Sun sort of does whatever the hell they like (which they can get away with because their changes do make things better), and then is a little ornery about explaining what they've done. (They'll tell you the external protocol, but they won't explain how they've made that play nicely with their FS, all of that being proprietary, and you can't even look at their code for it anymore.)

> Is AFS a solution?

Depends on the situation. AFS is definitely nowhere near as fast as NFSv3 (which I don't think Linux even speaks anyway); it's applicable in situations where you've got a lot of potentially disconnected nodes on a WAN but you want them to be able to see a shared file system whenever possible.
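As an aside, the FAT caveat above is easy to sketch. An NFS client caches an opaque file handle and expects it to keep resolving across server reboots; a backing FS with no stable per-file identifier (FAT has no persistent inode number) forces the server to synthesize handles from something unstable. This toy sketch (invented names, not real NFS code) shows the failure mode:

```python
# Toy illustration of why exporting an FS without stable file IDs
# breaks NFS file handles. fat_handle() is an invented stand-in for
# a server synthesizing a handle from an unstable property -- here,
# a file's position in a directory scan.

import hashlib

def fat_handle(path, scan_order):
    # No persistent inode number available, so the handle is derived
    # from directory-scan order, which can change at any time.
    return hashlib.sha1(f"{scan_order}:{path}".encode()).hexdigest()[:16]

# Before the server reboots: /export/a happens to be entry 0.
h1 = fat_handle("/export/a", scan_order=0)

# After a reboot (or after files are created/deleted), the scan order
# shifts; the same path yields a different handle, so the client's
# cached handle goes stale (the ESTALE the NFS client reports).
h2 = fat_handle("/export/a", scan_order=1)

print(h1 == h2)  # False: the old handle no longer resolves
```

A real server on ext2/FFS just packs the inode number and generation count into the handle, which is why those file systems export cleanly and FAT doesn't.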
CODA is probably a better solution in many cases these days.

Imho, a *real* solution to the networked file system problem involves embedded crypto (with symmetric keying, of course; using asymmetric keying for this would slow it down way too much, and you're not buying yourself much if you've got a man-in-the-middle anyway) and truly distributed backing store.

Berkeley's xFS (not to be confused with the X font server or SGI's XFS, which is a local file system) is a networked, distributed file system used in the GLUnix cluster out there. It rocks. You dedicate some portion of a given disk to the xFS cluster, then you can just write things into that partition. When a given node gets a write lock on a file, the file is transferred to that node on writes. Subsequently, it's served from that node until someone else gets a write lock. (This works fine on a fast, tightly connected network. If you haven't got one of those, you already wanted AFS or CODA anyway.)

I took a preliminary look at importing xFS into NetBSD (doing so is pretty trivial on the NetBSD side, as hooking another network file system into the kernel just requires mirroring the NFS structure and adding a few fields to an in-kernel data structure, but it requires reshaping some of the xFS stuff so it's actually portable), and I'll probably get back to it. I don't know how easy it would be to do the same with the Linux kernel. (I recall looking at Linux's NFS stuff, but I don't recall whether it looked reasonable or revolting without going back to check.)

--
gabriel rosenkoetter
gr@eclipsed.net
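P.S. The xFS ownership migration described above reduces to a simple rule: the node that takes the write lock becomes the file's home, and reads are served from there until ownership moves again. A toy model (all names invented, nothing like the real xFS code):

```python
# Toy model of write-lock ownership migration: whichever node last
# took the write lock serves subsequent reads of that file.

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes
        self.owner = {}   # file -> node currently serving it
        self.data = {}    # file -> contents

    def write(self, node, path, contents):
        # Taking the write lock migrates the file to the writer.
        self.owner[path] = node
        self.data[path] = contents

    def read(self, reader, path):
        # Reads are served from the current owner, wherever it is.
        return self.owner[path], self.data[path]

c = Cluster(["a", "b", "c"])
c.write("a", "/xfs/log", "v1")
print(c.read("b", "/xfs/log"))   # served from node "a"
c.write("c", "/xfs/log", "v2")
print(c.read("b", "/xfs/log"))   # ownership has migrated to node "c"
```

The real system's hard parts (lock revocation, crash recovery, striping the backing store across disks) are exactly what's elided here, and exactly what makes it want a fast, tightly connected network.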