Adam Turoff on 17 Oct 2003 18:38:02 -0400
On Fri, Oct 17, 2003 at 06:06:30PM -0400, W. Chris Shank wrote:
> I'm trying to move a 28G file from Linux to OS X. Transmit fails after
> 4G. scp fails after about the same. I'm trying to avoid cutting it up -
> I already tried to make one 5.6G file - but that fails too (no surprise).
> Unless I can take my 28G tarball and slice it into 7 4G pieces and
> reassemble them on the other side. It just really bothers me that it's
> bombing at 4G. I assume because the file is being cached in RAM. Any way
> to force scp to incrementally write the file to disk?

Sounds like you're using tools that aren't large-file aware. That would
explain why, no matter what you do, you can't write more than 2^32 bytes.

I'm pretty sure HFS+ can handle large files. Try using a tool that uses
64-bit file offsets.

Z.

___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug
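
As a rough sketch of the split-and-reassemble workaround mentioned above
(the filenames, chunk size, and destination path are placeholders, and it
assumes the split on the Linux side and the cat on the OS X side are both
large-file aware, which GNU fileutils and BSD cat of that era generally are):

    # On the Linux box: record a checksum, then split into ~2 GB pieces,
    # each safely under the 2^32-byte limit that 32-bit tools choke on
    md5sum backup.tar
    split -b 2000m backup.tar backup.tar.part.

    # Copy the pieces one at a time (each is small enough for scp)
    scp backup.tar.part.* user@osx-host:/some/destination/

    # On the OS X box: reassemble and compare against the Linux checksum
    cat backup.tar.part.* > backup.tar
    md5 backup.tar

Because split names the pieces .aa, .ab, .ac, and so on, the shell glob in
the cat step already concatenates them in the right order.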