bergman on 15 Feb 2009 18:26:27 -0800


Re: [PLUG] Backup fun

In the message dated: Sat, 14 Feb 2009 23:54:59 EST,
The pithy ruminations from Adam Zion on 
<[PLUG] Backup fun> were:
=> OK, I purchased a Western Digital Element 1TB external backup drive
=> for my linux server. I have a 300 GB drive to back up, so there's

What filesystem is in use on the external drive? What were the filesystem
creation options?

=> plenty of room to spare.
=> But no. When I use rsync to back up the data, it bombs out w/a drive

Using what options to rsync?

=> full error before even completing the first back up- which would be

Is "drive full" really the error message? Is that returned from rsync, or
logged by the kernel to /var/log/messages or dmesg? Were there other error
messages?

[As an aside--not solely directed to Adam--diagnosing problems via email is
difficult enough...if you (collective) want accurate answers, please provide
useful data, such as:

	the exact command name (with options)

	the exact text of error messages (from the application, from system
	log files, from dmesg, etc...obfuscated for privacy if needed)

	relevant version numbers for applications, distributions, the kernel,
	etc.]



=> far less than 1 TB of data.

The "drive full" error can come from several causes:

[A]	Are you using the "--sparse" option to rsync? If not, then any
		"sparse" files[1] will be recreated as real files on the
		target drive.

	For example, here's a 4GB sparse file on a filesystem that has only
	1GB available. If I rsync that file without using "--sparse", I'd get
	a 4GB file.

		[bergman@mirchi tmp]$ df -hl .
		Filesystem            Size  Used Avail Use% Mounted on
		/dev/sda11            2.3G  815M  1.5G  36% /var
		[bergman@mirchi tmp]$ dd if=/dev/zero of=sparse-file bs=1 count=0 seek=4196M
		0+0 records in
		0+0 records out
		0 bytes (0 B) copied, 2.5423e-05 s, 0.0 kB/s
		[bergman@mirchi tmp]$ df -h .
		Filesystem            Size  Used Avail Use% Mounted on
		/dev/sda11            2.3G  815M  1.5G  36% /var
		[bergman@mirchi tmp]$ ls -l sparse-file
		-rw-rw-r-- 1 bergman bergman 4399824896 Feb 15 20:56 sparse-file
		[bergman@mirchi tmp]$ ls -lh sparse-file 
		-rw-rw-r-- 1 bergman bergman 4.1G Feb 15 20:56 sparse-file

[B] 	Are you certain that the filesystem is out of space, not out of
	inodes? What does "df -h" show on the external drive? What does
	"df -hi" show?

	When you created the filesystem on the external drive, what options
	did you give? If you don't specify options, filesystems are created in
	a way that attempts to be appropriate based on the size of the
	drive....for larger drives, this means larger block sizes and
	a correspondingly smaller number of inodes. If the source disk
	contains an extremely large number of small files, it's possible that
	it simply has more files than there are inodes on the target disk.
	This usage--a very high count of very small files--was typical
	for Usenet news systems about a decade ago, so you may see mkfs or
	documentation options that refer to creating a filesystem intended for
	news spools.
=> Since I highly doubt that cluster overhang is an issue, can anyone
=> else think why this drive would be running into such an error? When I
=> check it on Windows it reports as a 1 TB drive, so it's unlikely to be

Hmmm...that implies to me that you haven't formatted the disk specifically for
Linux, and that it's running a Windows-compatible filesystem (vfat or ntfs).
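One quick way to confirm what the drive is actually formatted as (the
mount point is a placeholder):

```shell
# Report the filesystem type alongside the usual usage figures:
df -T /mnt/backup

# Or read it straight from the mount table:
mount | grep backup
```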

Some filesystems don't support symbolic links, and if you're running rsync
with the "--copy-links" option, that will turn any symbolic links on the
source filesystem into real files on the target drive.

=> a physical error w/the drive.


Mark Bergman    Biker, Rock Climber, Unix mechanic, IATSE #1 Stagehand


Philadelphia Linux Users Group         --