Joe Laudadio on Sun, 23 Apr 2000 11:15:07 -0400 (EDT)
On Sat, 22 Apr 2000, Martin DiViaio wrote:

> wget by default will attempt to download the url in question and put it
> in a directory named whatever the main part of the url happens to be
> (i.e. www.whatever.com). Also, wget may parse the page it receives and
> attempt to get whatever links the page points to (I don't remember off
> hand if it will do this by default) and then parse the next html page it
> receives and so on. This isn't infinite; it will only go five layers
> deep by default, but can add a lot of data to your home directory.

wget will only do this if you explicitly tell it to (the -r option, I
think). The default behavior is to retrieve only the page you asked for;
even images embedded in the page will not be downloaded.

mg
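
For reference, here is roughly what those options look like on the command
line. This is only a sketch assuming a reasonably recent wget (flag names
taken from its documentation), and www.whatever.com is just the placeholder
host from Martin's message:

    # Default behavior: fetch only the single page named on the command line
    wget http://www.whatever.com/

    # -r turns on recursive retrieval; -l sets the depth
    # (the documented default is 5 levels)
    wget -r -l 5 http://www.whatever.com/

    # -P saves everything under a directory of your choosing instead of
    # the current directory
    wget -r -P ~/mirror http://www.whatever.com/

Newer versions also have -p (--page-requisites), which pulls in the inline
images and other files needed to display a single page.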