Walt Mankowski on 2 Apr 2010 12:04:59 -0700
On Fri, Apr 02, 2010 at 02:53:54PM -0400, John Karr wrote:
> I'm working on a database that involves periodic replacement of a
> 10 million record database from an external source.
>
> The postgres bulk load (copy from) is very sensitive to errors and
> crashes the entire operation on just one error, so my import script
> adds records one at a time so I can deal with or ignore failures.
>
> On my test machine it takes nearly a day to load the database this
> way. I tried using transactions, but only got about a 30% speed
> boost, at the cost of a single error in a batch crashing the whole
> batch.
>
> Any ideas on how to do this load faster? Normally this load would
> occur about once every 3 months, but in times of peak activity it
> could be weekly, and it wouldn't be acceptable to have the
> application down for a day for every load.

Is there any way you can check the new data for errors before
starting the bulk copy?
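For example, a small preprocessing pass could split the dump into
clean rows and rejects, so COPY only ever sees data it can load.
This is just a sketch -- the tab-delimited format, the three-column
layout, and the integer-id check are assumptions, not your actual
schema:

    import sys

    def row_ok(line):
        # hypothetical checks: 3 tab-separated columns, integer id
        fields = line.rstrip('\n').split('\t')
        if len(fields) != 3:
            return False
        try:
            int(fields[0])
        except ValueError:
            return False
        return True

    good = open('clean.tsv', 'w')
    bad = open('rejects.tsv', 'w')
    for line in open(sys.argv[1]):
        (good if row_ok(line) else bad).write(line)
    good.close()
    bad.close()

Then COPY the clean file in one shot and deal with rejects.tsv
separately, at your leisure.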
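Failing that, since you already saw a speedup from transactions, you
could batch the inserts and fall back to row-at-a-time only when a
batch fails, so a single bad record costs you one batch retry instead
of killing the run. A rough sketch with psycopg2 -- the table and
column names are made up:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    SQL = "INSERT INTO records (id, name) VALUES (%s, %s)"

    def load(rows, batch_size=1000):
        for i in range(0, len(rows), batch_size):
            batch = rows[i:i + batch_size]
            try:
                # optimistic path: whole batch in one transaction
                cur.executemany(SQL, batch)
                conn.commit()
            except psycopg2.Error:
                conn.rollback()
                # pessimistic path: retry this batch row by row,
                # skipping only the rows that actually fail
                for row in batch:
                    try:
                        cur.execute(SQL, row)
                        conn.commit()
                    except psycopg2.Error:
                        conn.rollback()

With clean input, almost every batch commits on the first try, so you
keep most of the batching speedup.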