Antony P Joseph on 2 Apr 2010 15:27:59 -0700



Re: [PLUG] postgres data loading


Hi

   You can add a dummy column with an incrementing value to both the input
file and the table. When a copy fails, get the maximum value of that column
from the table, chop the file just past that point, and restart the copy
from there. If you are loading in parallel, you can use a numbering scheme
like 1.1, 1.2, 2.1, 2.2, etc. If you have a fixed-format file and an index
on the dummy column, you are very lucky.
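   A minimal sketch of that idea in Python, assuming psycopg2, a CSV input
file with one record per line whose first column is the incrementing value,
and a hypothetical table import_data whose first column is called seq (all
of those names are just placeholders). Since COPY rolls everything back when
it fails, the file is loaded in committed chunks, so max(seq) tells you where
the last successful chunk ended:

import psycopg2
from io import StringIO

CHUNK = 50000   # rows per committed COPY; pick whatever suits your data

def copy_chunk(conn, lines):
    # COPY one chunk and commit it; on any error roll back just this chunk.
    try:
        with conn.cursor() as cur:
            cur.copy_expert("COPY import_data FROM STDIN WITH (FORMAT csv)",
                            StringIO("".join(lines)))
        conn.commit()
    except psycopg2.Error:
        conn.rollback()
        raise   # inspect/fix the offending rows, then rerun the script

def load(path, dsn="dbname=mydb"):
    conn = psycopg2.connect(dsn)
    with conn.cursor() as cur:
        # How far did the previous (possibly failed) run get?
        cur.execute("SELECT coalesce(max(seq), 0) FROM import_data")
        done = cur.fetchone()[0]

    buf = []
    with open(path) as f:
        for line in f:
            seq = int(line.split(",", 1)[0])
            if seq <= done:      # already committed by an earlier run
                continue
            buf.append(line)
            if len(buf) >= CHUNK:
                copy_chunk(conn, buf)
                buf = []
        if buf:
            copy_chunk(conn, buf)
    conn.close()

Rerunning the same script after fixing a bad row skips everything up to
max(seq) and carries on, so one bad record costs you a chunk, not the whole
load.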

With regards
Antony
John Karr wrote:
> I'm working on a database that involves periodic replacement of a 10-million-record
> database from an external source.
>
> The postgres bulk load (copy from) is very sensitive to errors and aborts
> the entire operation on a single bad record, so my import script adds
> records one at a time so I can deal with or ignore failures.
>
> On my test machine it takes nearly a day to load the database this way. I
> tried using transactions, but only got about a 30% speed boost, at the cost
> of a single error crashing the whole batch.
>
> Any ideas on how to do this load faster? Normally this load would occur
> about once every 3 months, but in times of peak activity it could be weekly,
> and it wouldn't be acceptable to have the application down for a day for
> every load.
>
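For what it's worth, the two approaches John describes (batched transactions
for speed, row-at-a-time loading for error isolation) can also be combined:
insert each batch in one transaction, and only if that batch fails redo it
row by row so just the bad rows are lost. A rough sketch, again assuming
psycopg2 and a hypothetical two-column import_data table (adjust the INSERT
and the row tuples to your real schema):

import psycopg2

BATCH = 1000   # rows per transaction; tune for your hardware

def flush(conn, sql, batch):
    try:
        with conn.cursor() as cur:
            cur.executemany(sql, batch)     # one transaction per batch
        conn.commit()
    except psycopg2.Error:
        conn.rollback()
        # The batch had at least one bad row: redo it one row at a time so
        # only the genuinely bad rows are skipped (or logged for later).
        for row in batch:
            try:
                with conn.cursor() as cur:
                    cur.execute(sql, row)
                conn.commit()
            except psycopg2.Error:
                conn.rollback()             # skip/log this single row

def load_batches(rows, dsn="dbname=mydb"):
    # rows: iterable of tuples matching import_data's columns
    conn = psycopg2.connect(dsn)
    sql = "INSERT INTO import_data VALUES (%s, %s)"
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= BATCH:
            flush(conn, sql, batch)
            batch = []
    if batch:
        flush(conn, sql, batch)
    conn.close()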

___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug