
sql - Bulk/batch update/upsert in PostgreSQL

I'm writing a Django ORM enhancement that attempts to cache models and postpone saving them until the end of the transaction. It's almost all done, but I came across an unexpected difficulty in SQL syntax.

I'm not much of a DBA, but from what I understand, databases don't really work efficiently with many small queries; a few bigger queries are much better. For example, it's better to use one large batch insert (say, 100 rows at once) than 100 one-liners, as sketched below.
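For illustration, a minimal sketch of the difference, using a hypothetical items table:

-- 100 one-liners: one statement (and potentially one round trip) per row.
INSERT INTO items (id, some_col) VALUES (1, 'first');
INSERT INTO items (id, some_col) VALUES (2, 'second');
-- ... 98 more ...

-- One batch insert: a single statement carrying all the rows.
INSERT INTO items (id, some_col)
  VALUES (1, 'first'), (2, 'second'), (3, 'third');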

Now, from what I can see, SQL doesn't really supply any statement to perform a batch update on a table. The term seems to be confusing, so I'll explain what I mean: I have an array of arbitrary data, each entry describing a single row in a table. I'd like to update certain rows in the table, each using the data from its corresponding entry in the array. The idea is very similar to a batch insert.

For example: my table could have two columns, "id" and "some_col". The array describing the data for a batch update consists of three entries: (1, 'first updated'), (2, 'second updated'), and (3, 'third updated'). Before the update the table contains the rows (1, 'first'), (2, 'second'), (3, 'third'); after it, those rows should read (1, 'first updated'), (2, 'second updated'), (3, 'third updated').

I came across this post:

Why are batch inserts/updates faster? How do batch updates work?

which seems to do what I want, but I can't really figure out the syntax at the end.

I could also delete all the rows that require updating and reinsert them with a batch insert, but I find it hard to believe this would actually perform any better.

I work with PostgreSQL 8.4, so some stored procedures are also possible here. However, as I plan to open-source the project eventually, more portable ideas, or ways to do the same thing on a different RDBMS, are most welcome.

Follow-up question: how do you do a batch "insert-or-update"/"upsert" statement?

Test results

I performed 100 passes of 10 insert operations spread over 4 different tables (1,000 inserts in total), on Django 1.3 with a PostgreSQL 8.4 backend.

These are the results:

  • All operations done through the Django ORM: each pass ~2.45 seconds;
  • The same operations without the Django ORM: each pass ~1.48 seconds;
  • Only insert operations, without querying the database for sequence values: ~0.72 seconds;
  • Only insert operations, executed in blocks of 10 (100 blocks in total): ~0.19 seconds;
  • Only insert operations, one big execution block: ~0.13 seconds;
  • Only insert operations, about 250 statements per block: ~0.12 seconds.

Conclusion: execute as many operations as possible in a single connection.execute(). Django itself introduces a substantial overhead.

Disclaimer: I didn't introduce any indices apart from default primary key indices, so insert operations could possibly run faster because of that.


1 Reply


Bulk insert

You can modify the three-column bulk insert posted by Ketema:

INSERT INTO "table" (col1, col2, col3)
  VALUES (11, 12, 13) , (21, 22, 23) , (31, 32, 33);

It becomes:

INSERT INTO "table" (col1, col2, col3)
  VALUES (unnest(array[11,21,31]), 
          unnest(array[12,22,32]), 
          unnest(array[13,23,33]))

Replacing the values with placeholders:

INSERT INTO "table" (col1, col2, col3)
  VALUES (unnest(?), unnest(?), unnest(?))

You have to pass arrays or lists as arguments to this query. This means you can do huge bulk inserts without doing string concatenation (and all its hassles and dangers: SQL injection and quoting hell).
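Note that on newer PostgreSQL versions (10 and later), set-returning functions such as unnest are no longer accepted inside a VALUES list, so there the same trick should be written with SELECT instead; a sketch with the same data as above:

INSERT INTO "table" (col1, col2, col3)
  SELECT unnest(array[11,21,31]),
         unnest(array[12,22,32]),
         unnest(array[13,23,33]);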

Bulk update

PostgreSQL has added the FROM extension to UPDATE. You can use it in this way:

update "table" 
  set value = data_table.new_value
  from 
    (select unnest(?) as key, unnest(?) as new_value) as data_table
  where "table".key = data_table.key;

The manual is missing a good explanation, but there is an example on the postgresql-admin mailing list. I tried to elaborate on it:

create table tmp
(
  id serial not null primary key,
  name text,
  age integer
);

insert into tmp (name,age) 
values ('keith', 43),('leslie', 40),('bexley', 19),('casey', 6);

update tmp set age = data_table.age
from
(select unnest(array['keith', 'leslie', 'bexley', 'casey']) as name, 
        unnest(array[44, 50, 10, 12]) as age) as data_table
where tmp.name = data_table.name;

There are also other posts on StackExchange explaining UPDATE ... FROM with a VALUES clause instead of a subquery. They might be easier to read, but are restricted to a fixed number of rows; a sketch of that form follows.
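A minimal sketch of the VALUES-based form, reusing the tmp table from the example above:

update tmp set age = data_table.age
from (values ('keith', 44), ('leslie', 50)) as data_table(name, age)
where tmp.name = data_table.name;

Every row has to be written out literally here, which is why this form only suits a fixed, known number of rows.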
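As for the follow-up question about a batch upsert: PostgreSQL 8.4 has no built-in upsert, so there you would need a stored procedure (or a delete-then-reinsert inside a transaction). On PostgreSQL 9.5 and later, INSERT ... ON CONFLICT combines with unnest into a batch upsert. A sketch, assuming a unique constraint on name (the tmp table above doesn't have one, so it is added first):

-- ON CONFLICT needs a unique constraint to target:
alter table tmp add unique (name);

insert into tmp (name, age)
select unnest(array['keith', 'dana']),  -- 'keith' exists, 'dana' is new
       unnest(array[45, 33])
on conflict (name) do update
  set age = excluded.age;               -- existing rows updated, new ones inserted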

