python - to_sql pyodbc count field incorrect or syntax error

I am downloading JSON data from an API and using SQLAlchemy, pyodbc, and pandas' to_sql function to insert that data into an MS SQL Server database.

I can download up to 10,000 rows at a time; however, I have to limit the chunksize to 10, otherwise I get the following error:

DBAPIError: (pyodbc.Error) ('07002', '[07002] [Microsoft][SQL Server Native Client 11.0]COUNT field incorrect or syntax error (0) (SQLExecDirectW)') [SQL: 'INSERT INTO [TEMP_producing_entity_details]

There are around 500 million rows to download, and the upload is just crawling at this speed. Any advice on a workaround?

Thanks,


1 Reply


At the time this question was asked, pandas 0.23.0 had just been released. That version changed the default behaviour of .to_sql() from calling the DBAPI .executemany() method to constructing a table-value constructor (TVC) that would improve upload speed by inserting multiple rows with a single .execute() call of an INSERT statement. Unfortunately that approach often exceeded T-SQL's limit of 2100 parameter values for a stored procedure, leading to the error cited in the question.

Shortly thereafter, pandas 0.24.0 added a method= argument to .to_sql(). The default (method=None) restored the previous behaviour of using .executemany(), while specifying method="multi" tells .to_sql() to use the newer TVC approach.

Around the same time, SQLAlchemy 1.3 was released and it added a fast_executemany=True argument to create_engine() which greatly improved upload speed using Microsoft's ODBC drivers for SQL Server. With that enhancement, method=None proved to be at least as fast as method="multi" while avoiding the 2100-parameter limit.

So with current versions of pandas, SQLAlchemy, and pyodbc, the best approach for using .to_sql() with Microsoft's ODBC drivers for SQL Server is to use fast_executemany=True and the default behaviour of .to_sql(), i.e.,

from sqlalchemy import create_engine

# placeholder credentials and host -- substitute your own server details
connection_uri = (
    "mssql+pyodbc://scott:tiger@my_server/db_name"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
engine = create_engine(connection_uri, fast_executemany=True)
df.to_sql("table_name", engine, index=False, if_exists="append")
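(For context: fast_executemany=True causes pyodbc to bind each batch of parameters as arrays and execute the prepared INSERT once per batch rather than once per row, which is where the speed-up comes from.)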

This is the recommended approach for apps running on Windows, macOS, and the Linux variants that Microsoft supports for its ODBC driver. If you need to use FreeTDS ODBC, then .to_sql() can be called with method="multi" and chunksize= as described below.
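If you do go the FreeTDS route, a minimal sketch might look like this. The DSN name my_freetds_dsn, the credentials, and the table name are illustrative placeholders (assumptions, not from the original answer); the DSN is assumed to be configured in odbc.ini to point at the FreeTDS ODBC driver, and tsql_chunksize is computed as in the original answer further down.

from sqlalchemy import create_engine

# hypothetical setup: "my_freetds_dsn" is an odbc.ini DSN pointing at the
# FreeTDS ODBC driver; credentials and table name are placeholders
connection_uri = "mssql+pyodbc://scott:tiger@my_freetds_dsn"
engine = create_engine(connection_uri)

# keep each INSERT under the 2100-parameter limit (see the calculation below)
tsql_chunksize = min(2097 // len(df.columns), 1000)
df.to_sql("table_name", engine, index=False, if_exists="append",
          method="multi", chunksize=tsql_chunksize)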


(Original answer)

Prior to pandas version 0.23.0, to_sql would generate a separate INSERT for each row in the DataFrame:

exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
    0,N'row000'
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
    1,N'row001'
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
    2,N'row002'

Presumably to improve performance, pandas 0.23.0 now generates a table-value constructor to insert multiple rows per call:

exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6),@P3 int,@P4 nvarchar(6),@P5 int,@P6 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2), (@P3, @P4), (@P5, @P6)',
    0,N'row000',1,N'row001',2,N'row002'

The problem is that SQL Server stored procedures (including system stored procedures like sp_prepexec) are limited to 2100 parameters, so if the DataFrame has 100 columns then to_sql can only insert about 20 rows at a time.

We can calculate the required chunksize using

# df is an existing DataFrame
#
# limit based on sp_prepexec parameter count
tsql_chunksize = 2097 // len(df.columns)
# cap at 1000 (limit for number of rows inserted by table-value constructor)
tsql_chunksize = 1000 if tsql_chunksize > 1000 else tsql_chunksize
#
df.to_sql('tablename', engine, index=False, if_exists='replace',
          method='multi', chunksize=tsql_chunksize)

However, the fastest approach is still likely to be:

  • dump the DataFrame to a CSV file (or similar), and then

  • have Python call the SQL Server bcp utility to upload that file into the table (a rough sketch follows).
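Here is a rough sketch of that CSV-plus-bcp approach, assuming the bcp utility is installed and on the PATH, the target table already exists with columns matching the DataFrame, and no text field contains an embedded comma or newline (otherwise use a different field terminator or a bcp format file). The helper name bcp_upload and all connection details are illustrative placeholders, not part of the original answer.

import subprocess

def bcp_upload(df, table, server, database, user, password):
    # dump the DataFrame without header or index -- bcp loads raw rows
    csv_path = "bulk_upload.csv"
    df.to_csv(csv_path, index=False, header=False)
    # -c = character data, -t, = comma field terminator
    subprocess.run(
        ["bcp", f"{database}.dbo.{table}", "in", csv_path,
         "-S", server, "-U", user, "-P", password,
         "-c", "-t,"],
        check=True,
    )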

