We are deleting from a few tables in one database (say FirstDB) and will insert about 93,357 rows (one row at a time) into one Log table in another database (say SecondDB).
Both databases use the Full recovery model.
Each row contains six columns of data:
[DeleteTime] [datetime] NULL,
[FilePath] [varchar](255) NOT NULL,
[DocumentID] [int] NULL,
[AnotherCol] [varchar](50) NULL,
[AnotherCol2] [varchar](50) NULL,
[AnotherCol3] [varchar](50) NULL,
On a previous run, when we inserted 153,000 rows, we got the error "The transaction log for database 'SecondDB' is full due to 'LOG_BACKUP'".
What is the best way to avoid using a lot of space in the transaction log?
Should I commit a transaction after, say, every 1,000 inserts into SecondDB (something like the sketch after the code below)?
This is the code that deletes from FirstDB and inserts into SecondDB:
// using System.Data; using System.Data.SqlClient;
using (SqlConnection con = new SqlConnection(connectionString))
{
    try
    {
        con.Open();
        SqlDataAdapter da = new SqlDataAdapter("spGetDocuments", con);
        da.SelectCommand.CommandType = CommandType.StoredProcedure;
        DataSet ds = new DataSet();
        da.Fill(ds);
        foreach (DataRow aRow in ds.Tables[0].Rows)
        {
            try
            {
                // Code to delete from FirstDB
                // Code to insert into SecondDB
                SqlCommand cmdSecondDB = new SqlCommand("spUpdateDeleteDocsLog", con);
                cmdSecondDB.CommandType = CommandType.StoredProcedure;
                cmdSecondDB.Parameters.Add(new SqlParameter("FilePath", sDocumentPath));
                cmdSecondDB.Parameters.Add(new SqlParameter("DocumentID", aRow["DocumentID"]));
                // ... (remaining parameters elided)
                int iRow = cmdSecondDB.ExecuteNonQuery();
                cmdSecondDB.Dispose();
                cmdSecondDB = null;
            }
            catch (SqlException)
            {
                // log the failed row and continue with the next one
            }
        }
        ds.Dispose();
        ds = null;
        da.Dispose();
        da = null;
        con.Close();
    }
    catch (SqlException)
    {
        // handle connection/fill failures
    }
}
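For the every-1,000-rows idea, this is roughly what I have in mind (a minimal sketch, not tested; it reuses ds, sDocumentPath, and spUpdateDeleteDocsLog from the code above, and the batch size of 1,000 is arbitrary):

// Sketch: commit an explicit transaction every BatchSize rows instead of
// autocommitting each insert, so no single huge transaction pins the log.
const int BatchSize = 1000;           // arbitrary; would need tuning
int pending = 0;

using (SqlConnection con = new SqlConnection(connectionString))
{
    con.Open();
    SqlTransaction tx = con.BeginTransaction();
    try
    {
        foreach (DataRow aRow in ds.Tables[0].Rows)
        {
            using (SqlCommand cmd = new SqlCommand("spUpdateDeleteDocsLog", con, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(new SqlParameter("FilePath", sDocumentPath));
                cmd.Parameters.Add(new SqlParameter("DocumentID", aRow["DocumentID"]));
                // ... (remaining parameters elided)
                cmd.ExecuteNonQuery();
            }

            if (++pending >= BatchSize)
            {
                tx.Commit();                 // end this batch so its log space can be reused
                tx = con.BeginTransaction(); // start the next batch
                pending = 0;
            }
        }
        tx.Commit();                         // commit the last partial batch
    }
    catch
    {
        tx.Rollback();                       // only the current batch is rolled back
        throw;
    }
}

From the error text ('LOG_BACKUP'), I understand that under the Full recovery model the log only truncates after a log backup, so smaller transactions alone may not help if no log backup runs during the job. Is that correct?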
Also, is there a way to estimate how much transaction log space 93,357 rows of the above six columns will take?
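My own back-of-envelope attempt, in case it helps frame an answer (all of the average lengths and overhead figures below are assumptions, not measurements):

// Rough per-row estimate for the six columns above.
int fixedData   = 8 + 4;             // datetime DeleteTime + int DocumentID
int filePathAvg = 100;               // assumed average fill of varchar(255)
int otherAvg    = 3 * 20;            // assumed average fill of the three varchar(50)s
int rowOverhead = 7 + 1 + 2 + 4 * 2; // row header, null bitmap, variable-offset array (approx.)
int rowBytes    = fixedData + filePathAvg + otherAvg + rowOverhead;  // ~190 bytes

// With one autocommit transaction per insert, each row also logs
// begin/commit records and log-record headers; ~100 bytes is a guess.
long logPerRow = rowBytes + 100;
long totalLog  = 93357L * logPerRow; // ~27 MB, before any index or page-split overhead

Rather than trusting the arithmetic, would it be better to insert a sample batch and read the actual usage with DBCC SQLPERF(LOGSPACE) or sys.dm_db_log_space_usage?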
Thank you