c# - The transaction log for database is full due to 'LOG_BACKUP'

We are doing a delete from a few tables in one database (say FirstDB), and will insert about 93,357 rows (one row at a time) into a log table in another database (say SecondDB).

The databases' recovery model is Full.

Each row contains 6 columns of data:

[DeleteTime] [datetime] NULL,
[FilePath] [varchar](255) NOT NULL,
[DocumentID] [int] NULL,
[AnotherCol] [varchar](50) NULL,
[AnotherCol2] [varchar](50) NULL,
[AnotherCol3] [varchar](50) NULL,

On a previous run, when we inserted 153,000 rows, we got the error "The transaction log for database 'SecondDB' is full due to 'LOG_BACKUP'".

What is the best way to avoid using a lot of space in the transaction log?

Should I commit the transaction, say, every 1,000 inserts into SecondDB? (A sketch of what that might look like follows the code below.)

This is the code that deletes from FirstDB and inserts into SecondDB:

using (SqlConnection con = new SqlConnection(connectionString))
{
    con.Open();

    // Load the documents to process.
    SqlDataAdapter da = new SqlDataAdapter("spGetDocuments", con);
    da.SelectCommand.CommandType = CommandType.StoredProcedure;
    DataSet ds = new DataSet();
    da.Fill(ds);

    foreach (DataRow aRow in ds.Tables[0].Rows)
    {
        try
        {
            // Code to delete from FirstDB (elided)

            // Code to log the deletion in SecondDB
            // (assumption: sDocumentPath is taken from the row's FilePath column)
            string sDocumentPath = Convert.ToString(aRow["FilePath"]);

            using (SqlCommand cmdSecondDB = new SqlCommand("spUpdateDeleteDocsLog", con))
            {
                cmdSecondDB.CommandType = CommandType.StoredProcedure;
                cmdSecondDB.Parameters.Add(new SqlParameter("FilePath", sDocumentPath));
                cmdSecondDB.Parameters.Add(new SqlParameter("DocumentID", aRow["DocumentID"]));
                // ... remaining parameters elided in the question ...
                int iRow = cmdSecondDB.ExecuteNonQuery();
            }
        }
        catch (SqlException)
        {
            // Error handling elided in the question; log and continue with the next row.
        }
    }

    ds.Dispose();
    da.Dispose();
}
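For reference on the "commit every 1,000 rows" idea, a batched variant might look roughly like the sketch below. spUpdateDeleteDocsLog and the DocumentID parameter come from the question; the batch size, the wrapping SqlTransaction, and the method name are assumptions for illustration. (As the reply below notes, this does not reduce the total amount of log written under the Full recovery model.)

// Sketch: commit the SecondDB log inserts in batches of 1,000 rows.
// Requires: using System.Data; using System.Data.SqlClient;
// InsertLogRowsInBatches and batchSize are illustrative names, not from the question.
static void InsertLogRowsInBatches(DataTable rows, string connectionString, int batchSize = 1000)
{
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        SqlTransaction tx = con.BeginTransaction();
        int rowsInBatch = 0;

        foreach (DataRow aRow in rows.Rows)
        {
            using (var cmd = new SqlCommand("spUpdateDeleteDocsLog", con, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(new SqlParameter("DocumentID", aRow["DocumentID"]));
                // ... remaining parameters as in the question ...
                cmd.ExecuteNonQuery();
            }

            if (++rowsInBatch == batchSize)
            {
                tx.Commit();                  // end the current batch
                tx = con.BeginTransaction();  // start the next batch
                rowsInBatch = 0;
            }
        }

        tx.Commit(); // commit any rows left in the final, partial batch
    }
}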

Also, is there a way to calculate how much transaction log will be taken by 93,357 rows of the above 6 columns of data?
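As a very rough upper bound, the row data is at most about 8 (datetime) + 255 + 4 (int) + 3 × 50 ≈ 417 bytes, and each INSERT log record adds its own overhead, so 93,357 rows should be on the order of tens of MB of log before index maintenance is counted. Measuring on a test copy is more reliable; a minimal sketch, assuming SQL Server 2012+ for sys.dm_db_log_space_usage and a hypothetical connection string:

// Measures how much transaction log one insert run consumes in SecondDB.
// Assumes SQL Server 2012+ (sys.dm_db_log_space_usage) and a hypothetical connection string.
using System;
using System.Data.SqlClient;

class LogUsageCheck
{
    static long GetUsedLogBytes(SqlConnection con)
    {
        using (var cmd = new SqlCommand(
            "SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage;", con))
        {
            return Convert.ToInt64(cmd.ExecuteScalar());
        }
    }

    static void Main()
    {
        // Assumption: adjust server/credentials to your test environment.
        string connectionString = "Server=.;Database=SecondDB;Integrated Security=true;";

        using (var con = new SqlConnection(connectionString))
        {
            con.Open();
            long before = GetUsedLogBytes(con);

            // ... run the 93,357-row insert here ...

            long after = GetUsedLogBytes(con);
            Console.WriteLine($"Log used by the run: {(after - before) / 1024.0 / 1024.0:F1} MB");
        }
    }
}

The delta is only meaningful if no log backup or log truncation runs during the test, and it excludes whatever the deletes in FirstDB write to that database's own log.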

Thank you

question from: https://stackoverflow.com/questions/65927351/the-transaction-log-for-database-is-full-due-to-log-backup


1 Reply


"The databases' recovery model is Full."

Then the only thing that matters is the size of your log file and the frequency of your log backups. Under the Full recovery model, batching doesn't reduce the amount of log that is written, and it doesn't allow the log space to be reused before the next transaction log backup. So either take log backups often enough (or run one just before the large delete/insert), or make the log file big enough to hold the whole run between backups.
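For completeness, the log backup itself is plain T-SQL (BACKUP LOG SecondDB TO DISK = '...'). Below is a minimal sketch of issuing it from C#, with an assumed connection string and backup path; in practice this is usually a SQL Agent job or maintenance plan rather than application code.

// Sketch: take a transaction log backup of SecondDB so its log space can be reused.
// The connection string and backup path are assumptions.
using System;
using System.Data.SqlClient;

class BackupSecondDbLog
{
    static void Main()
    {
        string connectionString = "Server=.;Database=master;Integrated Security=true;"; // assumption
        string backupPath = @"D:\Backups\SecondDB_log.trn";                              // assumption

        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("BACKUP LOG SecondDB TO DISK = @path;", con))
        {
            cmd.Parameters.AddWithValue("@path", backupPath);
            cmd.CommandTimeout = 0; // a large log backup can take a while
            con.Open();
            cmd.ExecuteNonQuery();
            Console.WriteLine("Transaction log backup of SecondDB completed.");
        }
    }
}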

