Our team is using Microsoft SQL Server, accessed using Entity Framework Core.
We have a table that is expected to hold 5-40 million records, which we want to optimize for high-velocity record creation, reads, and updates.
Each record is small (a rough entity sketch follows this list):
- 5 integer columns (one of which is the indexed primary key)
- 3 bit columns
- 1 datetime column
- 2 varchar(128) columns, substantially larger than the other columns
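For concreteness, here is a minimal sketch of what such an entity might look like in EF Core with a single-table layout; the class and property names are hypothetical, since the question does not include the actual schema:

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;

// Hypothetical single-table entity matching the column counts above.
public class Record
{
    public int Id { get; set; }                // integer primary key (indexed)
    public int ValueA { get; set; }            // four more integer columns
    public int ValueB { get; set; }
    public int ValueC { get; set; }
    public int ValueD { get; set; }
    public bool FlagA { get; set; }            // three bit columns
    public bool FlagB { get; set; }
    public bool FlagC { get; set; }
    public DateTime CreatedAt { get; set; }    // one datetime column

    // The two larger columns: written on create, read rarely, never updated.
    [Column(TypeName = "varchar(128)")]
    public string NoteA { get; set; }
    [Column(TypeName = "varchar(128)")]
    public string NoteB { get; set; }
}
```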
The two varchar columns are populated during creation, but used in only a tiny minority of subsequent reads, and never updated. Assume 10 reads and 4 updates per create.
Our question is: does it improve performance to move these larger columns into a separate table (imposing a join penalty on create and on the tiny minority of reads that need them), versus keeping a single table and writing two stored procedures: one that retrieves only the non-varchar columns for the majority of queries, and one that retrieves all columns when required?
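To illustrate the first alternative, here is a rough sketch of the split-table design mapped as a shared-primary-key one-to-one relationship in EF Core; all names are hypothetical and this is only one way such a mapping could be configured:

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;
using Microsoft.EntityFrameworkCore;

// "Hot" table: the small columns that every create/read/update touches.
public class RecordCore
{
    public int Id { get; set; }                // primary key
    public int ValueA { get; set; }            // remaining small columns elided for brevity
    public bool FlagA { get; set; }
    public DateTime CreatedAt { get; set; }
    public RecordText Text { get; set; }       // navigation; loaded only on the rare wide reads
}

// "Cold" table: the two varchar(128) columns, sharing the same primary key.
public class RecordText
{
    public int Id { get; set; }                // primary key and foreign key to RecordCore
    [Column(TypeName = "varchar(128)")]
    public string NoteA { get; set; }
    [Column(TypeName = "varchar(128)")]
    public string NoteB { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<RecordCore> Records { get; set; }
    public DbSet<RecordText> RecordTexts { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Shared-PK one-to-one: RecordText.Id is both its key and the FK to RecordCore.
        modelBuilder.Entity<RecordCore>()
            .HasOne(c => c.Text)
            .WithOne()
            .HasForeignKey<RecordText>(t => t.Id);
    }
}
```

With this shape the majority read path never touches RecordText (no join); the single-table alternative can achieve a similar narrow read either through the stored procedure described above or through a LINQ projection that selects only the small columns.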
Put another way: how much does individual record size affect SQL performance?
question from: https://stackoverflow.com/questions/65837073/does-record-size-affect-sql-performance