SQL Server Tacklebox - P15


SQL Server Tacklebox - P15: This book, as with almost all books, started out as an idea. I wanted to gather together those scripts and tools that I have built over the years, so that DBAs could sort through them and perhaps adapt them for their own circumstances.

We are loading data from the text file back into the SQL_Conn table, which currently holds 58K rows. The -h "TABLOCK" hint forces a lock on the receiving table; this is one of the requirements to guarantee that the transactions are minimally logged. The -b option tells BCP to batch the transactions at n rows, in this case every 50,000 rows. If there are any issues during the BCP load process, any rollback that occurs will only roll back to the last committed batch. So, say I wanted to load 100,000 records and I batched the BCP load every 20,000 records: if there were an issue while loading record 81,002, I would know that 80,000 records were successfully imported. I would lose 1,002 records, as they would roll back to the last 20,000 mark, which would be 80,000 records.

The batch file takes one parameter, which is the number of times to run the BCP command in order to load the required number of rows into the table. How did I choose 20 iterations? Simple math: 20 x 58,040 = 1,160,800 records. As you can see in Figure 3.2, this is exactly the number of rows now in the SQL_Conn table after 20 iterations of the BCP command, using the 58,040 records in the f1_out.txt file as the source. (A sketch of such a batch file follows at the end of this excerpt.)

[Figure 3.2: Query to count SQL_Conn after loading over 1 million records. The screenshot shows SELECT COUNT(*) FROM SQL_Conn returning 1,160,800.]

NOTE: For what it is worth, I have also used this batch file to load a terabyte's worth of data, to test how we could effectively manage such a large data store.

If you re-run the BCP command in Listing 3.2 to output the query results to a file, you will find that the process takes more than a minute for a million rows, as opposed to the previous 3 seconds for 58K rows. The time to output the records remains good: 58,040 rows / 3 seconds is roughly 19,346 records per second, which works out to 60-70 seconds for the 1.16 million rows. I am still seeing nearly 20,000 records per second despite the increase in data volume, attesting to…
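The book's actual batch file is not reproduced in this excerpt, so what follows is only a minimal sketch of what such a loop could look like, assuming a Windows .cmd script. The table name SQL_Conn, the source file f1_out.txt, and the -b and -h "TABLOCK" options come from the text above; the server name MyServer, the database name DBA_Rep, and the -c (character format) and -T (trusted connection) flags are illustrative assumptions, not the author's script.

    @echo off
    rem load_sql_conn.cmd -- sketch only; names not given in the excerpt are assumed
    rem Usage: load_sql_conn.cmd <iterations>  (e.g. 20 iterations x 58,040 rows = 1,160,800 rows)
    set /a count=0
    :loop
    set /a count+=1
    rem -b 50000      commit a batch every 50,000 rows, so a failure rolls back only the open batch
    rem -h "TABLOCK"  lock the receiving table, one requirement for minimally logged loads
    rem -c -T -S      character format, trusted connection, server name (assumed flags)
    bcp DBA_Rep.dbo.SQL_Conn in f1_out.txt -c -T -S MyServer -b 50000 -h "TABLOCK"
    if %count% lss %1 goto loop

Running load_sql_conn.cmd 20 would reproduce the 20 iterations described above. Note that the -b batch size and the iteration count are independent knobs: the first controls rollback granularity within a single BCP run, the second controls the total volume loaded.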
