This article helps you resolve the problem where OS errors 1450 and 665 are reported for database files while you're executing DBCC CHECKDB, creating a database snapshot, or growing a database file.
Original product version: SQL Server
Original KB number: 2002606
Assume that you perform one of the following actions on a computer that's running SQL Server:

You run the DBCC CHECKDB family of commands to check the consistency of a large database, and you also perform a large number of data changes in that database.

You create a database snapshot of a large database, and then you perform a large number of data changes in the source database.

Note: SQL Server uses sparse files for these operations: database snapshot and DBCC CHECKDB.
As a result of these operations, you might notice one or more of the following errors reported in the SQL Server error log, depending on the environment that SQL Server is running in:
The operating system returned error 665 (The requested operation could not be completed due to a file system limitation) to SQL Server during a write at offset 0x00002a3ef96000 in file 'Sam.mdf:MSSQL_DBCC18'
The operating system returned error 1450 (Insufficient system resources exist to complete the requested service.) to SQL Server during a write at offset 0x00002a3ef96000 in file with handle 0x0000000000000D5C. This is usually a temporary condition and the SQL Server will keep retrying the operation. If the condition persists, then immediate action must be taken to correct it.
In addition to these errors, you may also notice the following Latch Timeout errors:
Timeout occurred while waiting for latch: class 'DBCC_MULTIOBJECT_SCANNER', id 000000002C61DF40, type 4, Task 0x00000000038089B8 : 16, waittime 600, flags 0x1a, owning task 0x0000000006A09828. Continuing to wait.
Timeout occurred while waiting for latch: class 'ACCESS_METHODS_HOBT_COUNT', id 000000002C61DF40, type 4, Task 0x00000000038089B8 : 16, waittime 600, flags 0x1a, owning task 0x0000000006A09828. Continuing to wait.
Additionally, you might notice blocking when you view various dynamic management views (DMVs), such as sys.dm_exec_requests and sys.dm_os_waiting_tasks.
In rare cases, you may observe a non-yielding scheduler issue reported in the SQL Server error log, and SQL Server may generate a memory dump.
Cause

This problem occurs if a large number of ATTRIBUTE_LIST_ENTRY instances are needed to maintain a heavily fragmented file in NTFS. If the space is next to a cluster that's already tracked by the file system, then the attributes are compressed into a single entry. However, if the space is fragmented, it has to be tracked with multiple attributes. Thus, heavy file fragmentation can lead to attribute exhaustion and the resulting 665 error. This behavior is explained in the following KB article: A heavily fragmented file in an NTFS volume may not grow beyond a certain size.
Both regular and sparse files created by SQL Server or other applications can become fragmented to these levels when large amounts of data modifications occur over the lifetime of these files.
If you perform database backups across a stripe set of files that are all located on the same volume, or if you're bulk copying (BCP) data out to multiple files on the same volume, the writes may end up in adjacent locations but belong to different files. For example, one stream writes to offsets 201 through 400, another stream writes to offsets 401 through 600, a third writes to offsets 601 through 800, and so on for the remaining streams. This interleaving leads to file fragmentation on the same physical media, and each of the backup files or BCP output streams can exhaust the attribute storage because none of them gets adjacent storage.
For a complete background of how SQL Server Engine uses NTFS sparse files and alternate data streams, see More information.
Resolution

Consider using one or more of the following options to resolve this issue:
Place the database files on a Resilient File System (ReFS) volume, which doesn't have the same ATTRIBUTE_LIST_ENTRY limits that NTFS presents. If you want to use the current NTFS volume, you must reformat it by using ReFS after moving your database files elsewhere temporarily. Using ReFS is the best long-term solution to deal with this issue.
De-fragment the volume where the database files reside. Be sure your defragmentation utility is transactional. For more information on defragmenting drives where SQL Server files reside, see Precautions when you defragment SQL Server database drives and Recommendations. You must shut down SQL Server to perform this operation on the files. We recommend that you create full database backups before you defragment the files as a safety measure. Defragmentation works differently on solid-state drives (SSD) media and typically doesn't address the problem. Copying the file(s) and allowing the SSD firmware to repack the physical storage is often a better solution. For more information, see Operating System Error (665 - File System Limitation) Not just for DBCC Anymore.
File copy - performing a copy of the file may allow better space acquisition because the bytes might be tightly packed together in the process. Copying the file (or moving it to a different volume) may reduce attribute usage and may prevent the OS error 665. Copy one or more of the database files to another drive. Then, you may leave the file on the new volume or copy it back to the original volume.
Format the NTFS volume by using the /L option to obtain a large FRS. This choice may provide relief because it makes the ATTRIBUTE_LIST_ENTRY storage larger. It might not be helpful when you use DBCC CHECKDB, because that command creates a sparse file for the database snapshot.

Note: For systems running Windows Server 2008 R2 and Windows Vista, you first need to apply the hotfix from KB article 967351 before using the /L option with the format command.
Break up a large database into smaller files. For example, if you have one 8-TB data file, you can break it up into eight 1-TB data files. This option might help because fewer modifications happen on each smaller file, so attribute exhaustion is less likely. Also, as the data is moved, the files are packed compactly and fragmentation is reduced. At a high level, the process is to add new, smaller files to the same filegroup, move the data out of the large file into them, and then remove or shrink the original file.
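The following T-SQL is only a sketch of that process; the database name (Sam), logical file names, filegroup, paths, and sizes are all hypothetical, and it assumes the large file is a secondary data file (a primary data file can't be fully emptied).

```sql
-- Sketch only: all names, paths, and sizes are hypothetical.
-- Add smaller files to the filegroup that holds the large file.
ALTER DATABASE Sam
ADD FILE
    (NAME = N'Sam_Data3', FILENAME = N'E:\SQLData\Sam_Data3.ndf', SIZE = 1TB, FILEGROWTH = 64GB),
    (NAME = N'Sam_Data4', FILENAME = N'F:\SQLData\Sam_Data4.ndf', SIZE = 1TB, FILEGROWTH = 64GB)
TO FILEGROUP UserData;

-- Migrate the data out of the large secondary file into the other files
-- in the same filegroup.
DBCC SHRINKFILE (N'Sam_Data2', EMPTYFILE);

-- After the file is empty, remove it from the database.
ALTER DATABASE Sam REMOVE FILE Sam_Data2;
```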
Database Auto-grow setting: Adjust the auto growth increment database setting to acquire sizes conducive to production performance and packing of NTFS attributes. The less frequent the auto growth occurrences and the larger the growth increment size, the less likely the possibility of file fragmentation.
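For example, a minimal sketch that sets a fixed, larger growth increment on a data file (the database name, logical file name, and the 4-GB value are assumptions; pick an increment that suits your workload):

```sql
-- Sketch only: database name, logical file name, and increment are hypothetical.
-- A larger, fixed growth increment means fewer growth events,
-- each acquiring a bigger, more contiguous chunk of space.
ALTER DATABASE Sam
MODIFY FILE (NAME = N'Sam', FILEGROWTH = 4GB);
```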
Reduce the lifetime of DBCC CHECK commands by using the performance enhancements, and thus avoid the 665 errors: Improvements for the DBCC CHECKDB command may result in faster performance when you use the PHYSICAL_ONLY option. Keep in mind that running DBCC CHECKDB with the PHYSICAL_ONLY switch doesn't guarantee that you'll avoid this error, but it reduces the likelihood in many cases.
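For example (the database name is a placeholder):

```sql
-- Sketch only: replace Sam with your database name.
-- PHYSICAL_ONLY shortens the check, which shrinks the window during which
-- the internal snapshot's sparse file can accumulate fragmented writes.
DBCC CHECKDB (N'Sam') WITH PHYSICAL_ONLY;
```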
If you're performing database backups across many files (a stripe set), carefully plan the number of files, MAXTRANSFERSIZE, and BLOCKSIZE (see BACKUP). The goal is to reduce the number of fragments on the file system, which is generally accomplished by writing larger byte chunks together to a file. You might also consider striping the files across multiple volumes for faster performance and reduced fragmentation.
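As an illustration only, the following striped backup writes larger chunks and places each stripe on a different volume; the paths and the MAXTRANSFERSIZE and BLOCKSIZE values are assumptions to be tested in your environment, not universal recommendations:

```sql
-- Sketch only: paths and values are hypothetical.
-- Larger transfer and block sizes write bigger chunks per stripe,
-- and separate volumes avoid interleaved writes on a single volume.
BACKUP DATABASE Sam
TO  DISK = N'G:\Backups\Sam_1.bak',
    DISK = N'H:\Backups\Sam_2.bak'
WITH MAXTRANSFERSIZE = 4194304,  -- 4 MB
     BLOCKSIZE = 65536;          -- 64 KB
```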
If you're using BCP to write multiple files simultaneously, adjust the utility's write sizes; for example, increase the BCP batch size. Also, consider writing the streams to different volumes to avoid fragmentation, or reduce the number of parallel writes.
To execute DBCC CHECKDB, you can consider setting up an Availability Group or a log shipping/standby server, or use a second server where you can run the DBCC CHECKDB commands to offload the work and avoid the issues caused by heavy fragmentation of sparse files.
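One way to do that, sketched here with hypothetical database, file, and path names, is to restore the latest full backup on the second server and run the check against the restored copy:

```sql
-- Sketch only: database, file, and path names are hypothetical.
-- Run this on the secondary server: restore the most recent full backup...
RESTORE DATABASE Sam
FROM DISK = N'\\BackupShare\Sam_Full.bak'
WITH MOVE N'Sam'     TO N'E:\SQLData\Sam.mdf',
     MOVE N'Sam_log' TO N'E:\SQLLogs\Sam_log.ldf',
     RECOVERY;

-- ...and then check consistency there instead of on the production instance.
DBCC CHECKDB (N'Sam') WITH NO_INFOMSGS;
```

A clean result validates the restored copy, and therefore the data as of the time the backup was taken.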
When you execute DBCC CHECKDB, if you run the command at a time when there's little activity on the database server, the sparse files are lightly populated. Fewer writes to the files reduce the likelihood of exhausting NTFS attributes. Less activity is another reason to run DBCC CHECKDB on a second read-only server, when possible.
If you're running SQL Server 2014, upgrade to the latest Service Pack. For more information, see FIX: OS error 665 when you execute DBCC CHECKDB command for database that contains columnstore index in SQL Server 2014.