Simplifying DBCC CHECKDB With NO_INFOMSGS and Minimizing the Consequences

MS SQL Server is a one-stop hub for some of the largest databases, which makes its smooth functioning all the more important. Hosting large databases also means that corruption and various errors can pop up, so you need strategies and commands to check for these hindrances. DBCC (Database Console Command) CHECKDB is one such command: it checks the logical and physical integrity of the entire database, making sure that everything is just fine, with no errors. Because DBCC CHECKDB helps detect corruption in the database, it should be run regularly. Under the hood it works by running several other DBCC commands, and you will typically run it as DBCC CHECKDB WITH NO_INFOMSGS, follow up with some extra verification, and, if needed, repair the SQL database that DBCC CHECKDB flags, so that you can stay tension-free.
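As a minimal illustration (the database name MyDatabase below is just a placeholder for your own database), a basic integrity check and its quieter NO_INFOMSGS variant look like this:

```sql
-- Full logical and physical integrity check (verbose output by default).
DBCC CHECKDB (N'MyDatabase');
GO

-- Same check, but informational messages are suppressed so that only
-- warnings and errors remain in the output.
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS;
GO
```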

When you run DBCC CHECKDB, it performs the following checks under the hood:

  • Runs DBCC CHECKALLOC on the database — checks the consistency of disk space and how it is allocated.
  • Runs DBCC CHECKTABLE on every table and view in the database — checks the integrity of all their pages and structures.
  • Runs DBCC CHECKCATALOG on the database for catalog consistency.
  • Validates link-level consistency between table metadata and file system directories and files (for FILESTREAM data).
  • Validates the Service Broker data in the database.
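For reference, the list above corresponds roughly to the individual commands that DBCC CHECKDB wraps. A minimal sketch, with placeholder database and table names you would replace with your own:

```sql
-- Consistency of disk-space allocation structures.
DBCC CHECKALLOC (N'MyDatabase') WITH NO_INFOMSGS;
GO

-- Integrity of the pages and structures of a single table or indexed view.
-- CHECKTABLE runs in the context of the current database.
USE MyDatabase;
DBCC CHECKTABLE (N'dbo.MyTable') WITH NO_INFOMSGS;
GO

-- Catalog consistency within the database.
DBCC CHECKCATALOG (N'MyDatabase') WITH NO_INFOMSGS;
GO
```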

Although these operations work just fine on their own, you can take a smarter approach to how you perform your CHECKDB operations in various situations and decrease the length of the DBCC process. So, to make sure a check on a larger database does not become too time-consuming, the following options are available.

Options available for DBCC CHECKDB

  1. NOINDEX — Specifies that intensive checks of nonclustered indexes on user tables should not be performed, leading to less execution time. NOINDEX does not affect system tables, because integrity checks are always performed on system table indexes.
  2. PHYSICAL_ONLY — Limits the checking to the physical structure of the pages and record headers and the allocation consistency of the database. This check is designed as a small-overhead way to detect torn pages, checksum failures, and common hardware failures that can compromise a user’s data.
  3. TABLOCK — Causes DBCC CHECKDB to obtain locks instead of using an internal database snapshot. TABLOCK can make DBCC CHECKDB run faster on a database under heavy load, but it reduces the concurrency available while the check runs.
  4. DATA_PURITY — Causes DBCC CHECKDB to check the database for column values that are not valid or are out of range. For example, DBCC CHECKDB finds columns with date and time values that are larger or smaller than the acceptable range.
  5. NO_INFOMSGS — Suppresses informational messages so you can focus only on the important ones.
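A hedged sketch of how these options are used in practice; the database name is a placeholder, and which combination you pick depends on your maintenance window and workload:

```sql
-- Lightweight physical-structure check for frequent runs on a busy server.
DBCC CHECKDB (N'MyDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;
GO

-- Skip nonclustered-index checks on user tables to shorten the run.
DBCC CHECKDB (N'MyDatabase', NOINDEX);
GO

-- Deep check that also validates column values are within range
-- (useful for databases upgraded from older versions of SQL Server).
DBCC CHECKDB (N'MyDatabase') WITH DATA_PURITY, NO_INFOMSGS;
GO

-- Take locks instead of an internal snapshot (reduces concurrency).
DBCC CHECKDB (N'MyDatabase') WITH TABLOCK, NO_INFOMSGS;
GO
```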

Important Note: When you perform a DBCC CHECKDB operation, by default you will be provided with a large amount of output, not all of which is useful. Moreover, under the pressure of removing corruption from your database, it is easy to miss some important information buried in that output. To counter this problem, use the following steps to minimize the consequences of DBCC CHECKDB.

Minimizing the Consequences of DBCC

  1. When running DBCC CHECKDB, always use the NO_INFOMSGS option: DBCC CHECKDB WITH NO_INFOMSGS filters out the irrelevant output that just tells you how many rows are in each table. That row-count information is available from dynamic management views anyway, and you don’t need it while DBCC is running. Lessening the output makes it far less likely that you’ll miss a critical message that could be of great help to you.
  2. Always use the ALL_ERRORMSGS option, especially if you are running SQL Server 2008 or SQL Server 2005, where the list of per-object errors is otherwise truncated to 200.
  3. Offload the checks in the DBCC command — to reduce the pressure on the production server while searching for corruption, it is better to offload some checks to another server. To do this, keep the following in mind (a combined sketch follows this list):
  • Make sure you are frequently testing your full backups by restoring them.
  • Run a full DBCC CHECKDB against the restored copy rather than the production database; a clean check there validates the backup and the data at the same time.
  • Run DBCC CHECKDB against your primary only occasionally, and/or with PHYSICAL_ONLY more often than not.
  • Never assume that checking a secondary copy is enough; your primary database can still have physical issues of its own.
  • Always analyze the DBCC output. Just running the check and ticking it off some list is about as helpful as taking backups without ever testing a restore; reviewing the output is what makes the process successful.
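A sketch of the offloading idea under stated assumptions: the backup path, file locations, logical file names, and database names below are hypothetical, and the restore is assumed to run on a separate server.

```sql
-- On a secondary/test server: restore the latest full backup...
RESTORE DATABASE MyDatabase_Check
FROM DISK = N'\\backupshare\MyDatabase_Full.bak'
WITH MOVE N'MyDatabase'     TO N'E:\Data\MyDatabase_Check.mdf',
     MOVE N'MyDatabase_log' TO N'F:\Log\MyDatabase_Check.ldf',
     REPLACE;
GO

-- ...then run the full check against the restored copy, with quiet output
-- and the complete per-object error list.
DBCC CHECKDB (N'MyDatabase_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
```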

4. Optimize tempdb — DBCC CHECKDB can make heavy use of tempdb, so making sure the right resources are available to it matters. You need to allocate space to tempdb in a proper manner and use an optimum number of data files.
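One way to size tempdb sensibly is to ask CHECKDB itself how much tempdb space it expects to need; a minimal sketch, with a placeholder database name:

```sql
-- Reports the estimated amount of tempdb space required to run
-- DBCC CHECKDB against the database, without performing the checks.
DBCC CHECKDB (N'MyDatabase') WITH ESTIMATEONLY;
GO
```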

5. Check the snapshot — recent versions of SQL Server will attempt to create a hidden database snapshot of your database before running the checks. You can’t control where this hidden snapshot is placed, but if you want to control where CHECKDB operates, create your own snapshot (Enterprise Edition required) on the specific drive of your choice and run the check against it.
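A minimal sketch of that approach, assuming Enterprise Edition; the snapshot name, logical file name, and drive path are placeholders:

```sql
-- Create a snapshot on a drive you choose (the logical name must match
-- the source database's data file).
CREATE DATABASE MyDatabase_CheckSnap
ON (NAME = N'MyDatabase', FILENAME = N'E:\Snapshots\MyDatabase_CheckSnap.ss')
AS SNAPSHOT OF MyDatabase;
GO

-- Run the integrity check against the snapshot instead of the live database.
DBCC CHECKDB (N'MyDatabase_CheckSnap') WITH NO_INFOMSGS;
GO

-- Drop the snapshot when the check is finished.
DROP DATABASE MyDatabase_CheckSnap;
GO
```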

Important Note: You may have seen suggestions to force CHECKDB to run in offline mode using the WITH TABLOCK option. This approach is not recommended if your database is actively being used; choosing this option will just make things complicated.

6. Reduce the impact on CPU — DBCC CHECKDB is CPU-intensive, so consider running it in a way that creates less of an effect on the CPU and reins in how much it runs in parallel. This can be done in a couple of different ways:

  • Use Resource Governor on SQL Server 2008 and above, as long as you are running Enterprise Edition. To target just DBCC commands, you’ll have to write a classifier function that can identify the sessions that will be performing this work (e.g. a specific login); a sketch of this setup follows.
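A hedged sketch of that Resource Governor setup; the pool, group, and login names are hypothetical, and the 20% CPU cap is just an example value:

```sql
USE master;
GO

-- A resource pool that caps CPU for maintenance work.
CREATE RESOURCE POOL DBCC_Pool WITH (MAX_CPU_PERCENT = 20);
GO

-- A workload group bound to that pool.
CREATE WORKLOAD GROUP DBCC_Group USING DBCC_Pool;
GO

-- Classifier: route sessions from a dedicated maintenance login
-- (hypothetical name) into the DBCC workload group.
CREATE FUNCTION dbo.fn_dbcc_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'dbcc_maintenance'
        RETURN N'DBCC_Group';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_dbcc_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```

Once this is in place, run your CHECKDB jobs under that dedicated login so they land in the capped pool.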

Apart from using these manual methods, you can also use a professional automated method that does all the work on its own (both backup and recovery). An advanced MS SQL Database Recovery Tool can repair the SQL database that DBCC CHECKDB reports as corrupt, handling the whole process for you.

Salient features are:-

  1. A user-friendly tool with an interactive GUI and a rich preview option to preview files after scanning.
  2. A reliable tool that recovers NDF as well as MDF files.
  3. Recovers all data, including file objects and table views, and supports row and page compression.
  4. Supports all SQL Server versions and is compatible with all Windows operating systems.
  5. Ability to save all data to another MS SQL database or as an MS SQL script.

Conclusion

This process can take a lot of time depending on the size of the MS SQL database, so it is fine to go with whichever option suits you best. Most important of all is to get all corrupted files cleared from the database in a fast and efficient manner, without any data loss.
