How Does SQL Defrag Manager Defragment Indexes?

Based on the settings you select, SQL Defrag Manager defragments tables and indexes in one of the following ways:


The rebuild defragmentation type uses the DBCC DBREINDEX command to rebuild the indexes on the tables. The rebuild operation creates new, contiguous pages. SQL Server 2005/2008 offers the option to Rebuild Online, which permits access to the tables while the operation completes.
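As a sketch, the same kind of rebuild can be issued manually in T-SQL; the table and index names here are hypothetical:

```sql
-- Legacy syntax used by the tool (deprecated in later SQL Server versions)
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerID', 90);

-- Equivalent modern syntax; ONLINE = ON keeps the table accessible
-- during the rebuild (Enterprise Edition, SQL Server 2005 and later)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (ONLINE = ON, FILLFACTOR = 90);
```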


The reorganize defragmentation type uses the DBCC INDEXDEFRAG command to reorder the leaf pages of the index in place. This procedure is similar to a bubble sort. Although the pages are physically reordered, they may not be contiguous within the data file. This condition can cause interleaved indexes, which must be rebuilt to store them in contiguous pages.
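For comparison, a manual reorganize looks like this in T-SQL (hypothetical table and index names again):

```sql
-- Legacy in-place leaf-level reorder (deprecated in later versions)
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_CustomerID');

-- Modern equivalent; always runs online and can be safely interrupted
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
```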

Defragmenting an Index Example

Consider a simplified example of pages after several inserts, updates, and deletes, as shown in the following figure. The page numbering represents the logical sequence of the pages. However, the physical sequence, as shown in the figure from left to right, does not match the logical sequence.

How SQL Defrag Manager Compacts Data

In addition to reordering the leaf pages of the index, SQL Defrag Manager compacts the data in the pages using the original fill factor value specified for the table, and then removes any pages that are empty. Consider the following conditions related to this compaction phase:

  • Compaction is skipped entirely if the Inhibit Page Locks property is set for the index.
  • Several optimizations are built into the compaction phase to avoid unnecessary work. For example, if the first page in the index is empty and the rest of the pages are full, SQL Server does not move all of the data forward one page.
  • SQL Server compacts pages back to the fill factor value defined for the index. Make sure this value is not set too high. For more information, see the SQL Server documentation.
  • If a lock cannot be obtained on a page during the compaction phase of DBCC INDEXDEFRAG, SQL Server skips that page.

Regarding Interleaved Indexes

Interleaving occurs when an index extent, which is a set of eight index pages, is not physically contiguous because an extent for another index is intermingled with it. This condition can happen even when there is no logical fragmentation in the index. Although the pages may be physically and logically ordered, they are not necessarily contiguous. Switching between extents can impact performance because data access is inefficient. To resolve this issue, use SQL Defrag Manager to rebuild the indexes, which stores them in contiguous pages and reduces the need to switch between extents.

How to Find and Fix SQL Fragmentation

What is Fragmentation?

As data is modified in a database, the database and its indexes become fragmented. As indexes become fragmented, ordered data retrieval becomes less efficient and reduces database performance.

Understanding the Different Kinds of Fragmentation

There are several kinds of fragmentation that can occur and affect SQL Server performance and space use. Note that logical order and page density issues exist on indexes and tables within SQL Server. These issues cannot be resolved by operating system level defragmentation tools because the fragmentation exists within the files, rather than at the file level itself.

File fragmentation at the operating system level

When deletes and inserts are performed over time, files become fragmented as the physical sequence of data pages no longer matches their logical order. This fragmentation occurs at the file allocation level and can be addressed with operating system tools. If you have a small- to medium-sized system and do not have a SAN, you should run an operating system defragmentation tool before addressing logical order and page density fragmentation within SQL Server.

Logical order fragmentation

This issue, also called external fragmentation within SQL Server, is similar to file fragmentation at the operating system level. When data is deleted, inserted, and modified over time, an index can cause pages to fall out of order, where the next logical page is not the same as the next physical page.

Page density fragmentation

This issue, also called internal fragmentation, happens as pages split to make room for data added to a page; excess free space can be left on the pages. This excess space can cause SQL Server to read more pages than necessary to perform certain tasks. SQL Defrag Manager defragments the leaf level of the index so that the physical order of the pages matches the left-to-right logical order of the leaf pages. This process improves index scan performance and all data retrieval actions.
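Both kinds of fragmentation can be inspected with the sys.dm_db_index_physical_stats dynamic management view; the following query is a sketch, run in the context of the database you want to check:

```sql
-- avg_fragmentation_in_percent   -> logical order (external) fragmentation
-- avg_page_space_used_in_percent -> page density (internal) fragmentation
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.avg_fragmentation_in_percent,
       s.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.index_id > 0
ORDER BY s.avg_fragmentation_in_percent DESC;
```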

Fragmentation Examples

The data is ordered and the pages are full, as shown in the following figure. Because the target page is full enough that the new row does not fit, SQL Server splits the page roughly in half and inserts the new data on the new page, as shown in the next figure. Now the logical order of the index does not match the physical order, and the index is fragmented.


How to Measure Data with PowerShell

Working with PowerShell, you can do pretty much anything, and one of those things happens to be examining a large amount of data to measure various metrics. Maybe you want to find the total count of all the data you are looking at, or want to work out the average value of some column, and don't really want to load the data into an Excel spreadsheet to run several formulas. PowerShell can do the job for you without breaking a sweat.

Using a CSV file containing Iowa crash data, we can examine the data and produce some statistical information just by using PowerShell's built-in commands.

The CSV file is close to 34MB in size, so it may take a short time to download. The following command simply returns a count representing the number of rows that contain data.

Import-Csv .\Crash_Data.csv | Measure-Object

We can see that there are close to 165,000 rows of data in this CSV file. This should make for a great dataset for determining how often weather might have been a factor, among other things. Let's take a look at the first row of data to better understand what we are looking at.

$CrashData = Import-Csv .\Crash_Data.csv
  $CrashData | Select-Object -First 1 

That's a great deal of information for only a single row. Also, some of the columns might not be that understandable, and many of the values are numbers. Usually this implies that somewhere there is a hash table or key-to-value collection that can translate each of the numbers into a human-readable format. This is typical when storing data in a database: a number is stored that maps back to a human-readable value elsewhere.

After a little bit of hunting, I found the missing piece of the puzzle that describes each column as well as the expected lookups for all of the numeric values. The site that hosts this data fortunately returns the information as JSON, meaning I can use Invoke-RestMethod to pull down the data and browse it more easily than if I were just viewing the web page itself.

$Data = Invoke-RestMethod -Uri  ''
$Data.fields | Select-Object -Property Name, alias, Domain

Now we know what the column names are by looking at the alias. When looking at the domain, we can step into each of those hash tables to find out the numeric lookup values for items such as the weather.

($Data.fields | Where-Object {$_.name -eq 'weather'}).Domain.CodedValues

name                         code
----                         ----
Clear                           1
Cloudy                          2
Fog, smoke, smog                3
Freezing rain/drizzle           4
Rain                            5
Sleet, hail                     6
Snow                            7
Blowing Snow                    8
Severe Winds                    9
Blowing sand, soil, dirt       10
Other (explain in narrative)   98
Unknown                        99
Not Reported                   77

With this knowledge, we now know exactly which weather conditions were reported at the time of each crash. Before we use it, let's run some measurements using Measure-Object to get an idea of averages for things like property damage and anything else that stands out.

#Property Damage Average and Max 
  $CrashData | Measure-Object -Property PROPDMG  -Average -Maximum

We see that the average amount per crash was $5155.02 while there was one crash where the property damage was $4,851,387! That is quite a bit of damage being done. That was pretty interesting, but I wonder how the weather played a factor in some of these crashes.

#Weather Related
$WeatherHash = @{}
($Data.fields | Where-Object {$_.name -eq 'weather'}).Domain.CodedValues | ForEach-Object {
    $WeatherHash.Add($_.code, $_.name)
}
$CrashData | Group-Object -Property WEATHER |
    Select-Object -Property Count, @{L='Name';E={$WeatherHash[([int]$_.Name)]}} |
    Sort-Object -Property Count -Descending

Here I made use of the JSON data to put each weather code and display name into a hash table for an easier lookup. I also used Group-Object to group everything together and then sorted by count to show the most common conditions at the time of a crash down to the least common. Weather is one thing, but I want to know if there were other environmental conditions reported at the time of the crash.

#Environmental Conditions
$ECNTCRC = @{}
($Data.fields | Where-Object {$_.name -eq 'ECNTCRC'}).Domain.CodedValues | ForEach-Object {
    $ECNTCRC.Add($_.code, $_.name)
}
$CrashData | Group-Object -Property ECNTCRC |
    Select-Object -Property Count, @{L='Name';E={$ECNTCRC[([int]$_.Name)]}} |
    Sort-Object -Property Count -Descending

It appears that the majority of accidents had no environmental conditions reported, with weather being the second most common condition, followed by nothing being reported, and animals on the road as the fourth most reported reason.

There is a wealth of information here that we can dig into, and using PowerShell we are able to better visualize the data with Group-Object and Measure-Object. I've only scratched the surface with this data; we could continue to dig deeper by taking the grouped data, picking the most common weather condition (clear skies in this case), determining other factors such as the major cause of the crash, taking the top hit from that, and determining which city had the most crashes with that major cause.

#Top Weather/City/Reason
$WeatherHash = @{}
($Data.fields | Where-Object {$_.name -eq 'weather'}).Domain.CodedValues | ForEach-Object {
    $WeatherHash.Add($_.code, $_.name)
}
#Major Cause
$MAJCSEHash = @{}
($Data.fields | Where-Object {$_.name -eq 'MAJCSE'}).Domain.CodedValues | ForEach-Object {
    $MAJCSEHash.Add($_.code, $_.name)
}
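The remaining pipeline can be sketched as below; this assumes the $WeatherHash and $MAJCSEHash lookup tables above are populated, and that WEATHER, CITYNAME, and MAJCSE are the relevant column names in the CSV (CITYNAME in particular is an assumption, not confirmed by the original text):

```powershell
# Find the most common weather condition across all crashes
$TopWeather = $CrashData | Group-Object -Property WEATHER |
    Sort-Object -Property Count -Descending | Select-Object -First 1

# Among crashes with that weather code, find the city reporting it most
$TopCity = $CrashData | Where-Object {$_.WEATHER -eq $TopWeather.Name} |
    Group-Object -Property CITYNAME | Sort-Object -Property Count -Descending |
    Select-Object -First 1

# Among that city's crashes, find the most reported major cause
$TopCause = $TopCity.Group | Group-Object -Property MAJCSE |
    Sort-Object -Property Count -Descending | Select-Object -First 1

# Translate the codes back to human-readable values
'{0} / {1} / {2}' -f $WeatherHash[[int]$TopWeather.Name],
    $TopCity.Name, $MAJCSEHash[[int]$TopCause.Name]
```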

So based on our query, we can determine that the top weather condition is clear conditions, the city that reported the most clear conditions at the time of a wreck is Des Moines, and the number one cause of those crashes was following too close to the car in front. Now you can take this data (or other data that is available) and conduct your own data analysis using PowerShell.

Credit :

How to develop an effective capacity planning process

Attempting to get a grip on matching technology infrastructure with demand? Here are the nine major steps involved in implementing a sound capacity planning process.


  • 1. Select an appropriate capacity planning process owner.
  • 2. Identify the key resources to be measured.
  • 3. Measure the utilizations or performance of the resources.
  • 4. Compare utilizations to maximum capacities.
  • 5. Collect workload forecasts from developers and users.
  • 6. Transform workload forecasts into IT resource requirements.
  • 7. Map requirements onto present utilizations.
  • 8. Predict when the shop will be out of capacity.
  • 9. Update forecasts and utilizations.

Step 1: Select an Appropriate Capacity Planning Process Owner

The first step in creating a robust capacity planning process is to choose an appropriately qualified person to serve as the process owner. This person is responsible for designing, implementing, and maintaining the process, and is empowered to negotiate and delegate with developers and other support groups.

First of all, this person should be able to communicate effectively with developers, because much of the credibility and success of a capacity plan depends on accurate input and constructive feedback from developers to infrastructure planners. This individual must also be knowledgeable about network and systems software and components, as well as hardware and software configurations.

Several other medium- and lower-priority characteristics are suggested for selecting the capacity planning process owner (see the table below). These characteristics and their priorities will vary from shop to shop, depending on the kinds of applications and services provided.

Capacity planning characteristics

1. Ability to work effectively with developers High
2. Knowledge of systems software and components High
3. Knowledge of network software and components High
4. Ability to think and plan strategically High
5. Knowledge of software configurations Medium
6. Knowledge of hardware configurations Medium
7. Ability to meet effectively with customers Medium
8. Knowledge of applications Medium
9. Ability to talk effectively with IT executives Medium
10. Ability to promote teamwork and cooperation Medium
11. Knowledge of database systems Low
12. Ability to analyze metrics and trending reports Low
13. Knowledge of power and air conditioning systems Low
14. Knowledge of desktop hardware and software Low