Wednesday, November 28, 2012

DATEFROMPARTS function in SQL Server 2012


One of the new date functions in SQL Server 2012 is the DATEFROMPARTS function. It returns a date value built from the year, month and day parameters you pass to it.

DATEFROMPARTS ( year, month, day )

Here is an example:

DECLARE @years TABLE ([Year] INT);

INSERT @years([Year])
SELECT 2010 AS [Year]
UNION ALL 
SELECT 2011 
UNION ALL 
SELECT 2012;

SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Year],1,1)
,      [LastDayOfYear]  = DATEFROMPARTS(y.[Year],12,31)
  FROM @years y;

SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Year] * 10000) + 101)))
,      [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Year] * 10000) + 1231)))
  FROM @years y;

In this example you can see how much easier it is to get the first day and the last day of the year with the DATEFROMPARTS function than with the CONVERT workaround shown in the second query.
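The related DATETIMEFROMPARTS function works the same way but builds a full datetime value down to the millisecond:

DATETIMEFROMPARTS ( year, month, day, hour, minute, seconds, milliseconds )

SELECT DATETIMEFROMPARTS(2012, 11, 28, 13, 30, 0, 0) AS [SampleDateTime];
-- returns 2012-11-28 13:30:00.000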

Thursday, November 15, 2012

My new index scripts for SQL Server 2008

Please feel free to try out my new scripts for indexes in SQL Server 2008 and above. These scripts won't work in a SQL Server 2005 environment.


http://www.4shared.com/document/SRceeYh1/sp_SQL2008_helpindex.html?

This procedure will show you everything you want to know about the indexes in your database.

http://www.4shared.com/document/9QiLKdan/sp_SQL2008_finddupes.html?

This procedure will show you the duplicates within your database.

Both can be used with a parameter to define the table or view you want to use.

Example:

exec sp_sql2008_helpindex

exec sp_sql2008_helpindex 'tablename'


exec sp_sql2008_finddupes

exec sp_sql2008_finddupes 'tablename'



Please leave some comments if you find a bug in one of these two procedures.

Friday, June 29, 2012

Stored procedure Usage


Here is how you might want to monitor the execution of stored procedures. I am using Dynamic Management Views (DMVs) to accomplish the monitoring of stored procedure executions.

This is my script to identify the most frequently run stored procedures on an instance of SQL Server, as well as the stored procedures that use the most CPU or I/O, or that run the longest.

SELECT @@SERVERNAME                                            AS [Instance_Name]          -- col. 1
     , DB_NAME(f1.dbid)                                        AS [DB_Name]                -- col. 2
     , OBJECT_SCHEMA_NAME(f1.objectid, f1.dbid)                AS [Schema_Name]            -- col. 3
     , OBJECT_NAME(f1.objectid, f1.dbid)                       AS [Stored_Procedure]       -- col. 4
     , MAX(v1.usecounts)                                       AS [Execution_count]        -- col. 5
     , SUM(v2.total_worker_time)                               AS [Total_cpu_Time_ms]      -- col. 6
     , SUM(v2.total_worker_time) / 1000                        AS [Total_cpu_Time_sec]     -- col. 7
     , SUM(v2.total_worker_time) / 1000 / 60                   AS [Total_cpu_Time_min]     -- col. 8
     , SUM(v2.total_worker_time) / (MAX(v1.usecounts) * 1.0)   AS [AVG_cpu_Time]           -- col. 9
     , SUM(v2.total_physical_reads + v2.total_logical_reads + v2.total_logical_writes)
                                                               AS [Total_IO]               -- col. 10
     , SUM(v2.total_physical_reads + v2.total_logical_reads + v2.total_logical_writes)
       / (MAX(v1.usecounts))                                   AS [AVG_Total_IO]           -- col. 11
     , SUM(v2.total_physical_reads)                            AS [Total_Physical_Reads]   -- col. 12
     , SUM(v2.total_physical_reads) / (MAX(v1.usecounts) * 1.0)
                                                               AS [AVG_Physical_Read]      -- col. 13
     , SUM(v2.total_logical_reads)                             AS [Total_Logical_Reads]    -- col. 14
     , SUM(v2.total_logical_reads) / (MAX(v1.usecounts) * 1.0) AS [AVG_Logical_Read]       -- col. 15
     , SUM(v2.total_logical_writes)                            AS [Total_Logical_Writes]   -- col. 16
     , SUM(v2.total_logical_writes) / (MAX(v1.usecounts) * 1.0)
                                                               AS [AVG_logical_writes]     -- col. 17
     , SUM(v2.total_elapsed_time)                              AS [Total_Elapsed_Time_ms]  -- col. 18
     , SUM(v2.total_elapsed_time) / 1000                       AS [Total_Elapsed_Time_sec] -- col. 19
     , SUM(v2.total_elapsed_time) / 1000 / 60                  AS [Total_Elapsed_Time_min] -- col. 20
     , SUM(v2.total_elapsed_time) / MAX(v1.usecounts)          AS [AVG_Elapsed_Time_ms]    -- col. 21
     , SUM(v2.total_elapsed_time) / MAX(v1.usecounts) / 1000   AS [AVG_Elapsed_Time_s]     -- col. 22
     , SUM(v2.total_elapsed_time) / MAX(v1.usecounts) / 1000 / 60
                                                               AS [AVG_Elapsed_Time_min]   -- col. 23
  FROM sys.dm_exec_cached_plans v1
 INNER JOIN sys.dm_exec_query_stats v2 ON v1.plan_handle = v2.plan_handle
 CROSS APPLY sys.dm_exec_sql_text(v1.plan_handle) f1
 WHERE DB_NAME(f1.dbid) IS NOT NULL
   AND v1.objtype = 'proc'
   AND DB_NAME(f1.dbid) NOT IN ('master','model','msdb')
 GROUP BY v1.plan_handle
        , DB_NAME(f1.dbid)
        , OBJECT_SCHEMA_NAME(f1.objectid, f1.dbid)
        , OBJECT_NAME(f1.objectid, f1.dbid)
 ORDER BY 21 DESC


By changing the ORDER BY clause I can order the results in different ways, depending on what you want to see.

Ordering by column 5 shows which stored procedure is executed the most.

Ordering by column 6 shows which stored procedure uses the most total CPU time.

Ordering by column 21 shows which stored procedure has the highest average elapsed time.
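For example, to rank the procedures by execution count instead of by average elapsed time, only the last line of the query above changes:

ORDER BY 5 DESC -- col. 5 = Execution_count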


Tuesday, June 26, 2012

SQL Server 2005 Performance Dashboard in SQL Server 2008


Many of you DBAs out there will have heard about the SQL Server 2005 Performance Dashboard Reports and used them before. I find the reports highly useful and time saving; they allow me to get a quick picture of how my SQL Server is performing.

For those who don't know about it, you can download it from here and add it as a custom report to Management Studio (SSMS). It is a free download! Read on below to find out how to install it and use it on your SQL Server. Custom modified files are also included that allow you to use this on a SQL Server 2008 instance. This is provided "as is" with no guarantees.

Introduction:


The reports are mainly intended to quickly identify performance problems and help to resolve them. These reports use the DMVs that came out with SQL Server 2005, so they provide fast and reliable information when troubleshooting a performance problem. Some of the common problems that can be detected using the Dashboard Reports are:


  1. CPU bottlenecks (which queries are consuming the most CPU and the plan for this query)
  2. I/O bottlenecks (which queries are performing the most I/O and the plan for this query)
  3. Index recommendations generated by the query optimizer (missing index recommendations)
  4. Blocking
  5. Latch contention and other wait types
Pre-requisite:

  1. You need to have Management Studio installed on the machine where you intend to use the Performance Dashboard Reports (could be either client or server machine).
  2. Your SQL Server 2005 needs to have at a minimum SP2 applied (build 9.0.3042 or greater)
Installation:

  1. You can install the Performance Dashboard Reports from here.
  2. Once you install the above, go to %ProgramFiles%\Microsoft SQL Server\90\Tools\PerformanceDashboard and run the setup.sql script against the SQL instance you want to monitor.

Thursday, April 19, 2012

Use this script to generate a SQL Server Job Report


Have you ever wanted to create a report listing all of the SQL Server jobs on your SQL Server 2005 or higher servers? This is easy if you use the script I have created just for this purpose. The script creates this report by joining the sysjobs, sysschedules and sysjobschedules tables into a single result set.
To make the report easy to read, I have substituted the various integer values found in these tables with readable values. The first part of the script generates the list of jobs and the days to run for weekly scheduled jobs. The second part uses CASE statements to incorporate all the possible combinations listed under the sysschedules table.
Making the Data Readable
The following are the various numeric entries and their string values, used in the script, to make it more readable.

freq_type int Frequency of the schedule execution:
1 = Once
4 = Daily
8 = Weekly
16 = Monthly
32 = Monthly relative
64 = Execute when SQL Server Agent starts

freq_interval int Value indicating on which days the schedule runs.

If freq_type is 4 (daily), the value is every freq_interval days.

If freq_type is 8 (weekly), the value is a bitmask indicating the days on which weekly schedules run. The freq_interval values are:
1 = Sunday
2 = Monday
4 = Tuesday
8 = Wednesday
16 = Thursday
32 = Friday
64 = Saturday
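For example, a weekly freq_interval of 42 (2 + 8 + 32) means the job runs on Monday, Wednesday and Friday. A minimal sketch that decodes such a bitmask with bitwise AND:

-- Decode a weekly freq_interval bitmask; here 42 = 2 + 8 + 32
SELECT CASE WHEN 42 & 1  > 0 THEN 'Sunday '    ELSE '' END
     + CASE WHEN 42 & 2  > 0 THEN 'Monday '    ELSE '' END
     + CASE WHEN 42 & 4  > 0 THEN 'Tuesday '   ELSE '' END
     + CASE WHEN 42 & 8  > 0 THEN 'Wednesday ' ELSE '' END
     + CASE WHEN 42 & 16 > 0 THEN 'Thursday '  ELSE '' END
     + CASE WHEN 42 & 32 > 0 THEN 'Friday '    ELSE '' END
     + CASE WHEN 42 & 64 > 0 THEN 'Saturday'   ELSE '' END AS [Runs On]
-- returns: Monday Wednesday Friday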

If freq_type is 16 (monthly), the value is the freq_interval-th day of the month.

If freq_type is 32 (monthly relative), freq_interval can be one of these values:
1 = Sunday
2 = Monday
3 = Tuesday
4 = Wednesday
5 = Thursday
6 = Friday
7 = Saturday
8 = Day
9 = Weekday
10 = Weekend day

freq_subday_type int Units for the freq_subday_interval:
1 = At the specified time
2 = Seconds
4 = Minutes
8 = Hours

freq_subday_interval int Number of freq_subday_type periods to occur between each scheduled execution of the job.

freq_relative_interval int Scheduled job’s occurrence of the freq_interval in each month when freq_type is 
32 (monthly relative):
1 = First
2 = Second
4 = Third
8 = Fourth
16 = Last
========================================================================
DECLARE @x             int
      , @y             int
      , @z             int
      , @counter       smallint
      , @days          varchar(100)
      , @day           varchar(10)
      , @Job_Name      sysname
      , @freq_interval int
      , @Job_ID        varchar(50)

SET NOCOUNT ON

CREATE TABLE #temp (Job_ID   varchar(50)
                   ,Job_Name sysname
                   ,Jdays    varchar(100))

-- This cursor runs through all the jobs that have a weekly frequency running on different days

DECLARE Job_Cursor CURSOR
    FOR
      SELECT t1.job_id        AS [Job_id]
           , t2.name          AS [Name]
           , t2.freq_interval AS [freq_interval]
        FROM msdb.dbo.sysjobschedules t1
        JOIN msdb..sysschedules t2 ON t1.schedule_id = t2.schedule_id
       WHERE freq_type = 8

OPEN Job_Cursor
FETCH NEXT FROM Job_Cursor
 INTO @Job_ID
    , @Job_Name
    , @freq_interval

WHILE @@fetch_status = 0
BEGIN
  SELECT @counter = 0
       , @x       = 64
       , @y       = @freq_interval
       , @z       = @y
       , @days    = ''
       , @day     = ''

  WHILE @y <> 0
  BEGIN
    SELECT @y = @y - @x
         , @counter = @counter + 1
    IF @y < 0 
    BEGIN
      SET @y = @z
      GOTO start
    END
    SELECT @day = CASE @x
                    WHEN 1
                      THEN 'Sunday'
                    WHEN 2
                      THEN 'Monday'
                    WHEN 4
                      THEN 'Tuesday'
                    WHEN 8
                      THEN 'Wednesday'
                    WHEN 16
                      THEN 'Thursday'
                    WHEN 32
                      THEN 'Friday'
                    WHEN 64
                      THEN 'Saturday'
                  END
    SELECT @days = @day
                 + ','
                 + @days
    start:
    SELECT @x = CASE @counter
                  WHEN 1
                    THEN 32
                  WHEN 2
                    THEN 16
                  WHEN 3
                    THEN 8
                  WHEN 4
                    THEN 4
                  WHEN 5
                    THEN 2
                  WHEN 6
                    THEN 1
                END

    SET @z = @y
    IF @y = 0
      BREAK
  END

  INSERT INTO #temp
       SELECT @Job_ID
            , @Job_Name
            , left(@days, len(@days)-1)
  FETCH NEXT FROM Job_Cursor
   INTO @Job_ID
      , @Job_Name
      , @freq_interval
END
CLOSE Job_Cursor
DEALLOCATE Job_Cursor

-- Final query to extract complete information by joining sysjobs, sysschedules and the #temp table

SELECT @@servername    AS [Instance]
     , t3.name         AS [Job_Name]
     , CASE t3.enabled 
         WHEN 1
           THEN 'Enabled'
           ELSE 'Disabled'
       END             AS [JobEnabled]
     , t2.name         AS [Schedule_Name]
     , CASE t2.enabled 
         WHEN 1
           THEN 'Enabled'
           ELSE 'Disabled'
       END             AS [ScheduleEnabled]
     , CASE freq_type 
         WHEN 1
           THEN 'Once'
         WHEN 4
           THEN 'Daily'
         WHEN 8
           THEN 'Weekly'
         WHEN 16
           THEN 'Monthly'
              + cast(freq_interval AS char(2))
              + 'th Day'
         WHEN 32
          THEN 'Monthly Relative'
         WHEN 64
           THEN 'Execute when SQL Server Agent Starts'
       END             AS [Job Frequency]
     , CASE freq_type 
         WHEN 32
           THEN CASE freq_relative_interval
                  WHEN 1
                    THEN 'First'
                  WHEN 2
                    THEN 'Second'
                  WHEN 4
                    THEN 'Third'
                  WHEN 8
                    THEN 'Fourth'
                  WHEN 16
                    THEN 'Last'
                END
           ELSE ''
       END             AS [Monthly Frequency]
     , CASE freq_type
         WHEN 16
           THEN cast(freq_interval AS char(2))
              + 'th Day of Month'
         WHEN 32
           THEN CASE freq_interval 
                  WHEN 1
                    THEN 'Sunday'
                  WHEN 2
                    THEN 'Monday'
                  WHEN 3
                    THEN 'Tuesday'
                  WHEN 4
                    THEN 'Wednesday'
                  WHEN 5
                    THEN 'Thursday'
                  WHEN 6
                    THEN 'Friday'
                  WHEN 7
                    THEN 'Saturday'
                  WHEN 8
                    THEN 'Day'
                  WHEN 9
                    THEN 'Weekday'
                  WHEN 10
                    THEN 'Weekend day'
                END
         WHEN 8
           THEN t4.Jdays
           ELSE ''
       END             AS [Runs On]
     , CASE freq_subday_type
         WHEN 1
           THEN 'At the specified Time'
         WHEN 2
           THEN 'Seconds'
         WHEN 4
           THEN 'Minutes'
         WHEN 8
           THEN 'Hours'
       END             AS [Interval Type]
     , CASE freq_subday_type 
         WHEN 1
           THEN 0
           ELSE freq_subday_interval 
       END             AS [Time Interval]
     , CASE freq_type 
         WHEN 8
           THEN cast(freq_recurrence_factor AS char(2))
              + ' Week'
         WHEN 16
           THEN cast(freq_recurrence_factor AS char(2))
              + ' Month'
         WHEN 32
           THEN cast(freq_recurrence_factor AS char(2))
              + ' Month'
           ELSE ''
       END             AS [Occurs Every]
     , left(active_start_date,4)
     + '-'
     + substring(cast(active_start_date AS char),5,2) 
     + '-'
     + right(active_start_date,2) [Begin Date-Executing Job]
     , left(REPLICATE('0', 6-len(active_start_time))
     + cast(active_start_time AS char(6)),2)
     + ':'
     + substring(REPLICATE('0', 6-len(active_start_time))
     + cast(active_start_time AS char(6)),3,2)
     + ':'
     + substring(REPLICATE('0', 6-len(active_start_time))
     + cast(active_start_time AS char(6)),5,2)
                       AS [Executing At]
     , left(active_end_date,4)
     + '-'
     + substring(cast(active_end_date AS char),5,2) 
     + '-'
     + right(active_end_date,2)
                       AS [End Date-Executing Job]
     , left(REPLICATE('0', 6-len(active_end_time))
     + cast(active_end_time AS char(6)),2)
     + ':'
     + substring(REPLICATE('0', 6-len(active_end_time))
     + cast(active_end_time AS char(6)),3,2)
     + ':'
     + substring(REPLICATE('0', 6-len(active_end_time))
     + cast(active_end_time AS char(6)),5,2)
                       AS [End Time-Executing Job]
     , t3.date_created  AS [Job Created]
     , t2.date_created AS [Schedule Created] 
  FROM msdb..sysjobschedules t1 
  JOIN msdb..sysschedules t2       ON t1.schedule_id = t2.schedule_id
 RIGHT OUTER JOIN msdb..sysjobs t3 ON t1.job_id = t3.job_id
  LEFT OUTER JOIN #temp t4         ON t2.name = t4.Job_Name
                                  AND t1.job_id = t4.Job_ID
 ORDER BY 1

DROP TABLE #temp

========================================================================

Please let me know if you have any issues with the script.


Wednesday, March 7, 2012

Design a Disaster Recovery Plan for SQL Server


In today's world every single piece of data is very important. Data keeps growing day by day, and managing it is becoming challenging. Thankfully we have machines that can store our data; all we have to do is manage those machines and make sure the data is safe. The question is: how will you make sure that the machine storing your data is going to be safe? If a disaster happens tomorrow and the machine is destroyed, what about the data that was stored on it for so long; is it gone?
We don't want to lose our data, and that is where the disaster recovery plan comes into the picture: you can restore your data even if your current database machine is dead. Busy database administrators can easily push this task onto the back burner with promises to themselves that they'll "take care of backups right after I finish this big project." However, take a moment and consider the value of your data to your organization. What would be the result if an hour's worth of database changes were lost? A day's worth? What about a complete loss of the database?
In this blog I am going to explain what options there are for you to plan your disaster recovery if your data storage platform is Microsoft SQL Server.

Introduction

Disaster Recovery:

A disaster recovery plan is your business continuity plan. Disaster recovery is a process that you can use to help recover information systems and data if a disaster occurs.

Suppose you have a database system storing financial application data: an organization's whole business runs on some applications, and the data is stored in a database. Let's say your database machine is placed in the same organization. Now if the database machine crashes tomorrow, all your data is gone. What about that organization's business records; will the organization be able to continue its business?

If you had a solid disaster recovery plan for your organization's data, the business would not stop and could continue; for the organization it would only be a matter of one system crash. So in disaster recovery what you do is configure some backup mechanism on another machine to keep the same information that you have on your current database machine. If your current database machine dies, you can then recover all your data from the backup machine.

The first step in developing a solid disaster recovery plan is to develop an idea of what constitutes an acceptable loss for your organization. First, consider the impact of losing data stored in your database. Would you be able to recover from the loss of an hour's worth of data? If you're managing a human resources database, chances are that you could deal with this situation by instructing your personnel to reenter data entered during that period. If you're running the database supporting a financial institution, the loss of an hour's data could bring the instant wrath of clients and industry regulators along with significant monetary losses. Protection against these types of losses is provided by backups of the database and use of transaction logging. We'll discuss these in further detail.

Disaster recovery planning is the work that is devoted to preparing all the actions that must occur in response to a disaster. The planning includes the selection of a strategy to help recover valuable data. The selection of the appropriate disaster recovery strategy depends on your business requirements.

Disaster recovery in SQL Server

The Microsoft SQL Server database system provides multiple ways to configure disaster recovery:

Timely database backup strategies
Database replication
Log shipping
Database mirroring

Timely Database Backup Strategies

If in your disaster recovery plan you are planning for timely database backups, then all you have to do is:

Take backups of all databases on the database server (you can schedule a job in SQL Server to take backups of all databases); a sketch follows this list
Copy all backup files to your file server or some other directory
Schedule a daily/weekly backup plan in SQL Server
Write your daily/weekly backups to disk and store the disk in a safe place
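A minimal sketch of such a backup step, assuming a hypothetical database name and backup share (adjust both to your environment):

BACKUP DATABASE SalesDB
TO DISK = N'\\fileserver\SQLBackup\SalesDB_Full.bak'
WITH INIT, CHECKSUM;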

Database Replication

SQL Server replication, a feature included in Microsoft SQL Server, is used to move data from server to server in a transactionally consistent state from one instance to another.

A replication topology consists of three servers: the subscriber, the publisher and the distributor. The subscribers are the servers receiving the data. The publisher is the server that holds the initial set of data being made available to the subscribers. The distributor is the server that contains many of the replication settings; it also holds the data as it is moved from the publisher to the subscribers.

Snapshot Replication

Snapshot replication is the easiest replication type to understand because it is conceptually similar to a full backup and restore. With snapshot replication the server copies the entire set of data to the subscriber at a scheduled time, rewriting the data at the subscriber with each copy operation. However, snapshot replication does not operate on the complete database as backup and restore does; it copies only the specified articles from the publisher to the subscribers. Keep in mind that because snapshot replication copies the entire set of data every time it runs, you should use this replication type only when the amount of data is small and rather static.

Transactional Replication

Peer-to-peer transactional replication is designed for applications that might read or modify the data in any database that participates in replication. Additionally, if any of the servers that host the databases become unavailable, you can modify the application to route traffic to the remaining servers, which contain identical copies of the data.

Log shipping

Log shipping is the process of automating the backup of database and transaction log files on a production SQL Server, and then restoring them onto a standby server. But that is not all: the key feature of log shipping is that it will automatically back up transaction logs throughout the day (at whatever interval you specify) and automatically restore them on the standby server. This in effect keeps the two SQL Servers in sync. Should the production server fail, all you have to do is point the users to the new server. Well, it's not really that easy, but it comes close if you put enough effort into your log shipping setup.

Log shipping involves one primary server, one or more secondary servers, and optionally a monitor server. First a transaction log backup of the database on the primary server is taken, then the backup files are moved to the secondary servers and the transaction log backups are applied to each of the secondary databases individually. An optional third server instance, known as the monitor server, records the history and status of backup and restore operations and, optionally, raises alerts if these operations fail to occur as scheduled.

Log shipping consists of three operations:

Back up the transaction log at the primary server instance.
Copy the transaction log file to the secondary server instance.
Restore the log backup on the secondary server instance.

The primary server instance runs the backup job to back up the transaction log on the primary database. This server instance then places the log backup into a primary log-backup file, which it sends to the backup folder, typically a shared directory: the backup share.

Each secondary server instance runs its own copy job to copy the primary log-backup file to its own local destination folder.

Each secondary server instance runs its own restore job to restore the log backup from the local destination folder onto the local secondary database.
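The backup and restore halves of that cycle boil down to two statements. A minimal sketch, assuming a hypothetical database SalesDB and example paths:

-- On the primary: back up the transaction log to the backup share
BACKUP LOG SalesDB
TO DISK = N'\\backupshare\SalesDB_tlog.trn';

-- On the secondary (after the copy job has moved the file):
-- restore the log backup, leaving the database ready for the next one
RESTORE LOG SalesDB
FROM DISK = N'D:\LogShipping\SalesDB_tlog.trn'
WITH NORECOVERY;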

Database mirroring

The primary goal of the Database Mirroring technology is to increase database availability. When enabled, every update to the principal database is immediately reproduced onto the mirror database.

Database mirroring is primarily a software solution for increasing database availability. You can only implement mirroring on a per-database basis. Mirroring only works with databases that use the full recovery model; the simple and bulk-logged recovery models do not support database mirroring, so all bulk operations are always fully logged. Mirroring also offers failover capabilities, which may be made automatic depending on how you configure it. The mirrored copy is a standby copy that cannot be accessed directly; it is used only for failover situations.

Principal Server: In a database mirroring configuration, there are two copies of a single database, but only one copy is accessible to the clients at any given time. The copy of the database that the applications connect to is called the principal database. The server that hosts the principal database is known as the principal server.

Mirror Server: The mirror is the copy of the principal database. The mirror is always in a restoring state; it is not accessible to the applications. To keep this database up-to-date, the log records are transferred from the principal and applied on the mirror database. The server that hosts the mirror database is known as the mirror server.

Witness Server: The optional witness is a SQL Server instance in a database mirroring configuration. It is separate from the principal and mirror instances. When database mirroring is used in synchronous mode, the witness provides the mechanism for automatic failover.
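Once the mirror database has been restored WITH NORECOVERY and the mirroring endpoints exist on both instances, the partnership itself is just two statements. A minimal sketch with hypothetical server names and the default endpoint port 5022:

-- On the mirror server instance:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.corp.local:5022';

-- Then on the principal server instance:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.corp.local:5022';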

Wednesday, February 22, 2012

Identify database features restricted to a specific edition of SQL Server 2008

How can we identify whether a database uses any of the features that are restricted to a specific edition of SQL Server 2008? In this tip we will go through the steps a DBA needs to follow to identify a database that uses edition-specific features.



When you implement any of the features such as Data Compression, Partitioning, Transparent Data Encryption or Change Data Capture of SQL Server 2008 Enterprise Edition on a database, these features will internally change the way the database engine stores information within the database files. If a database uses any of the above features, which are specifically designed to be used in the Enterprise Edition of SQL Server 2008, it cannot be moved to other editions of SQL Server 2008.

DBAs can identify all edition specific features that are enabled within a user database by using the sys.dm_db_persisted_sku_features dynamic management view. 

As an example, let's try restoring the AdventureWorks database to a SQL Server 2008 Standard Edition instance. Unfortunately, there is a catch: one of the tables, Person.Address, is using page-level compression in SQL Server 2008 Enterprise Edition. When you try to restore the database to a SQL Server 2008 Standard Edition instance, which does not support the Data Compression feature, the restore of the database will fail with the following error message:

RESTORE DATABASE is terminating abnormally.
Msg 909, Level 21, State 1, Line 4
Database 'AdventureWorks' cannot be started in this edition of SQL Server because part or all of object 'Address' is enabled with data compression or vardecimal storage format. Data compression and vardecimal storage format are only supported on SQL Server Enterprise Edition.

Msg 933, Level 21, State 1, Line 4
Database 'AdventureWorks' cannot be started because some of the database functionality is not available in the current edition of SQL Server.


If a DBA is asked to move a database from Enterprise Edition to any other edition of SQL Server 2008, it is a good idea to execute the T-SQL code below to identify whether the current database uses any of the features that are restricted to a specific edition of SQL Server. The sys.dm_db_persisted_sku_features dynamic management view will let you know which edition-specific features are being used by the database. One needs to have VIEW DATABASE STATE permission on the database to view the results.
USE AdventureWorks
GO
SELECT *
FROM sys.dm_db_persisted_sku_features
GO

As you can see in the code snippet above, the dynamic management view lets you know that the AdventureWorks database uses the Compression feature of SQL Server 2008 Enterprise Edition. The sys.dm_db_persisted_sku_features dynamic management view will not return any rows if no edition-restricted features are used by the database. In case you are using Data Compression, Partitioning, Transparent Data Encryption or Change Data Capture of SQL Server 2008 Enterprise Edition, you need to remove these features before you perform the full backup and restore the database.
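For the compression example above, removing the feature is a single rebuild. A minimal sketch for the Person.Address table (any compressed nonclustered indexes would need an ALTER INDEX ... REBUILD as well):

USE AdventureWorks;
GO
-- Remove page compression so the database can be restored on Standard Edition
ALTER TABLE Person.Address REBUILD WITH (DATA_COMPRESSION = NONE);
GO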


Wednesday, February 8, 2012

Delete Orphan users from databases on a server


Have you heard about orphan SQL Server users? If not, an orphan user is a user in a SQL Server database that is not associated with a SQL Server login. Orphan users are created when a database backup from one server is restored on another server. There are a number of articles that have been written, and lots of topics on discussion boards about reconnecting orphan users with logins, but few, if any, regarding removing orphan users. Therefore, this article deals with how to identify which databases have orphan users, and how to remove the identified orphan users.

Just because you are moving a database from one server to another does not mean you also want to move all of the users associated with the database. In some cases, you may not want to move any, and in other cases, you might want to move only a few. For this discussion, let's say you have already identified and reconnected all of the orphan users you plan to keep. For all of the other orphan users you did not reconnect, I will show you how to identify and remove them.

Identifying Orphan Users
SQL Server provides a stored procedure to help you identify orphan users that were originally associated with SQL Server authenticated logins: sp_change_users_login. However, SQL Server does not provide a mechanism to identify orphan users that were originally associated with Windows authenticated users or groups. The key to removing users is being able to identify them. The following code can be run against any database to identify all of the orphan users, regardless of whether they are associated with a SQL Server authenticated user, a Windows authenticated user and/or a Windows group.

select u.name
  from master..syslogins l
 right join sysusers u on l.sid = u.sid
 where l.sid is null
   and issqlrole <> 1
   and isapprole <> 1
   and u.name not in ('INFORMATION_SCHEMA', 'guest', 'system_function_schema', 'sys', 'dbo', 'SA')
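For comparison, the documented way to list only the orphans that came from SQL Server authenticated logins is the Report action of sp_change_users_login:

-- Lists users in the current database that are not linked to any SQL Server login
EXEC sp_change_users_login @Action = 'Report';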

Removing Orphan Users

Once you have identified orphan users it is extremely simple to remove them using my script. Here is an example that removes the database user 'Orphan' from the current database:

exec sp_revokedbaccess 'Orphan'

It seems fairly simple to do this for a few users and databases. However, if you have a large number of orphan users and databases, I'm sure you would not like to do it by hand. At least, I didn't want to do this manually. Because I like to automate repetitive manual tasks, I developed a script to identify and remove orphan users.

This script first determines which databases have orphan users. For each database that has orphans, it removes them one at a time. If an orphan user is the database owner, the sp_changedbowner stored procedure is used to change the owner to SA before the orphan user is removed. The script does not really remove the users or change the database owners; instead it just generates the code to remove the users. This allows you to review the code and decide whether you want to remove all users or only a select set of orphan users.

Conclusion

You can leave the orphan database users in a database if you want, although they are excess baggage that comes along with a database restore. In addition, they pose a small security risk: if some newly defined login is unintentionally associated with an orphan user, the new login gains unauthorized database access. It is best to remove unnecessary orphan users to provide a clean, uncluttered database environment. This script provides an easy method to identify and remove unneeded orphan users. Therefore, this script can be a valuable tool, to be used as part of your database restore process, should you desire to remove orphan users. Download my script here.

set nocount on

-- Section 1: Create temporary table to hold databases to process 

-- drop table if it already exists
if (select object_id('tempdb..##dbnames')) is not null
  drop table ##dbnames

-- Create table to hold databases to process
create table ##dbnames (dbname varchar(128))

-- Section 2: Determine what databases have orphan users
exec master.dbo.sp_MSforeachdb 'insert into ##dbnames select ''?'' from master..syslogins l right join ?..sysusers u
on l.sid = u.sid
where l.sid is null and issqlrole <> 1 and isapprole <> 1 
and u.name not in (''INFORMATION_SCHEMA'', ''guest'', ''system_function_schema'', ''sys'', ''dbo'', ''SA'') 
having count(1) > 0'

-- Section 3: Create local variables needed
declare @CNT    int
      , @name   char(128)
      , @sid    varbinary(85)
      , @cmd    nchar(4000)
      , @c      int
      , @hexnum char(100)
      , @db     varchar(100)

-- Section 4: Process through each database and remove orphan users
select @cnt=count(*) from ##DBNAMES
While @CNT > 0
begin

-- get the name of the top database
  select top 1 @db=dbname from ##DBNAMES

-- delete top database 
  delete from ##DBNAMES where dbname = @db

-- Build and execute command to determine if DBO is not mapped to login
  set @cmd = 'select @cnt = count(*) from master..syslogins l right join ' + 
             rtrim(@db) + '..sysusers u on l.sid = u.sid' + 
             ' where l.sid is null and u.name = ''DBO'''
  exec sp_executesql @cmd,N'@cnt int out',@cnt out

-- if DBO is not mapped to a login that exists, map DBO to SA
  if @cnt = 1
  begin
    print 'exec ' + @db + '..sp_changedbowner ''SA'''  
    -- exec sp_changedbowner 'SA'
  end -- if @cnt = 1


-- drop table if it already exists
if (select object_id('tempdb..##orphans')) is not null
  drop table ##orphans

-- Create table to hold orphan users
create table ##orphans (orphan varchar(128))

-- Build and execute command to get list of all orphan users (Windows and SQL Server)
-- for current database being processed
   set @cmd = 'insert into ##orphans select u.name from master..syslogins l right join ' + 
              rtrim(@db) + '..sysusers u on l.sid = u.sid ' + 
              'where l.sid is null and issqlrole <> 1 and isapprole <> 1 ' +  
              'and u.name not in (''INFORMATION_SCHEMA'', ''guest'', ''system_function_schema'', ''sys'', ''dbo'', ''SA'')'
   exec (@cmd)


-- Are there orphans
  select @cnt = count(1) from ##orphans
  
  WHILE @cnt > 0 
  BEGIN
  
-- get top orphan
  select top 1 @name= orphan from ##orphans

-- delete top orphan
  delete from ##orphans where orphan = @name

-- Build command to drop user from database.
    set @cmd = 'exec ' + rtrim(@db) + '..sp_revokedbaccess ''' + rtrim(@name) + ''''
    print @cmd
    --exec (@cmd)

   
-- are there orphans left
    select @cnt = count(1) from ##orphans  
  end --  WHILE @cnt > 0


-- are there still databases to process
select @cnt=count(*) from ##dbnames

end -- while @cnt > 0

-- Remove temporary tables
drop table ##dbnames, ##orphans

Wednesday, February 1, 2012

Do you use BACKUP LOG WITH TRUNCATE_ONLY on your instance?

In some cases developers use the statement BACKUP LOG <databasename> WITH TRUNCATE_ONLY in their scripts, and such a script then gets scheduled in our production environment. If the database is not set to the simple recovery model and we take log backups in between for our DR, those log backups are useless.

How can we determine whether the log isn't being backed up in a proper manner?

Here is the script I am using:


DECLARE @currentfilename nvarchar(max)
DECLARE @logvalue        nvarchar(1000)
DECLARE @oldfilename     nvarchar(max)
DECLARE @oldlogId        smallint
DECLARE @olderfilename     nvarchar(max)
DECLARE @olderlogId        smallint
DECLARE @endpos          smallint
DECLARE @startpos        smallint
DECLARE @diffpos         smallint

SELECT @currentfilename=CAST(value AS nvarchar(max))
  FROM fn_trace_getinfo(default)
 WHERE property = 2 -- Get the current trace file name

SET @endpos   = CHARINDEX('.trc',@currentfilename)
SET @startpos = CHARINDEX('log_',@currentfilename)
SET @diffpos  = @endpos-@startpos


SET @logvalue=SUBSTRING(SUBSTRING(@currentfilename,@startpos,@diffpos),1,10)
SET @oldlogId=SUBSTRING(SUBSTRING(@currentfilename,@startpos,@diffpos),5,4) - 1
SET @olderlogID=SUBSTRING(SUBSTRING(@currentfilename,@startpos,@diffpos),5,4) - 2

SET @oldfilename = 'log_'
                 + CAST(@oldlogId AS varchar(1000))
SET @olderfilename = 'log_'
                 + CAST(@olderlogId AS varchar(1000))

SET @oldfilename = REPLACE(@currentfilename,@logvalue,@oldfilename)
SET @olderfilename = REPLACE(@currentfilename,@logvalue,@olderfilename)

SELECT *
   FROM fn_trace_gettable(@currentfilename,DEFAULT) AS Trace
 WHERE Trace.ObjectType <> '21587' -- Ignoring the statistics
   AND TextData is not null
   AND (TextData like '%no_log%'
    OR TextData like 'DUMP%'
    OR TextData like '%truncate%')
   AND TextData not like 'SELECT%'

--SELECT *
--  FROM fn_trace_gettable(@oldfilename,DEFAULT) AS Trace
-- WHERE Trace.ObjectType <> '21587' -- Ignoring the statistics
--   AND TextData is not null
--   AND (TextData like '%no_log%'
--    OR TextData like 'DUMP%'
--    OR TextData like '%truncate%')
--   AND TextData not like 'SELECT%'
--
--SELECT *
--  FROM fn_trace_gettable(@olderfilename,DEFAULT) AS Trace
-- WHERE Trace.ObjectType <> '21587' -- Ignoring the statistics
--   AND (TextData like '%no_log%'
--    OR TextData like 'DUMP%'
--    OR TextData like '%truncate%')
--   AND TextData not like 'SELECT%'
--
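If the trace does show a TRUNCATE_ONLY (or DUMP) statement, the log chain is broken and has to be restarted before log backups are of any use again. A minimal sketch, with a hypothetical database name and path:

-- Restart the broken log chain with a full backup,
-- then resume the regular transaction log backups
BACKUP DATABASE SalesDB
TO DISK = N'\\backupshare\SalesDB_Full.bak' WITH INIT;

BACKUP LOG SalesDB
TO DISK = N'\\backupshare\SalesDB_Log.trn';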

Calculate the size of your indexes

I am using this script to see how much space my indexes are using inside my database.


SET NOCOUNT ON

CREATE TABLE #index_size
(Table_view_name varchar(200)
,Index_name varchar(200)
,Index_size_kb decimal(20,6)
,Index_size_Mb decimal(20,6)
,Index_size_Gb decimal(20,6))


DECLARE @object_name NVARCHAR(255)

DECLARE object_cur CURSOR FOR
  SELECT t2.name + '.' + t1.NAME
    FROM SYS.OBJECTS t1
  INNER JOIN dbo.sysusers t2 on t1.schema_id = t2.uid
   WHERE t1.type = 'U' --in ('U','V')
   ORDER BY t2.name, t1.name

OPEN object_cur

FETCH NEXT FROM object_cur INTO @object_name

WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO #index_size (Table_view_name)
SELECT @object_name

PRINT @object_name

INSERT INTO #index_size
SELECT     @object_name
     ,     i.name              AS IndexName
     ,     SUM(page_count * 8) AS IndexSizeKB
     ,     SUM(page_count * 8.00) / 1024 AS IndexSizeMB
     ,     SUM(page_count * 8.00) / 1024 / 1024 AS IndexSizeGB
  FROM sys.dm_db_index_physical_stats(db_id(), object_id(@object_name), NULL, NULL, 'DETAILED') AS s
  JOIN sys.indexes AS i ON s.[object_id] = i.[object_id]
                       AND s.index_id    = i.index_id
 GROUP BY i.name
 ORDER BY i.name

FETCH NEXT FROM object_cur INTO @object_name
END

CLOSE object_cur
DEALLOCATE object_cur

SELECT @@SERVERNAME
     , db_name()
     , Table_view_name
     , CASE
         WHEN index_name IS NULL
           THEN CASE
                  WHEN Index_size_kb IS NOT NULL
                    THEN 'HEAP'
                    ELSE index_name
                END
           ELSE index_name
       END AS index_name
     , Index_size_kb
     , Index_size_Mb
     , Index_size_Gb
  FROM #index_size
 WHERE Index_size_kb IS NOT NULL

drop table #index_size

Top Stored Procedures by Execution Count

In this script I want to show you how you can find the procedures that have been executed the most times on your instance. I always use this script to see if I can make the most executed stored procedures perform better.


SELECT TOP 100
       CASE
         WHEN dbid = 32767
           THEN 'Resource'
           ELSE DB_NAME(dbid)
       END AS [DB_NAME]
     , OBJECT_SCHEMA_NAME(objectid, dbid) AS [SCHEMA_NAME]
     , OBJECT_NAME(objectid, dbid)        AS [OBJECT_NAME]
     , SUM(usecounts)                     AS [Execution Count]
  FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)
 WHERE objtype = 'Proc'
   AND CASE
         WHEN dbid = 32767
           THEN 'Resource'
           ELSE DB_NAME(dbid)
       END NOT IN ('Resource','msdb','master')
   -- collapse repeated spaces so 'CREATE   FUNC' is still caught
   AND UPPER(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
           text, '       ', ' '), '      ', ' '), '     ', ' '),
           '    ', ' '), '   ', ' '), '  ', ' '))
       NOT LIKE '%CREATE FUNC%'
 GROUP BY dbid
        , objectid
        , cacheobjtype
 ORDER BY SUM(usecounts) DESC ;

What should we use: DELETE or TRUNCATE?


There are many stories about the differences between the delete and the truncate statement.

What are the differences between TRUNCATE and DELETE? You may immediately point out that:

TRUNCATE:

Resets the identity value
Can't have a WHERE clause
Can't be run on a table which is referenced by other tables
Does not fire triggers
Can't be run on a table which has dependent objects such as indexed views
Can't be run on a table which is published by merge or transactional replication
Needs db_owner or db_ddladmin permission
Needs locks on the table and schema, but does not need locks on the rows of the table

Internally, after a table is truncated, no data pages are left for the table, whereas a delete over a heap may not release the data pages and a delete on a B-tree may not release the non-leaf pages. The truncate command simply unhooks the data from the metadata of the table. When the table is small, the unhooked data (rows) will be deallocated synchronously. When the table is large, the data will be removed asynchronously, which is called deferred deallocation.

Some myths about the truncate statement:

Truncate cannot be rolled back. That's not true: the TRUNCATE TABLE command is transactional and can be rolled back, as the sketch below shows.
Truncate generates fewer log records. It depends. If the table is small enough, truncating it can generate more log records.

Truncate is always faster. It depends. The TRUNCATE statement needs more operations than DELETE if the table is very small, so theoretically it needs more time to process.
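A minimal sketch demonstrating the rollback, using a throwaway table:

CREATE TABLE dbo.TruncDemo (id int);
INSERT dbo.TruncDemo VALUES (1), (2), (3);

BEGIN TRAN;
TRUNCATE TABLE dbo.TruncDemo;       -- the table is empty inside the transaction
ROLLBACK;

SELECT COUNT(*) FROM dbo.TruncDemo; -- returns 3: the truncate was rolled back
DROP TABLE dbo.TruncDemo;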

Deferred deallocation

While a large table is being truncated, SQL Server merely unhooks the metadata and IAMs and then returns, without resetting the bits in the IAM, PFS, GAM and SGAM pages. An internal process will set those bits later and then change the allocation-related tables.


Tuesday, January 31, 2012

SQL Server Backup, Integrity Check, and Index and Statistics Maintenance


Why create something new when there is something very good and free on the market already!

The SQL Server Maintenance Solution comprises scripts for running backups, integrity checks, and index and statistics maintenance on all editions of Microsoft SQL Server 2005, SQL Server 2008, SQL Server 2008 R2 and SQL Server 2012. The Solution is based on stored procedures, functions, the sqlcmd utility, and SQL Server Agent jobs. I designed the Solution for the most mission-critical enterprise environments, and it is used in many organizations around the world. Numerous SQL Server community experts recommend the Solution, which has been a Gold and Silver winner in the 2011 and 2010 SQL Server Magazine Awards. The Solution is free.


I would recommend this script to anyone. Take a look at his website.

Monday, January 30, 2012

Filtered indexes


A filtered index is an optimized nonclustered index, especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce index maintenance costs, and reduce index storage costs compared with full-table indexes.

Filtered indexes can provide the following advantages over full-table indexes:

Improved query performance and plan quality

A well-designed filtered index improves query performance and execution plan quality because it is smaller than a full-table nonclustered index and has filtered statistics. The filtered statistics are more accurate than full-table statistics because they cover only the rows in the filtered index.

Reduced index maintenance costs

An index is maintained only when data manipulation language (DML) statements affect the data in the index. A filtered index reduces index maintenance costs compared with a full-table nonclustered index because it is smaller and is only maintained when the data in the index is affected. It is possible to have a large number of filtered indexes, especially when they contain data that is affected infrequently. Similarly, if a filtered index contains only the frequently affected data, the smaller size of the index reduces the cost of updating the statistics.

Reduced index storage costs

Creating a filtered index can reduce disk storage for nonclustered indexes when a full-table index is not necessary. You can replace a full-table nonclustered index with multiple filtered indexes without significantly increasing the storage requirements.


Here is a small example of using Filtered Indexes from BOL.

use AdventureWorks2008
go

select *
  from Sales.SalesOrderHeader
Go

-- create a duplicate table
select *
  into Sales.SOHdup 
  from Sales.SalesOrderHeader
Go

-- check the table
select *
  from Sales.sohdup


--check if there are any indexes
exec sp_helpindex 'Sales.SOHdup'
Go

-- create a non clustered index

create nonclustered index idx_NC_SOID on Sales.SOHdup(SalesOrderID)


exec sp_helpindex 'Sales.SOHdup'
GO

-- using DMV
SELECT *
  FROM sys.dm_db_index_physical_stats ( DB_ID(N'AdventureWorks2008')
                                      , OBJECT_ID (N'Sales.SOHDup')
                                      , NULL
                                      , NULL
                                      , 'detailed');
GO


sp_spaceused 'Sales.SOHdup'
GO
SELECT *
  FROM sys.indexes
 WHERE object_id = object_id('Sales.SOHDup')

-- EP means Execution Plan

--select all records and see the EP
SELECT salesorderID
  FROM Sales.sohdup

-- now we want to create a filtered index for the following query
SELECT salesorderID
  FROM Sales.sohdup 
 WHERE salesorderID >= 58659

--first drop the existing index
DROP INDEX idx_NC_SOID ON Sales.SOHdup

-- create a filtered index
CREATE NONCLUSTERED INDEX idx_NC_SOID ON Sales.SOHdup(SalesOrderID)
 WHERE salesorderID >= 58659
Go

-- now see the usage on index in the EP
SELECT salesorderID
  FROM Sales.sohdup 
 WHERE salesorderID > 58659


-- now observe the size and stats for the filtered index

SELECT i.[object_id]
     , i.name
     , i.index_id
     , p.partition_number
     , p.rows             as [#Records]
     , a.total_pages * 8  as [Reserved(kb)]
     , a.used_pages * 8   as [Used(kb)]
  FROM sys.indexes as i
 INNER JOIN sys.partitions as p ON i.object_id = p.object_id
                               AND i.index_id = p.index_id
 INNER JOIN sys.allocation_units as a ON p.partition_id = a.container_id
 WHERE i.[object_id] = object_id('Sales.SOHDUP')
-- AND i.index_id = 1 -- clustered index
 ORDER BY p.partition_number
go


Try this on large tables in your environment and see how it can improve your performance.

Friday, January 27, 2012

Long running queries


One of the big challenges for a DBA is to maintain the application's performance.

Data growing on a regular basis is often the cause of performance degradation. In most cases end users report slowness in the application; only then do DBAs/developers jump in and begin the optimization exercise.

Ideally, the DBAs/developers supporting the application should be the first to identify performance problems and should proactively optimize the faulty/poor code.

To achieve this there should be a way to identify the queries taking a long time. SQL Server has been evolving greatly: SQL Server 2005 and above ship with very helpful DMVs (Dynamic Management Views). These views expose performance-related statistics.

Here is a query using a couple of such DMVs.
This query returns the top 10 (configurable) slow-performing queries.

-- Execute the query inside target database

SELECT TOP 10
       qs.total_elapsed_time / qs.execution_count / 1000000.0 AS average_seconds
     , qs.total_elapsed_time / 1000000.0 AS total_seconds
     , qs.execution_count
     , SUBSTRING(qt.text, qs.statement_start_offset / 2,
         (CASE
            WHEN qs.statement_end_offset = -1
              THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) AS individual_query
     , o.name AS object_name
     , DB_NAME(qt.dbid) AS database_name
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
  LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
 WHERE qt.dbid = DB_ID()
 ORDER BY average_seconds DESC;


Thursday, January 19, 2012

Password Audit for SQL Server Logins


This time I am going to talk about password auditing for SQL Server. Weak or blank passwords are an invitation to trouble; they result in a security risk. Everybody knows the importance of strong, complex passwords, but as they say, it's always easy to preach but difficult to practice: we tend to use weak and sometimes blank passwords.

How can we avoid it?

Regular password audits are the solution. As SQL DBAs we must do a regular audit and remind users to change their passwords to something more complex. For password auditing, you need to understand the built-in function PWDCOMPARE.

This function hashes a password and compares the hash to the hash of an existing password. You must be wondering why we are hashing again: SQL Server stores the hash of the password for SQL logins, and hashing is one-way only; un-hashing or decrypting the hashed data is not possible. This function is available from SQL Server 7.0 to SQL Server 2012, which means you can make use of it in all SQL Server versions.

The function takes three parameters, clear_text_password, password_hash and version, where the first two are mandatory and the third is optional and obsolete, kept only for backward compatibility. It returns 1 if the hash of the clear_text_password matches the password_hash parameter, and 0 if it does not.

Note: PWDCOMPARE function is not a threat against the strength of password hashes because the same test could be performed by trying to log in using the password provided as the first parameter.

Permissions required: PWDCOMPARE is available to public, and CONTROL SERVER permission is required to examine the password_hash column of sys.sql_logins.

1. Identifying logins that have no passwords.

The following example identifies SQL Server logins that have no passwords.

SELECT [name] FROM sys.sql_logins WHERE PWDCOMPARE('', password_hash) = 1;

2. Searching for login names which have the same value for the password.

This code identifies logins where the user has used the same text for login and password; for example, login ButerJ with password ButerJ.

SELECT [name] FROM sys.sql_logins WHERE PWDCOMPARE([name], password_hash) = 1;

3. Searching for common passwords

To search for common passwords that you want to identify and change, specify the password as the first parameter. The following checks for a password of 'P@ssw0rd'.

SELECT [name] FROM sys.sql_logins WHERE PWDCOMPARE('P@ssw0rd', password_hash) = 1;

Wednesday, January 18, 2012

The usage of ISNULL and NULLIF

Explanation of the NULLIF:

Syntax:  NULLIF ( expression1, expression2)


NULLIF returns a null value if the two specified expressions are equal; otherwise it returns the first expression. If the expressions are equal, NULLIF returns a null value of the type of the first expression. NULLIF is equivalent to a searched CASE expression in which the two expressions are compared for equality and the result is NULL.

Following is good example of NULLIF and CASE from BOL:

USE AdventureWorks;
GO
SELECT ProductID
     , MakeFlag
     , FinishedGoodsFlag
     , NULLIF(MakeFlag, FinishedGoodsFlag) AS 'Null if Equal'
  FROM Production.Product
 WHERE ProductID < 10;
GO

SELECT ProductID
     , MakeFlag
     , FinishedGoodsFlag
     , 'Null if Equal' = CASE
                           WHEN MakeFlag = FinishedGoodsFlag
                             THEN NULL
                           ELSE MakeFlag
                         END
  FROM Production.Product
 WHERE ProductID < 10;
GO

Explanation of the ISNULL:


Syntax: ISNULL( check_expression, replace_expression )

Replaces NULL with the specified replacement value. The value of check_expression is returned if it is not NULL; otherwise, replace_expression is returned after it is implicitly converted to the type of check_expression, if the types are different.

Following is good example of ISNULL from BOL:

USE AdventureWorks;
GO
SELECT AVG(ISNULL(Weight, 50))
  FROM Production.Product;
GO

An interesting observation is that NULLIF returns NULL if its comparison is successful, whereas ISNULL returns a non-NULL value if its comparison (the NULL check) is successful. In a way they are opposites of each other.
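A quick illustration of the two side by side:

SELECT NULLIF(10, 10)   AS [NULLIF_equal]   -- NULL: the two expressions match
     , ISNULL(NULL, 10) AS [ISNULL_null];   -- 10: the NULL is replaced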