
Monday, December 13, 2010

datetime conversions

Today I received a question about datetime conversions; because Twitter only allows 160 characters in a reply, I decided to put it here.


The question was quite simple: what is the best way in SQL Server to get only the date part from a datetime variable? I looked into it, and this is what it can be:



declare @dt1 datetime
      , @dt2 datetime
      , @dt3 date  -- the date type requires SQL Server 2008 or higher

set @dt1 = GETDATE()

-- round-trip through a varchar in style 101 (mm/dd/yyyy) to drop the time;
-- style 112 (yyyymmdd) would be the language-independent choice
set @dt2 = CONVERT(datetime, CONVERT(varchar(10), @dt1, 101))

-- casting to date simply truncates the time part
set @dt3 = cast(@dt1 as date)

select @dt1, @dt2, @dt3


The @dt3 variable uses a new type, date, which is available in SQL Server 2008 and higher. If you can't use that, use the @dt2 approach.
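If you are stuck on SQL Server 2005 and want to avoid the round trip through a string, there is also the well-known DATEADD/DATEDIFF trick. A minimal sketch (just an alternative, not part of the original question):

declare @dt datetime
set @dt = GETDATE()

-- DATEDIFF counts the whole days between day 0 (1900-01-01) and @dt;
-- DATEADD then adds that number of days back onto day 0, which lands
-- exactly on midnight of the same day, without any varchar conversion.
select DATEADD(day, DATEDIFF(day, 0, @dt), 0)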

 

Thursday, December 2, 2010

String Concatenation in SQL Server. Concat vs REPLACE

Every now and then I see some T-SQL code that gives me the feeling something could use a performance upgrade, without being sure the change would actually pay off. I was going through a lot of code today and saw some lines that made me think.


In pseudo code, I had the following line:


set @variable_a = 'First part of a string, with ''' + @variable_b + ''' concatenated to it'

Straightforward: it concatenates a string to another string. How many times do you do that? The code I went through was full of these kinds of concatenations.
Every time I see a string concatenation, I think of the String vs StringBuilder example in .NET code, where the StringBuilder is much faster at concatenating. So maybe a similar general rule exists in SQL Server.

With that in mind, what about the REPLACE function? I use it very often and, to be honest, mainly to make my code more readable. If you think about dynamically built-up SQL, a placeholder that gets replaced reads better than a string concatenation.
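To show what I mean by that readability, here is a small sketch of the placeholder style; the table name and column are made up for the example:

declare @sql nvarchar(max)

-- the template reads as one piece, the values are filled in afterwards;
-- compare that to a chain of quoted fragments glued together with +
set @sql = 'select * from [TABLENAME] where ModifiedDate >= ''[DATE]'''
set @sql = replace(@sql, '[TABLENAME]', 'dbo.Orders')  -- hypothetical table
set @sql = replace(@sql, '[DATE]', '2010-12-01')

print @sql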

So, time to put this to the test and see which David brings down Goliath. I made the following script to see which one is faster and which is slower.

------------------------------------------
-- String CONCATENATION
------------------------------------------
declare @totrow int
      , @teststring varchar(max)
      , @startdate datetime
      , @enddate datetime

set @startdate = getdate()
set @teststring = ''
set @totrow = 1000
while @totrow > 0
begin
 
  set @teststring = @teststring + 'aa'
 
  set @totrow = @totrow - 1
end
set @enddate = getdate()
print 'String CONCATENATION : ' +
        cast(datediff(ms, @startdate, @enddate) as varchar(10)) +
        ' ms'
------------------------------------------
-- String REPLACE
------------------------------------------
set @startdate = getdate()
set @teststring = '[A]'
set @totrow = 1000
while @totrow > 0
begin
  set @teststring = replace(@teststring, '[A]', 'aa[A]')
  set @totrow = @totrow - 1
end
set @teststring = replace(@teststring, '[A]', '')
set @enddate = getdate()
print 'String REPLACE       : ' +
        cast(datediff(ms, @startdate, @enddate) as varchar(10)) +
        ' ms'


The part that changes between the two blocks of code is the REPLACE functionality. When I run this code, the first part takes approximately 0 ms and the REPLACE part approximately 70 ms. And that is with ONLY 1000 replacements. Try to guess what happens when we give the total rows variable a value of 10000: the concatenation part then uses 230 ms, while the REPLACE takes up to 5100 ms to process.

This makes sense once you realize what each iteration does: the concatenation only appends to the end of the string, while REPLACE has to scan the entire, ever-growing string on every pass. So, next time you use the REPLACE function, use it wisely. At low volumes it is fine; at higher volumes it can end up in needless performance losses.
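If you like the readability of placeholders, one way to keep it without paying the per-iteration price is to only append inside the loop and do a single REPLACE at the end. A minimal sketch (my own variation on the benchmark above):

declare @teststring varchar(max)
      , @totrow int

set @teststring = ''
set @totrow = 10000
while @totrow > 0
begin
  -- the loop only appends; no scanning of the full string here
  set @teststring = @teststring + '[A]'
  set @totrow = @totrow - 1
end

-- one single pass over the final string instead of 10000 passes
set @teststring = replace(@teststring, '[A]', 'aa')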

Thursday, September 16, 2010

Fragment an Index?

Some time ago I wrote a script for defragmenting indexes on SQL Server; this one will do the trick too! The old script looked at the sys.indexes table, which meant the information about fragmentation levels was not being used. The script below does use it.


I have set the maximum percentage of fragmentation to 30%, which is low enough. Pushing for a lower fragmentation percentage could be useful maybe once or twice on a certain index, but for the most part it would only cost time and wouldn't give your indexes that much of a boost.


I suggest running this script and using the outcome in the generated SQL columns at least once a week. The first time it might hurt a bit, especially on large tables. I had to use this one today on a table of 10,000,000 records and it helped a lot in performance gains, and only cost about 30 minutes or so.


When you don't want to run ALL the generated SQL statements, but only the ones with a high level of fragmentation, look at the columns 'page_count' and 'avg_fragmentation_in_percent'. The higher both are, the more you want to defragment those indexes. Some of the indexes I had to defragment today were 50,000 pages with 98% fragmentation (!?).


Like I said, you have to run the outcome of this query; preferably the 'statement_of_choice' column should be used. Fragmentation levels below 30% will automatically result in a reorganize instead of a rebuild. That column could be used in my script from a while ago, where the statements were executed.




declare @database_id int

select @database_id = database_id
from sys.databases
where name = db_name()

select distinct
        o.name as table_name
      , i.name as index_name
      , s.avg_fragmentation_in_percent
      , s.avg_fragment_size_in_pages
      , s.page_count
        -- below 31% a reorganize is enough, above that a rebuild pays off
      , case
          when s.avg_fragmentation_in_percent < 31 then 'reorganize'
          else 'rebuild'
        end as choice
      , case
          when s.avg_fragmentation_in_percent < 31
            then replace(replace('alter index [[INDEXNAME]] on [OBJECTNAME] reorganize', '[INDEXNAME]', i.name), '[OBJECTNAME]', o.name)
          else replace(replace('alter index [[INDEXNAME]] on [OBJECTNAME] rebuild', '[INDEXNAME]', i.name), '[OBJECTNAME]', o.name)
        end as statement_of_choice
      , replace(replace('alter index [[INDEXNAME]] on [OBJECTNAME] rebuild', '[INDEXNAME]', i.name), '[OBJECTNAME]', o.name) as statement_rebuild
      , replace(replace('alter index [[INDEXNAME]] on [OBJECTNAME] reorganize', '[INDEXNAME]', i.name), '[OBJECTNAME]', o.name) as statement_reorganize
        -- REPLACE does not accept an int argument, so the database id needs a cast
      , replace(replace(replace('dbcc indexdefrag([DATABASE_ID], ''[OBJECTNAME]'', ''[[INDEXNAME]]'')'
          , '[DATABASE_ID]', cast(@database_id as varchar(10)))
          , '[INDEXNAME]', i.name)
          , '[OBJECTNAME]', o.name) as statement_defrag
from sys.dm_db_index_physical_stats(@database_id, null, null, null, null) s
inner join sys.tables o
        on o.object_id = s.object_id
inner join sys.indexes i
        on i.object_id = o.object_id
       and i.index_id = s.index_id  -- also match on index_id, or every stats row pairs with every index of the table
where i.name is not null            -- heaps have no index name, skip them
--order by s.page_count desc, s.avg_fragmentation_in_percent desc
order by s.avg_fragmentation_in_percent, table_name
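If you want to execute the outcome directly instead of copy-pasting the statements, a cursor over the result set does the trick. A minimal sketch, assuming you first capture the query above in a temp table called #frag (my own naming) that holds the page_count and statement_of_choice columns:

declare @sql nvarchar(max)

declare stmt_cursor cursor local fast_forward for
  select statement_of_choice
  from   #frag
  where  page_count > 1000  -- skip the small indexes, they are not worth the effort

open stmt_cursor
fetch next from stmt_cursor into @sql
while @@fetch_status = 0
begin
  exec sp_executesql @sql
  fetch next from stmt_cursor into @sql
end
close stmt_cursor
deallocate stmt_cursor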




Tuesday, June 1, 2010

WCF Hosted in IIS cannot access a SQL Server

It looks like a WCF web service that is hosted in IIS makes a remote connection to SQL Server, even when the connection appears to be local.
http://social.msdn.microsoft.com/forums/en-US/wcf/thread/2fdc2490-f29d-41df-bc47-1ff848871996

Probably I have to enable the TCP/IP protocol and allow it through the firewall. To configure SQL Server for the various protocols, see this link:
http://blog.sqlauthority.com/2009/05/21/sql-server-fix-error-provider-named-pipes-provider-error-40-could-not-open-a-connection-to-sql-server-microsoft-sql-server-error/
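A quick way to see which protocol a session actually uses is the sys.dm_exec_connections DMV; shared memory means a truly local connection, TCP or named pipes means SQL Server treats it as the remote code path:

-- net_transport shows 'Shared memory', 'TCP' or 'Named pipe' for this session
select session_id
     , net_transport
     , client_net_address
from   sys.dm_exec_connections
where  session_id = @@spid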

Thursday, May 27, 2010

Execute a query on ALL databases? sp_MsForEachDB might do the trick!

Ever wanted to execute a query on ALL databases available on one server?


exec sp_msforeachdb @command1 = '
select  *
from    [?].sys.database_files
order by name'

The question mark will be replaced with the database name, so generating some SQL for each database that exists is easy. I used this one to shrink some large transaction logs:



if object_id('##tmp_sqlstatements') is not null
begin
  drop table ##tmp_sqlstatements
end
create table ##tmp_sqlstatements (
  Id int identity(1, 1)
, SqlStatement nvarchar(max)
)
go

-- collect a shrinkfile statement for the log file(s) of every database
exec sp_msforeachdb @command1 = '
insert
into    ##tmp_sqlstatements
select  ''use [?];
          dbcc shrinkfile('''''' + name + '''''')''
from    [?].sys.database_files
where   type_desc = ''log''
order by name'


declare @totrow int
      , @currow int
      , @nsql nvarchar(max)
select  @totrow = count(Id)
      , @currow = 1
from    ##tmp_sqlstatements

while @currow <= @totrow
begin
  select  @nsql = SqlStatement
  from    ##tmp_sqlstatements
  where   Id = @currow
 
  exec sp_executesql @nsql

  set @currow = @currow + 1
end

drop table ##tmp_sqlstatements

It executes all the statements at once, so it can be used as part of a maintenance plan. If freeing up disk space is the reason for executing this script, make sure you set the recovery model to simple. But of course, only when you don't need the transaction logs!!

This is very easy to do:
select  replace('alter database [[NAME]] set recovery simple ', '[NAME]', name)
from    sys.databases
where   recovery_model_desc = 'full'
order by name

This will give a result set of SQL statements to execute.
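Don't forget that a database you switch to simple stays that way. Once the disk pressure is gone, you probably want to switch it back and restart the backup chain; a minimal sketch, with a made-up database name and backup path:

-- hypothetical database name and backup location
alter database [MyDatabase] set recovery full

-- the log backup chain only starts again after a new full backup
backup database [MyDatabase]
to disk = 'C:\Backups\MyDatabase.bak'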