Archive

Posts Tagged ‘sql server’

Know the cost of SQL Server 2012’s AlwaysOn Availability Groups

May 30, 2012 Leave a comment

SQL Server 2012 takes Database Mirroring to the next level. Many of us have used SQL Server 2008’s Database Mirroring as part of a disaster recovery (DR) solution, and it serves well in that capacity (unless you try to use synchronous mirroring on a high-usage database across to another data center – trust me, don’t do that!).

Some of the great improvements with the touted replacement to Database Mirroring:

  • Can have up to 4 replicas (not just one)
  • Can failover multiple databases as a unit (not just one)
  • Can read from the replicas

However, don’t just blindly go in for this model without counting the cost. There are some requirements for this new technology that impact your licensing cost, and possibly administration cost. Here they are:

1) The feature requires Windows failover clustering – this means you’ll need the Enterprise edition or higher of Windows Server. All availability replicas run as standalone instances on nodes of the same Windows cluster, and a cluster requires a minimum of two servers. NOTE that there is no requirement to install SQL Server itself as a clustered application.

2) The feature requires SQL Server Enterprise Edition. See this article for a comparison of the various editions: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-editions.aspx
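If you’re not sure which edition a given instance is running, a quick check via the standard SERVERPROPERTY function (works on any reasonably recent instance) will tell you:

```sql
-- Check edition and version of the connected instance;
-- AlwaysOn Availability Groups require Enterprise Edition.
SELECT SERVERPROPERTY('Edition')         AS Edition,
       SERVERPROPERTY('ProductVersion')  AS ProductVersion;
```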

Now the question arises: Is Database Mirroring still available as an option in SQL Server 2012? The answer is yes. But beware that AlwaysOn is the replacement and Database Mirroring will likely disappear in the next release. This is explained in the AlwaysOn FAQ here: http://msdn.microsoft.com/en-us/sqlserver/gg508768#OtherEnhancements

My suggestion is to give careful thought to your DR design BEFORE you make firm decisions on your hardware and software editions. You don’t want to be painted into a corner with Standard edition if you could really benefit from features like this that require Enterprise Edition. On the flip side, if Database Mirroring is sufficient for your needs, then you might be just fine with Standard Edition.

Reference for this post:

Introducing SQL Server 2012’s AlwaysOn Availability Groups, SQL Server Magazine, January 2012

http://www.sqlmag.com/article/sqlserverdenali/sql-server-2012-configure-alwayson-141127

Should you turn off Autogrowth for tempdb data files in a production system?

July 6, 2011 Leave a comment

Yes, if you want consistent and optimal performance.

And, note that we are talking about the production system (and any other test system where you test system performance).

According to Microsoft tempdb optimization recommendations, you should have “one tempdb data file for each CPU on the server” (cores count as CPUs), and you should “make each data file the same size”.  This recommendation is repeated by experts in the field.

So to ensure the multiple tempdb data files are always the same size, you must disable any Autogrowth settings.  Preallocate the files to meet the demands of your largest workload and then monitor usage to make sure you have plenty of reserve space.
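As a sketch of what that preallocation might look like (the logical file names and sizes below are hypothetical – adjust them to your own tempdb layout and workload):

```sql
-- Preallocate equally sized tempdb data files and disable autogrowth.
-- Logical file names and sizes are examples only.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 4096MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 4096MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev3, SIZE = 4096MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev4, SIZE = 4096MB, FILEGROWTH = 0);
```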

The danger of leaving Autogrowth on is that you rely on it as a crutch to ensure your files are large enough.  Then when one file autogrows, your files are no longer equal in size, you are not alerted, and your system performance is compromised.

Granted, the Instant File Initialization feature introduced with Windows Server 2003 and SQL Server 2005 makes data file growth much faster, but there is still a window during which operations pause.

Note this quote from Microsoft in Working with tempdb in SQL Server 2005:

By default, tempdb is set to auto grow; once the space in the file is full, the file is allowed to grow 10% of the initial file size. However, relying on auto grow to manage file growth causes all applications or internal operations that use tempdb to pause. Instant data file initialization (see Improvements in SQL Server 2005) shortens this pause, improves performance, and allows files to grow much faster as compared to SQL Server 2000. It is important to note that auto grow can lead to data fragmentation and should be used as a last resort.

-Krip

Sequences in Denali

June 28, 2011 Leave a comment

Great introductory article on Sequences in the next version of SQL Server, code-named Denali.

Finally a way to generate more than one sequential value per table (identity columns are limited to one).  And of course, sequences are fully table-independent.
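A minimal sketch of the syntax (the sequence name here is made up; see Itzik’s article below for the full feature set):

```sql
-- Create a sequence, independent of any table.
CREATE SEQUENCE dbo.InvoiceNumbers AS INT
    START WITH 1000
    INCREMENT BY 1;

-- NEXT VALUE FOR can be used in INSERTs, column defaults, and more.
SELECT NEXT VALUE FOR dbo.InvoiceNumbers AS NextInvoiceNo;
```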

Check out Itzik Ben-Gan‘s article.

-Krip

SQLskills’ Blogs

June 22, 2011 Leave a comment

Just added Kimberly and Paul’s blogs to my blogroll (see sidebar for links).

They provide fantastic tips in SQL Server Magazine.

For example, they share plenty of practical tips in the May 2011 issue alone.

Follow them!

-Krip


How to work around “Saving changes is not permitted” prompt in SSMS

June 5, 2011 Leave a comment

In a recent update to SQL Server Management Studio (I believe SSMS for SQL Server 2008 R2), changes made via the table designer that require a table to be dropped and recreated result in the following message:

“Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can’t be re-created or enabled the option Prevent saving changes that require the table to be re-created.”

To go back to the legacy behaviour, where SSMS will just drop and re-create the table for you, go to Tools -> Options -> Designers and untick the box “Prevent saving changes that require table re-creation”.
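Alternatively, if you would rather leave that safety option on, many changes the designer handles by drop-and-recreate are legal as in-place ALTERs in T-SQL (the table and column names below are hypothetical):

```sql
-- Widen a column in place rather than letting the designer
-- drop and re-create the whole table.
ALTER TABLE dbo.Customers
    ALTER COLUMN CustomerName NVARCHAR(200) NOT NULL;
```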

-Krip



How to protect against SQL injection attacks

June 3, 2011 Leave a comment

SQL injection is the act of injecting characters (e.g. fragments of SQL) into a SQL statement, causing it to perform an unintended (e.g. malicious) action.  It most often occurs when criteria fed to the query are provided by user input (or are editable by a user – say the querystring in a URL or a field on a web form).

A number of things combine to make this type of attack possible, including: 1) insecure code, and 2) unnecessarily privileged database access accounts.

Things you can do to protect against SQL injection:

  1. DO NOT execute any T-SQL built by concatenating values, particularly where user input is involved
  2. Use parameterized SQL instead – this ensures that values are escaped, so a T-SQL statement cannot be terminated early and a malicious one appended
  3. Use stored procedures where the T-SQL is static and criteria values are passed in as parameters (in conjunction with point 2 above)
  4. Do not grant the database accounts used by your application unrestricted access to databases.  DO NOT grant them DBO rights in a production OR test environment.  This is a HUGE BENEFIT of stored procedures: they can be granted execute rights, and SQL Server’s ownership chaining will automatically allow their operations to run against objects (e.g. tables) without explicit rights on those objects.  Grant rights directly on tables only if application code must run SQL directly, and even then pay attention to SELECT vs UPDATE rights.
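To illustrate point 2 from the T-SQL side, here is the difference between a concatenated statement and a parameterized one (the table and parameter names are invented for the example):

```sql
-- VULNERABLE: user input concatenated straight into the statement.
-- If @UserInput contained '; DROP TABLE dbo.Users; -- the damage would be done.
DECLARE @UserInput NVARCHAR(50) = N'smith';
DECLARE @BadSql NVARCHAR(MAX) =
    N'SELECT * FROM dbo.Users WHERE LastName = ''' + @UserInput + N'''';
-- EXEC (@BadSql);  -- don't do this

-- SAFE: the value travels as a typed parameter, never as SQL text.
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE LastName = @LastName',
    N'@LastName NVARCHAR(50)',
    @LastName = @UserInput;
```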

References:

  1. How To: Protect from SQL Injection in ASP.NET (MSDN article from Microsoft Patterns & Practices)
  2. SQL Injection (MSDN article with great tips on protecting against this)

-Krip

Performance of in-line SQL vs. stored procedures

May 27, 2011 Leave a comment

Just been doing some load tests with .NET code against SQL Server 2008 R2 Developer edition via Data Access Application Block in Enterprise Library.

I insert 5,000 records into a table with a primary key and clustered index on an Identity column.  I’m setting three nvarchar fields and one smallint field, and then returning the Identity value of the newly added record.  The in-line SQL method calls SELECT SCOPE_IDENTITY() as the second statement in the batch and uses the ExecuteScalar() method, while the stored proc path merely returns the same value and is called via ExecuteNonQuery.  With the caching detailed below, I achieve almost 1,000 records per second on my developer machine.
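For reference, the two shapes being compared look roughly like this (the table, proc, and parameter names are made up for the example):

```sql
-- In-line SQL path: insert plus SCOPE_IDENTITY() in one batch,
-- read back on the client via ExecuteScalar().
INSERT INTO dbo.TestTable (Field1, Field2, Field3, SmallField)
VALUES (@Field1, @Field2, @Field3, @SmallField);
SELECT SCOPE_IDENTITY();
GO

-- Stored proc path: the new identity comes back as the return value,
-- read on the client via ExecuteNonQuery plus a return-value parameter.
CREATE PROCEDURE dbo.InsertTestRow
    @Field1 NVARCHAR(50), @Field2 NVARCHAR(50),
    @Field3 NVARCHAR(50), @SmallField SMALLINT
AS
BEGIN
    INSERT INTO dbo.TestTable (Field1, Field2, Field3, SmallField)
    VALUES (@Field1, @Field2, @Field3, @SmallField);
    RETURN SCOPE_IDENTITY();
END
```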

I’m afraid I can’t help the stored procedure vs. in-line SQL debate as sometimes the in-line SQL is quicker and other times the stored proc is quicker!

However to get the greatest throughput (on both in-line and stored procs) be sure to cache (or reuse) the following:

  • Database object (from DatabaseFactory.CreateDatabase) – although caching this didn’t make a huge difference
  • DbCommand (needed for next item!)
  • ALL DbParameters – merely overwrite the values on subsequent calls instead of calling Database.AddInParameter

-Krip

Minimum configuration for data access block v5

May 27, 2011 1 comment

If you’re using the Data Access Block from Microsoft’s Enterprise Library (v5, e.g. 5.0.414.0), here is the minimum configuration you’ll need to be able to do data access:

<configuration>
  <configSections>
    <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" />
  </configSections>
  <dataConfiguration defaultDatabase="myDbConn" />
  <connectionStrings>
    <add name="myDbConn" connectionString="Data Source=(local);Integrated Security=SSPI;Initial Catalog=myDatabase" providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>

Without this you’re liable to get the error: “The type Database cannot be constructed. You must configure the container to supply this value.”

-Krip

How to save disk space on dev SQL Servers

May 23, 2011 Leave a comment

One of the first things I do on any development installation of SQL Server is change the Recovery Model on the “model” System Database from “Full” to “Simple”.  This ensures that new databases created carry the Simple model which means the transaction log will be truncated automatically.  Just checked a new SQL Server 2008 R2 Developer install and it was Full – I would have thought that Microsoft would default to Simple on dev installs but I guess not.
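The change itself is a one-liner if you prefer T-SQL over the Management Studio dialog:

```sql
-- Make new databases default to the Simple recovery model.
ALTER DATABASE model SET RECOVERY SIMPLE;

-- Verify the change.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model';
```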

One other tip: If you restore a database – say you bring back a live or test copy onto your dev environment for analysis / debugging – it will keep the Recovery Model of the original – so be sure to change to Simple if you don’t want a large transaction log building up during development!

-Krip

BizTalk Server Tuning Tips

May 22, 2011 Leave a comment

Here are a number of things to pay attention to in order to get the most out of your BizTalk system (many of these are for SQL Server in general):

  • Place data and log files on separate disks
  • Create one tempdb data file per CPU (including cores); ensure each is exactly the same size; do not exceed 8 (see Optimizing tempdb Performance for more details)
  • Use dedicated SQL Servers
  • Be careful of constant database auto-growth – set to sensible values
  • Ensure all databases are getting backed up (which reduces transaction log size) – use the generated SQL Agent job to backup the BizTalk databases – it’s the only supported means of BizTalk database backups
  • Ensure all BizTalk SQL Agent jobs are running successfully (one runs in a continuous loop by design and never terminates, so be aware of that; another performs the backups, so if you are doing your own, skip that one)
  • Microsoft does not want you changing any of their customized settings on the databases made during the BizTalk install – this includes any schema changes to the MsgBox database including indexes (See What you can and can’t do with the MsgBox database)
  • Separate the MsgBox and Tracking database and log files onto separate disks (so 4 disks – so I hope you’re reading this before you’ve committed to your kit!)
  • Change the DTA Purge and Archive job to just purge if you don’t need archiving (see documentation below)
  • Place sending, receiving, processing (i.e. orchestrations), and tracking into separate BizTalk hosts – so a minimum of 4 – and note again that tracking gets a host of its own
  • If on a 32-bit system and getting out of memory (host restarted) error messages, try the /3GB OS startup switch – this gives each host 3GB instead of 2 (I’ve used it with great success)
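As a quick sanity check on the tempdb advice above, something like the following (standard DMVs and catalog views, available from SQL Server 2005 onwards) shows the logical CPU count alongside the current tempdb data files:

```sql
-- Compare the logical CPU count with the number and size
-- of tempdb data files (should be one equal-sized file per CPU, max 8).
SELECT cpu_count FROM sys.dm_os_sys_info;

SELECT name, size * 8 / 1024 AS SizeMB, growth
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```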

Finally a couple of great resources to check out.

First I highly recommend the BizTalk Server Operations Guide (from which many of the above suggestions were derived).  It’s chock full of goodness.  There’s one for each version, but the 2010 edition may be sufficient.

Also an absolutely fantastic tool to perform an automatic check of the health and configuration of your system and provide tuning tips is MsgBoxViewer.  I think it’s completely misnamed as it does a lot more than just look at the MsgBox database.

-Krip