Archive

Posts Tagged ‘sql server’

Know the cost of SQL Server 2012’s AlwaysOn Availability Groups

May 30, 2012 Leave a comment

SQL Server 2012 takes Database Mirroring to the next level. Many of us have used SQL Server 2008’s Database Mirroring as part of a disaster recovery (DR) solution, and it serves well in that capacity (unless you try to use synchronous mirroring on a high-usage database across to another data center – trust me, don’t do that!).

Some of the great improvements in the touted replacement for Database Mirroring:

  • Can have up to four secondary replicas (not just one)
  • Can failover multiple databases as a unit (not just one)
  • Can read from the replicas
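To give a feel for how those pieces fit together, here is a minimal sketch of creating an availability group in SQL Server 2012 – the group name, database names, server names, and endpoint URLs are all hypothetical, and in practice the endpoints and the failover cluster must already be in place:

```sql
-- Sketch only: two databases fail over as a unit, and the synchronous
-- secondary on SQLNODE2 is opened up for read-only connections.
CREATE AVAILABILITY GROUP MyAG
FOR DATABASE SalesDB, HRDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```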

However, don’t just blindly go in for this model without counting the cost. There are some requirements for this new technology that impact your licensing cost, and possibly administration cost. Here they are:

1) The feature requires Windows Server Failover Clustering – this means you’ll need the Enterprise edition or higher of Windows Server. All availability replicas must be hosted on nodes of the same failover cluster, so you’re talking at least two servers, and more for each additional replica. NOTE that there is no requirement to install SQL Server as a clustered instance.

2) The feature requires SQL Server Enterprise Edition. See this article for a comparison of the various editions: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-editions.aspx

Now the question arises: Is Database Mirroring still available as an option in SQL Server 2012? The answer is yes. But beware that AlwaysOn is the replacement and Database Mirroring will likely disappear in the next release. This is explained in the AlwaysOn FAQ here: http://msdn.microsoft.com/en-us/sqlserver/gg508768#OtherEnhancements

My suggestion is to give careful thought to your DR design BEFORE you make firm decisions on your hardware and software editions. You don’t want to be painted into a corner with Standard Edition if you could really benefit from features like this, which require Enterprise Edition. On the flip side, if Database Mirroring is sufficient for your needs, then you might be just fine with Standard Edition.

Reference for this post:

Introducing SQL Server 2012’s AlwaysOn Availability Groups, SQL Server Magazine, January 2012

http://www.sqlmag.com/article/sqlserverdenali/sql-server-2012-configure-alwayson-141127


Should you turn off Autogrowth for tempdb data files in a production system?

July 6, 2011 Leave a comment

Yes, if you want consistent and optimal performance.

Note that we are talking about the production system (and any other test system where you measure system performance).

According to Microsoft tempdb optimization recommendations, you should have “one tempdb data file for each CPU on the server” (cores count as CPUs), and you should “make each data file the same size”.  This recommendation is repeated by experts in the field.

So to ensure the multiple tempdb data files are always the same size, you must disable any Autogrowth settings.  Preallocate the files to meet the demands of your largest workload and then monitor usage to make sure you have plenty of reserve space.
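As a sketch of what that looks like on, say, a four-core server with a 4 GB tempdb budget (the sizes, drive letter, and logical file names beyond the default tempdev are hypothetical):

```sql
-- Preallocate four equally sized data files and turn Autogrowth off
-- (FILEGROWTH = 0) so they stay equal under load.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1024MB, FILEGROWTH = 0);
```

A restart of the SQL Server service is needed before a size reduction takes effect, though increases and added files apply immediately.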

The danger of leaving Autogrowth on is that you rely on it as a crutch to ensure your files are large enough. Then, when the files autogrow, you are not alerted, the file sizes drift apart, and your system performance is compromised.

Granted, while the Instant File Initialization feature of Windows Server 2003 and SQL Server 2005 makes for much faster file growth, there is still a window where operations pause.

Note this quote from Microsoft in Working with tempdb in SQL Server 2005:

By default, tempdb is set to auto grow; once the space in the file is full, the file is allowed to grow 10% of the initial file size. However, relying on auto grow to manage file growth causes all applications or internal operations that use tempdb to pause. Instant data file initialization (see Improvements in SQL Server 2005) shortens this pause, improves performance, and allows files to grow much faster as compared to SQL Server 2000. It is important to note that auto grow can lead to data fragmentation and should be used as a last resort.

-Krip

Sequences in Denali

June 28, 2011 Leave a comment

Great introductory article on Sequences in the next version of SQL Server, code-named Denali.

Finally, a way to use more than one auto-generated sequence with a single table (IDENTITY allows only one column per table). And of course, sequences are fully independent of any table.
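A minimal sketch of the idea (the sequence, table, and column names are hypothetical):

```sql
-- Two independent sequences feeding two columns of the same table,
-- something a single IDENTITY column cannot do.
CREATE SEQUENCE dbo.InvoiceSeq AS INT START WITH 1    INCREMENT BY 1;
CREATE SEQUENCE dbo.TicketSeq  AS INT START WITH 1000 INCREMENT BY 10;

CREATE TABLE dbo.Orders
(
    InvoiceNo INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.InvoiceSeq),
    TicketNo  INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.TicketSeq),
    Customer  NVARCHAR(100) NOT NULL
);

INSERT INTO dbo.Orders (Customer) VALUES (N'Acme');
```

Because the sequences live outside the table, the same NEXT VALUE FOR expression can also be used in plain SELECT statements or shared across several tables.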

Check out Itzik Ben-Gan‘s article.

-Krip

SQLskills’ Blogs

June 22, 2011 Leave a comment

Just added Kimberly and Paul’s blogs to my blogroll (see sidebar for links).

They provide fantastic tips in SQL Server Magazine.

For example, the May 2011 issue touches on several practical items.

Follow them!

-Krip


How to work around “Saving changes is not permitted” prompt in SSMS

June 5, 2011 Leave a comment

In a recent update to SQL Server Management Studio (I believe SSMS for SQL Server 2008 R2), changes made via the table designer that require a table to be dropped and recreated result in the following message:

“Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can’t be re-created or enabled the option Prevent saving changes that require the table to be re-created.”

To go back to legacy behaviour where SSMS will just drop and re-create the table for you, go to Tools–>Options–>Designers and untick the box “Prevent saving changes that require table re-creation”.

-Krip

source


How to protect against SQL injection attacks

June 3, 2011 Leave a comment

SQL injection is the act of injecting characters (e.g. fragments of SQL) into a SQL statement, causing it to perform an unintended (e.g. malicious) action. It most often occurs when criteria fed to the query are provided by user input (or are editable by a user – say, the querystring in a URL or a field on a web form).

A number of things contribute to making this type of attack possible, including: 1) insecure code, and 2) unnecessarily high-privileged database access accounts.

Things you can do to protect against SQL injection:

  1. DO NOT execute any T-SQL by concatenating values, particularly where user input is involved
  2. Use parameterized SQL instead – values travel as typed parameters rather than being spliced into the command text, so they can never terminate the statement or append a new one
  3. Use stored procedures where the T-SQL is static and criteria values are passed in as parameters (then use in conjunction with point 2 above)
  4. Do not grant the database accounts used by your application unrestricted access to databases.  DO NOT grant them DBO rights in a production OR test environment.  This is a HUGE BENEFIT of stored procedures: they can be granted execute rights, and SQL Server’s ownership-chaining mechanism will automatically allow their operations to run against objects (e.g. tables) without explicit rights on those objects.  Grant rights directly on tables only if application code must run SQL directly, and even then pay attention to SELECT vs. UPDATE rights.
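To make points 1 and 2 concrete, here is a sketch of the contrast in T-SQL (the table, column, and variable names are hypothetical):

```sql
-- A hostile input value: it closes the string literal, appends a
-- destructive statement, and comments out the rest of the command.
DECLARE @UserInput NVARCHAR(50) = N'x''; DROP TABLE dbo.Users; --';

-- VULNERABLE: user input concatenated into the command text.
DECLARE @Unsafe NVARCHAR(MAX) =
    N'SELECT * FROM dbo.Users WHERE LoginName = ''' + @UserInput + N'''';
-- EXEC (@Unsafe);  -- don't run this: it would execute the DROP TABLE

-- SAFE: the value travels as a typed parameter, never as command text,
-- so the quotes and semicolon are just data in the comparison.
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE LoginName = @Login',
    N'@Login NVARCHAR(50)',
    @Login = @UserInput;
```

The same parameter-versus-concatenation distinction applies in client code: build commands with parameter objects, never with string concatenation.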

References:

  1. How To: Protect from SQL Injection in ASP.NET (MSDN article from Microsoft Patterns & Practices)
  2. SQL Injection (MSDN article with great tips on protecting against this)

-Krip

Performance of in-line SQL vs. stored procedures

May 27, 2011 Leave a comment

Just been doing some load tests with .NET code against SQL Server 2008 R2 Developer edition via Data Access Application Block in Enterprise Library.

I insert 5,000 records into a table with a primary key on an Identity column and a clustered index on the same.  I’m setting 3 nvarchar fields and one smallint field, then returning the Identity value of the newly added record.  The in-line SQL method calls SELECT SCOPE_IDENTITY() as the second statement in the batch and utilizes the ExecuteScalar() method, while the stored proc path merely returns the same value and is called via the ExecuteNonQuery method.  Using caching as detailed below, I almost achieve 1,000 records per second on my developer machine.
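The two call shapes look roughly like this (the table, column, and procedure names are hypothetical stand-ins for my test schema):

```sql
-- In-line SQL batch: the client reads the identity back via ExecuteScalar().
INSERT INTO dbo.LoadTest (Field1, Field2, Field3, SmallField)
VALUES (@Field1, @Field2, @Field3, @SmallField);
SELECT SCOPE_IDENTITY();
GO

-- Stored procedure path: the identity comes back in an output
-- parameter, so the client only needs ExecuteNonQuery().
CREATE PROCEDURE dbo.InsertLoadTest
    @Field1 NVARCHAR(100), @Field2 NVARCHAR(100),
    @Field3 NVARCHAR(100), @SmallField SMALLINT,
    @NewId INT OUTPUT
AS
BEGIN
    INSERT INTO dbo.LoadTest (Field1, Field2, Field3, SmallField)
    VALUES (@Field1, @Field2, @Field3, @SmallField);
    SET @NewId = SCOPE_IDENTITY();
END;
```

SCOPE_IDENTITY() is used in both paths because, unlike @@IDENTITY, it ignores identities generated by triggers firing on the insert.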

I’m afraid I can’t settle the stored procedure vs. in-line SQL debate, as sometimes the in-line SQL is quicker and other times the stored proc is!

However to get the greatest throughput (on both in-line and stored procs) be sure to cache (or reuse) the following:

  • Database object (from DatabaseFactory.CreateDatabase) – although caching this didn’t make a huge difference
  • DbCommand (needed for next item!)
  • ALL DbParameters – merely overwrite the values on subsequent calls instead of calling Database.AddInParameter

-Krip