Monday, December 27, 2010

Know Your Data

Knowing your data might be the most important skill of the new database administrator (DBA). Just as a marketing person needs to know their customers and a CEO needs to know their market, a DBA needs to truly understand their data. This article is about how knowing your data becomes even more important with cloud databases like SQL Azure Database and NoSQL-style data stores like Cassandra and Windows Azure Storage.

SQL Server

Using a scale-up approach with an on-premise SQL Server database, the SQL Server DBA needed:

  • A good working understanding of the relationships in the data, i.e. how tables relate to each other. This allowed the DBA to develop foreign key to primary key relationships, one of the mantras of a relational database.
  • An understanding of how data was inserted and updated, which allowed the DBA to optimize queries for locking, partitions for performance, and transaction logs for growth.
  • An understanding of how data was queried from the database and how paging worked, which allowed the DBA to tune the database with covered indexes and denormalization for better performance.
  • A sense of how much data was being added over time, how to scale up the database to grow with the data, and how to warehouse that data.

SQL Azure

If you are designing your SQL Azure data store for growth, you need to design with a scale-out approach. For more about scaling out your SQL Azure database read: Scale Out Your SQL Azure Database. Scaling out divides your data across multiple databases, a concept that SQL Azure calls federation – federation is SQL Azure's take on sharding. For more about SQL Azure Federation read: Intro to SQL Azure Federations: SQL Azure Federations Overview.


Federating your data breaks transaction boundaries and pushes SQL Azure toward a NoSQL database structure. The reason I call it a step toward NoSQL is that SQL Azure doesn’t allow transactions that span multiple databases. Transactions are confined to a single database in part because the databases exist on different hardware; if a transaction had to travel the network there would be considerable performance degradation. Because SQL Azure runs on commodity hardware, the maximum database size is 50 Gigabytes. If you need a database bigger than 50 Gigabytes, you need to federate your data across multiple databases and across transaction boundaries. You lose the ACID properties of the database – closer to NoSQL than to a relational database.

The DBA needs to understand more about the data in order to partition or federate it correctly. Specifically, the DBA needs to know:

  • How to distribute the data into acceptable transaction boundaries.
  • How to find the natural dividing lines of the data so that you can partition it across multiple databases.

Transaction Boundaries

Every query on SQL Azure exists within a transaction context – a principle of ACID relational databases. A transaction is treated in a coherent and reliable way, independent of other transactions. Just like SQL Server, SQL Azure is capable of creating a transaction context for any statement that you execute against the database, and it can roll back any transaction where a failure prevents the whole transaction from completing.


Regardless of SQL Azure transaction capabilities, typically very few of the statements executed against the database need to have a transaction context. For example, when selecting data with a forward-only, read-only cursor that only row-locks – the transaction context is restricted to the row that is being read. However, what if that table is only updated when the application is offline? The table doesn’t require any locking for read – since there are never any writes. The database doesn’t know this, only the application – or more specifically the application developer.


The premise of the NoSQL database is that the application developer knows every case where they need a transaction context and every case where they do not. Since true transaction needs are typically very low, the application developer programs all transactions into the client code – and doesn’t require a database that supports transactions. They might even program the rollback code.

Since the NoSQL developer knows the data (some might say better than the relational database developer), they can program their application to use multiple databases to store the data – in other words they know how to scale out using homegrown transactions on the client application.


So they can use commodity hardware in the cloud which is significantly more cost effective than on-premise SQL installations managing the same data volume.

Dividing Lines

If the application is highly dependent on transactions, the developer can keep all transactions within a single database by partitioning the data on transaction boundaries. For example, if the application supports software as a service (SaaS) used by multiple users in such a way that no user modifies another user’s data – the user is the transaction boundary. All the data for a user is stored in the same database. This way all reads and writes for that user happen together in a single database and never span databases. Each database holds as many users as it can support; however, no two databases hold data for the same user.
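A rough sketch of this idea in client code – the database names and the character-sum routing below are my own illustration, not part of SQL Azure – is that the application picks the user's database before opening a connection:

```csharp
using System;

// Routes each user to exactly one database so that all reads and
// writes for that user stay inside a single transaction boundary.
public static class UserShardRouter
{
    // Hypothetical federated SQL Azure member databases.
    private static readonly string[] Databases =
    {
        "UserData0", "UserData1", "UserData2", "UserData3"
    };

    // Deterministic routing: the same user name always maps to the
    // same database. A simple character sum stands in for a real,
    // persisted user-to-database mapping.
    public static string DatabaseFor(string userName)
    {
        int sum = 0;
        foreach (char c in userName)
            sum += c;
        return Databases[sum % Databases.Length];
    }
}
```

Because no transaction ever needs rows from two of these databases, every statement still runs in an ordinary single-database transaction context.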


Knowing your data allows you to “see” the natural transaction boundaries and gives you an edge on scaling out. I foresee the role of the DBA trending away from hardware maintenance and towards database architecture.

Friday, December 24, 2010

Scale Out Your SQL Azure Database


Scaling up an on-premise SQL Server means adding more computing resources to your existing hardware – memory, processing power, or storage – so that your SQL Server can handle more queries, gain performance, or store more data. When you decide to go to the cloud you should realize that SQL Azure doesn’t scale up like SQL Server – it scales out. Scaling out means adding additional machines with the same data and dividing the computation between them. This article will discuss the differences between scaling up and scaling out and how you need to plan for scale out on SQL Azure to gain performance.

Due to the hardware and licensing costs of SQL Server, it is much easier to add resources to existing hardware than it is to deploy an additional replicated SQL Server on new hardware to gain performance.


The problem with scaling out (adding more machines) in an on-premise SQL Server installation is that it adds a degree of complexity to the design that deters DBAs from this path. The DBA now has to consider clustering, deploying a replication schema, etc. It is easier to purchase bigger hardware and redeploy your current databases on it, replacing the existing hardware, than it is to deploy a cluster.


In essence, the SQL Server DBA has been trained to scale up; however, hardware costs prohibit infinite scale up. If you have to double your server capacity (RAM, disk, CPU) by purchasing a new server every time you double your workload, and you can’t put the old server to use, your server costs grow much faster than your workload. In contrast, if you can add an additional server to a SQL Server cluster every time your workload doubles (scaling out), then your expenses grow at the same rate as your demand for resources. This is the beauty of scale out – additional resources cost the same as the current resources – your costs grow linearly with your hardware.


However, most DBAs rely on Moore’s law: hardware speeds, disk storage, and memory capacity double roughly every two years. This means that you can buy double the performance at the same price as your two-year-old servers. So, if your workload doubles every two years, you end up purchasing a server twice as big (to scale up) at the same cost as the old server. In that case, all you have to do is purchase a server that will meet your needs for the next two years – and hope that your capacity needs do not double faster than that.

SQL Azure

Currently, you cannot scale up SQL Azure processing and memory resources – you are allocated a fixed amount of computing and memory resources per database. Through SQL Azure load balancing, you might gain more resources on a particular machine in the data center, but you cannot crack the case and add more RAM to SQL Azure. For more information see: Inside SQL Azure.

You can, however, scale up SQL Azure storage to the limit of 50 Gigabytes. This is the same as increasing the storage system on an on-premise SQL Server. After 50 Gigabytes you need to scale out your storage.


The inability to scale up confuses many people and leads them to think that SQL Azure doesn’t scale at all. It does scale – it scales out. Not only does it scale out, it scales out at a fixed cost: double the resources cost you roughly double the price.

This table will help determine your options for scaling:


                Need Better Performance*       Need More Storage
  SQL Server    Scale Out or Scale Up          Scale Out or Scale Up
  SQL Azure     Scale Out                      Scale Up to 50 Gigabytes, Then Scale Out

* You have other options for better performance than just scaling; you can also optimize your queries. See: Improving Your I/O Performance for more information.

Scaling Out With SQL Azure

In order to scale out with SQL Azure, you need to be able to partition your data, dividing the database across multiple databases. Each of these databases represents a fixed amount of computing and memory resources; the more databases that you divide your data across, the more computing and memory resources you have at your disposal.


Usually scaling involves work that a DBA is used to doing, for example:


  • Adding more RAM.
  • Upgrading the storage system.
  • Clustering two on-premise SQL Servers.
  • Optimizing stored procedures.

However, partitioning your data involves working with the developers that wrote the application that queries the database. Currently, you can’t scale out your SQL Azure database without modifying the client access layer to be aware of the partitioning. This is generally the developer's area, not the DBA's.


Vertical partitioning divides a table into multiple tables that contain fewer columns. For SQL Azure, we take this one step further: the tables holding the data live in different databases. In other words, you scale out your storage, memory and processing power across multiple databases. For more information see: Vertical Partitioning.
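The idea can be sketched as a column-to-database map (every table, column, and database name here is hypothetical):

```csharp
// Vertical partitioning sketch: frequently-read columns live in one
// database, large rarely-read columns in another. The column-to-
// database assignments are illustrative only.
public static class VerticalPartitionMap
{
    public static string DatabaseForColumn(string columnName)
    {
        switch (columnName)
        {
            case "Html":         // large nvarchar(max) column
            case "ExtendedXml":  // large xml column
                return "EventsLargeDb";
            default:             // small, hot columns: ID, Title, StartAt...
                return "EventsCoreDb";
        }
    }
}
```

Reassembling a full logical row then takes one query per database, which is the price of this layout.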


Horizontal partitioning divides a table into multiple tables. Each table then contains the same number of columns, but fewer rows. Like vertical partitioning, horizontal partitioning scales out your storage, memory and processing power across multiple databases. For more information see: Horizontal Partitioning.
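A common way to decide which rows go where is a range on the partition key; a minimal sketch (the boundaries and database names are made up for the example):

```csharp
using System;

// Horizontal partitioning sketch: rows are divided by ranges of the
// partition key, and each range lives in its own database.
public static class RangePartitionMap
{
    // Exclusive upper bound of each partition's key range.
    private static readonly int[] UpperBounds = { 100000, 200000, int.MaxValue };
    private static readonly string[] Databases = { "OrdersDb0", "OrdersDb1", "OrdersDb2" };

    public static string DatabaseForKey(int orderId)
    {
        for (int i = 0; i < UpperBounds.Length; i++)
        {
            if (orderId < UpperBounds[i])
                return Databases[i];
        }
        throw new ArgumentOutOfRangeException("orderId");
    }
}
```

Queries on a single key touch one database; queries spanning ranges have to be fanned out across databases and merged by the client.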


Another option is to shard your database, using multiple databases to store the same data; each database contains an identical copy of your data. With sharding, you scale out the processing power and RAM; however, the amount of data stored stays the same. The benefit of sharding is that your application doesn’t have to make multiple queries to collect all the data – since all the data can be found in any one of the many databases. However, for each additional database you are paying for the redundant data to be stored – paying for data storage to gain performance. For more information see: Sharding with SQL Azure.
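In client code this pattern boils down to two rules – any copy can serve a read, every copy must receive each write. A sketch with hypothetical database names:

```csharp
using System;

// Identical-copies sketch: reads are spread across the copies,
// writes must go to every copy to keep them identical.
public class ReplicatedDatabases
{
    private readonly string[] _copies;
    private readonly Random _random = new Random();

    public ReplicatedDatabases(params string[] copies)
    {
        _copies = copies;
    }

    // Any copy can answer a read; pick one to spread the load.
    public string PickForRead()
    {
        return _copies[_random.Next(_copies.Length)];
    }

    // Every copy receives every write.
    public string[] AllForWrite()
    {
        return _copies;
    }
}
```

The write fan-out is the cost of this approach: each additional copy buys read throughput but multiplies write work and storage.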


In order to scale efficiently, whether with an on-premise SQL Server or SQL Azure in the cloud, you need to be able to scale out. In order to scale out, you need to design databases that scale – which means that the application developers need to be involved in designing a database that can be vertically or horizontally partitioned, or sharded.

Tuesday, December 21, 2010

WP7 Windows Phone 7 Resources for Developers

This is a catchall for any kind of Windows Phone 7 resource for developers. If you have a favorite resource you would like added to the list, add it to the comments and I will update the list.

MSFT Blogs
Windows Phone Developer Blog
Jesse Liberty, Silverlight
Mike Ormond, UK
David Anson
Chris Koenig
Jaime Rodriguez
Jeff Wilcox
Pete Brown
Scott Guthrie
Shawn Oster
James Conard
John Papa

MSFT Sites

App Hub with developer tools
3rd Party Sample Code Repository
Free Frameworks, Tools, and Controls

Channel 9 
Online Training
WP7 Tag Cloud

code samples
class library

Patterns and Practices
Developer Guidance
Touch Enabled Graph Control  (demo video)
Isolated Storage Explorer
Tombstone Helper

PDC 2010 Sessions
Mix 2011 Sessions

Silverlight phone tutorials

Am I missing something? Comment below and I will add it.

Monday, December 13, 2010

Have we forgotten the sport of development?

There is a long-standing pattern with sports teams of all types:

  1. The Team Members
  2. The Team Captain
  3. The Team Coach
  4. The Team Manager

This pattern is well established, and it is well accepted that one bad apple in the above can change a champion team into a loser. Development should learn from this pattern. The following appear to be appropriate mappings by functions and roles served:

  • Team Captain –> Dev Lead. The best team member with leadership ability; the person the team members have confidence in and respect. Someone good at the sport (development).
  • Team Coach –> Architect. A coach plans out and teaches the plays. He makes the call on what should happen on the field.
  • Team Manager –> The Development Manager. He is responsible for getting resources (including new people), dealing with the sponsors, etc.

My general observation is that most development teams cripple themselves by trying to save money by overloading positions. The consequences are the same as for a sports team: poor execution. A team captain on the field in the middle of a game is not able to sit back, observe what is happening, and craft the best strategy; his vision is the immediate now. All of the above need to communicate well with those above and below them for an effective team.


Perhaps it is time that we start pressing for a sports analogy for staffing and funding.

Thursday, December 9, 2010

Running Multiple Threads on Windows Azure

The default worker role created by Visual Studio 2010 generates code that only leverages a single thread in your compute instance. Using that generated code, if you make synchronous network requests to SQL Azure or to a web service (for example via REST), the dedicated core for your instance sits underutilized while it waits for the response from the network. One technique is to use the asynchronous functions in ADO.NET and the HttpWebRequest class to offload the work; for more information about asynchronous calls read: Asynchronous Programming Design Patterns. Another technique, which I will cover in this blog post, is to start up multiple threads, each for a dedicated task. For this purpose I have coded a multi-threaded framework to use in your worker role.


Goals of the framework:

  • Remain true to the design of the RoleEntryPoint class, the main class called by the Windows Azure instance, so that you don’t have to redesign your code.
  • Gracefully deal with unhandled exceptions in the threads.


The RoleEntryPoint class includes methods that are called by Windows Azure when it starts, runs, or stops a web or worker role. You can optionally override these methods to manage role initialization, role shutdown sequences, or the execution thread of the role. A worker role must extend the RoleEntryPoint class. For more information about how the Windows Azure fabric calls the RoleEntryPoint class read: Overview of the Role Lifecycle.


The included framework provides the ThreadedRoleEntryPoint class, which inherits from RoleEntryPoint and is used as a substitute for RoleEntryPoint in WorkerRole.cs. Here is an example of using the framework:


public class WorkerRole : ThreadedRoleEntryPoint
{
    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("Worker Role entry point called", "Information");
    }

    public override bool OnStart()
    {
        List<WorkerEntryPoint> workers = new List<WorkerEntryPoint>();

        workers.Add(new ImageSizer());
        workers.Add(new ImageSizer());
        workers.Add(new ImageSizer());
        workers.Add(new HouseCleaner());
        workers.Add(new TurkHandler());
        workers.Add(new Crawler());
        workers.Add(new Crawler());
        workers.Add(new Crawler());
        workers.Add(new Gardener());
        workers.Add(new Striker());

        return base.OnStart(workers.ToArray());
    }
}


Inside the OnStart() method we create a list of class instances that are passed to the ThreadedRoleEntryPoint class; each is given its own thread. Also note that some of the classes are listed more than once. Listing a class multiple times means identical work is performed simultaneously – it also allows us to balance the work. In the example above, crawling is three times more important than house cleaning.


It would be nice to reuse the RoleEntryPoint class and create subclasses for each thread that we want to start; however, Windows Azure requires that there be one and only one class that subclasses RoleEntryPoint in the cloud project. Because of this restriction I have created an abstract class called WorkerEntryPoint, which all the thread classes must inherit from. This class has the same lifecycle methods as the RoleEntryPoint class:

  • OnStart Method
  • OnStop Method
  • Run Method

To create a threaded worker you override these methods in just the same way as if you were inheriting from the RoleEntryPoint class. However, the only method you have to override is the Run method. Typically it would look something like this:

internal class Striker : WorkerEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // Do Some Work
        }
    }
}


Handling Exceptions

One thing that a Windows Azure worker role does nicely is try to stay up and running regardless of errors. If an exception occurs within the Run method, the process is terminated, and a new instance is created and restarted by Windows Azure. When an unhandled exception occurs, the stopping event is not raised, and the OnStop method is not called. For more information about how the Windows Azure fabric handles exceptions read: Overview of the Role Lifecycle. The reason .NET terminates the process is that there are many system exceptions, like OutOfMemoryException, that you can’t recover from without terminating the process.

When an exception is thrown in one of the created threads, we want to simulate the Windows Azure process as closely as we can. The framework allows Windows Azure to terminate the process on unhandled exceptions; however, it tries to gracefully shut down all threads before it does. Here are the goals for exception handling within the framework:

  • Gracefully restart all threads that throw an unhandled non-system exception without terminating the other threads or process space.
  • Use the managed thread exception handling to terminate the process on unhandled system exceptions.
  • Leverage Windows Azure role recycling to restart threads when the role is restarted.

When there is an unhandled exception on a thread created with the Start method of the Thread class, the unhandled exception causes the application to terminate. For more information about exception handling in threads read: Exceptions in Managed Threads. What this means is that we need to build some exception handling into our threads so that the framework can exit gracefully. The ProtectedRun method accomplishes this:


/// <summary>
/// This method prevents unhandled exceptions from being thrown
/// from the worker thread.
/// </summary>
public void ProtectedRun()
{
    try
    {
        // Call the worker's Run() method
        Run();
    }
    catch (SystemException)
    {
        // Exit quickly on a system exception: let it propagate and
        // terminate the process.
        throw;
    }
    catch (Exception exception)
    {
        // Perform error logging or diagnostics here; the thread exits
        // and will be restarted by the framework.
    }
}


The main ThreadedRoleEntryPoint class, which manages all the threads, loops across the threads and restarts any that have terminated (see code below). A thread terminates when it exits the ProtectedRun method, as opposed to an unhandled exception, which terminates the process space.

while (!EventWaitHandle.WaitOne(0))
{
    // WWB: Restart Dead Threads
    for (Int32 i = 0; i < Threads.Count; i++)
    {
        if (!Threads[i].IsAlive)
        {
            Threads[i] = new Thread(Workers[i].ProtectedRun);
            Threads[i].Start();
        }
    }
}



The while loop terminates when a stop is requested from the Windows Azure Fabric, by tripping the EventWaitHandle to a signaled state. The EventWaitHandle provides thread protection, since the stopping request is made on a different thread.
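The stop side of the framework isn't shown in the post; here is a rough reconstruction of what that half might look like (the class and member names mirror the snippets above, but the details are my assumption, not the downloaded framework's exact code):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Minimal reconstruction of the shutdown half of the framework:
// signal the run loop, then give each worker thread a bounded
// amount of time to finish.
public class ThreadedShutdownSketch
{
    public readonly EventWaitHandle StopRequested =
        new ManualResetEvent(false);
    public readonly List<Thread> Threads = new List<Thread>();

    public void OnStop()
    {
        // Move the handle to the signaled state; the run loop's
        // WaitOne(0) now returns true and it stops restarting threads.
        StopRequested.Set();

        // Wait briefly for each worker thread to exit on its own.
        foreach (Thread thread in Threads)
        {
            if (thread.IsAlive)
                thread.Join(TimeSpan.FromSeconds(5));
        }
    }
}
```

Because the stop request arrives on a different thread than the run loop, the wait handle is doing double duty here as both a flag and a thread-safe signal.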

Error Reporting

The Windows Azure Diagnostics assembly has diagnostics classes for collecting and logging data. These classes can be used inside a worker role to report back information and errors about the state of the running role. For more information about diagnostics in Windows Azure see: Collecting Logging Data by Using Windows Azure Diagnostics.


When using the threaded role framework, you need to call the diagnostic classes from the threaded class (which inherits from the WorkerEntryPoint class) when you have something to report. The reason is that an unhandled system exception in the threaded class will cause the process space to terminate; your last chance to catch and log error information is in the threaded class. Generally I use a try/catch in the Run method to catch all exceptions, report them to diagnostics, and then re-throw the exception – no different from what I would implement in the default single-threaded worker role.
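A minimal sketch of that catch-report-rethrow pattern inside a threaded worker's Run method (DoWork is a placeholder for the real work; Trace output is picked up by Windows Azure Diagnostics when a trace listener is configured):

```csharp
using System;
using System.Diagnostics;

// Sketch of last-chance logging inside a threaded worker.
public class LoggingWorker
{
    public void Run()
    {
        try
        {
            DoWork();
        }
        catch (Exception exception)
        {
            // Last chance to record state before the process terminates.
            Trace.WriteLine(exception.ToString(), "Error");
            throw;  // rethrow, preserving the original stack trace
        }
    }

    private void DoWork() { /* worker logic goes here */ }
}
```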


The included framework enables multi-threaded Windows Azure worker roles within the same context of the Windows Azure Service Runtime. Using multiple threads can increase your processor utilization when your application is waiting for network requests – like calls to SQL Azure to complete.


Download the Framework.



Wednesday, December 8, 2010

How to Combine a Worker and Web Role in Windows Azure

Some of us are running really small web sites on Windows Azure, and for every web site it seems that I need to do a little background processing. Usually, this means creating both a web role to handle the web requests and a worker role to do the background processing – typically through separate Visual Studio projects. However, for a small web site this can get expensive if you are not fully utilizing the resources of both instances.


This article is going to show you how to include your background processing in your web role so you can fully utilize the Windows Azure instance – the same background processing that you would typically do in a worker role.


The recommended approach for Windows Azure is to always run two instances to maximize uptime – if you follow the recommendation you will have the web role, plus the extra background work, running twice. This means you need to design your background work to be idempotent. For more information read: Idempotency for Windows Azure Message Queues.

The tasks that we are moving from the worker role will consume processing power and memory from the Windows Azure instance running the web role. The tasks will not block the web requests; they just share the resources of the instance. If you have a lot of background tasks or a lot of web requests, you should probably have separate worker and web roles in different instances.


An example of a background task that I want to include with my web site is a cleanup process for Session state that I blogged about here: Using SQL Azure for Session State, or creating a daily back-up of my SQL Azure database (something that I would do with SQL Server Agent if I were using an on-premise SQL Server). These tasks are not resource intensive – for the most part the work is offloaded onto SQL Azure. However, at the same time I want them performed outside of the request/response cycle of the web role.


A Windows Azure web role is really a worker role with some IIS trim to handle HTTP requests. Every role in Windows Azure starts by calling the class that subclasses the RoleEntryPoint class. For a web role created by Visual Studio 2010, it looks like this:


public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at

        return base.OnStart();
    }
}

You can find this code in the file named WebRole.cs. You will also find that the worker role has very similar code. The default worker role from Visual Studio also overrides the Run() method of RoleEntryPoint, like this:


public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("WorkerRole1 entry point called", "Information");

    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");
    }
}


The easiest way to create a combined worker and web role is to create the web role first using Visual Studio 2010 and then override the Run() method. Here are the steps to do that:

  1. Open Visual Studio 2010
  2. Create a New Windows Azure Project


  3. Choose ASP.NET MVC 2 Web Role or ASP.NET Web Role


  4. After the files are created, find the WebRole.cs file and open it.
  5. Add this code to override the Run() method


    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("WorkerRole1 entry point called", "Information");

        while (true)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("Working", "Information");
        }
    }


  6. Add using statements for these two namespaces: System.Diagnostics and System.Threading.
  7. Save and compile.

Now you have a combined worker and web role for the price of one Windows Azure instance.



Tuesday, November 23, 2010

Deleting an Email Account on the Windows Phone 7

I don’t think that it is completely obvious how to delete an email account on the Windows Phone 7, so I am writing this quick blog post to describe how to do it.

  1. Click on the Windows button to get to the start screen (with the tiles)
  2. Swipe right to get to the applications list
  3. Swipe down until you find the settings icon.
  4. Tap on Settings to enter settings.
  5. Choose email & accounts
  6. Find the email account that you want to delete.
  7. Press down on the email account you want to delete until the secondary menu appears.
  8. There will be an option to delete, tap it.


Saturday, November 20, 2010

A Smart Overloading of a Stored Procedure Call Using COALESCE

Often you have a table where different columns need to be updated by different calls. Traditionally, you wrote a stored procedure for every variation, which can result in a mountain of code (it increases your LOC/hr but drives maintenance costs up even more!).


If you can eliminate NULL as a valid value (use ‘’ instead?) then you can reduce everything to a single stored procedure, as illustrated below.


Create proc harvest.P_UpsertEvent
    @ID uniqueidentifier = null,
    @ParentID uniqueidentifier = null,
    @ConsolidationSequenceNo bigint = null,
    @Level smallint = null,
    @EventFulId varchar(50) = null,
    @VenueID uniqueidentifier = null,
    @Title nvarchar(120) = null,
    @Text nvarchar(max) = null,
    @Html nvarchar(max) = null,
    @StartAt datetime = null,
    @EndAt datetime = null,
    @ExtendedXml xml = null
AS
If Exists (Select 1 from Events.Event Where ID = @ID)
BEGIN
    Update [Events].[Event] SET
        [ConsolidationSequenceNo] = Coalesce(@ConsolidationSequenceNo, ConsolidationSequenceNo),
        [Level] = Coalesce(@Level, Level),
        [EventFulId] = Coalesce(@EventFulId, EventFulId),
        [VenueID] = Coalesce(@VenueID, VenueID),
        [Title] = Coalesce(@Title, Title),
        [Text] = Coalesce(@Text, Text),
        [Html] = Coalesce(@Html, Html),
        [StartAt] = Coalesce(@StartAt, StartAt),
        [EndAt] = Coalesce(@EndAt, EndAt),
        [ExtendedXml] = Coalesce(@ExtendedXml, ExtendedXml)
    Where ID = @ID
END

In short, if a new (non-NULL) value is supplied, it is used – otherwise the existing value is kept.
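The same per-column rule has a direct C# analog in the null-coalescing operator: COALESCE(@p, col) becomes p ?? col. An in-memory illustration (EventRow and Merge are stand-ins, not part of the stored procedure):

```csharp
// Stand-in for a row of the event table.
public class EventRow
{
    public string Title;
    public string Text;
}

public static class UpsertHelper
{
    // A non-null argument overwrites the column; a null argument
    // leaves the stored value untouched - exactly the COALESCE rule.
    public static void Merge(EventRow existing, string newTitle, string newText)
    {
        existing.Title = newTitle ?? existing.Title;
        existing.Text = newText ?? existing.Text;
    }
}
```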


If you are working with column-level permissions, then you can just insert logic to set values from an unauthorized user (for the column but not the row) to null. This makes the number of lines of code needed to implement column-level permissions a fraction of other implementations.

Sql Upgrade Strategies

In my prior post I described how you can identify the differences between a prior production database and a developer’s evolution of the database. You could make the developer responsible for developing the scripts, but that is not always the best use of such a resource (do you make developers responsible for writing all of the automated unit tests for their code, so that no testing resources are needed? The best-use-of-a-developer issue is similar). Whether the developer does it, the database developer does it, or a contracted resource does it – you end up with a batch of TSQL for each ‘upgrade’ (there may be dozens of developer upgrades in a product release cycle). This post looks at the management issue.


  1. We have the TSQL to go from Prod –> Dev
  2. We want to apply it to some other Prod system


The basic pattern is a series of update files and a console application that reads in the TSQL and configuration information and then applies it. The files must be sequenced (by a manifest file or by file naming).


NOTE: For a release system, the TSQL may be in a password protected zip file that the console application has the password for. For development, the TSQL may be coming from files in the same folder.


Put a Version Table in the database

This table is essential for easy management of the database. It is also useful for well-managed code: your C# code can check to see if the database supports a feature and thus provide backward compatibility. An example is shown below:

var spName = string.Empty;
if (DBVersion > 1783)
{
    spName = "p_GetOrgaizationTree_3";
}
else if (DBVersion > 234)
{
    spName = "p_GetOrgaizationTree_2";
}
else if (DBVersion > 83)
{
    spName = "p_GetOrgaizationTree_1";
}
else
{
    spName = "p_GetOrgaizationTree";
}

The DBVersion is obtained and cached when the application starts, so there is only one call for it. My preference is to use an int and increment it by 10 each time (there are advantages to leaving space between versions – see below).

Sequential Update

If you put each batch of TSQL into XML (instead of using a .sql file) you are able to better control the update of databases. For example, if the root element is:


  <dbupdate fromversion="230" toversion="240">


Then if the database version is not a match, the file is skipped. If it is a match, then the dbversion is incremented by 1 (to 231) at the start of executing the file content and then updated to 240 at the successful end. If any statement fails, the console utility exits – leaving a log file to be sent to support. The database is left at version 231 (so a repeat execution will not execute anything, because there is no script for ‘231’). After support figures out the problem, it sends a new file with a header of:


   <dbupdate fromversion="231" toversion="240">


This gives support some 9 attempts at fixing any problems…
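The version bookkeeping described above can be sketched as a small state machine (a simplification of the console application, with the statement execution abstracted into a callback; the class and member names are mine):

```csharp
using System;

// Sketch of the updater's version gate. A file declaring
// fromversion/toversion is applied only when the database is at
// exactly fromversion; the in-between value marks a failed run.
public class VersionGate
{
    public int DbVersion;

    public bool TryApply(int fromVersion, int toVersion, Func<bool> runStatements)
    {
        if (DbVersion != fromVersion)
            return false;                 // skip: this file does not match

        DbVersion = fromVersion + 1;      // e.g. 230 -> 231: "in progress"
        if (runStatements())
        {
            DbVersion = toVersion;        // success: 231 -> 240
            return true;
        }
        return false;                     // failure leaves 231 for support
    }
}
```

Re-running the same file against a database stuck at 231 is a no-op, which is exactly why leaving numeric space between versions pays off.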


The use of XML allows better control of what SQL in an update actually gets executed. Often the TSQL alone is sufficient; in some cases, writing a pre-condition check that is executed by the console application is more efficient or easier to understand. For example:

<function check="Select count(1) From sys.objects where name='GetProfileTypeID'"
          id="function17" executed="10/30/2010 1:40:49 PM">
CREATE FUNCTION [Sync].[GetProfileTypeID]
(
    @ProductionID Int
)
RETURNS Int
AS
BEGIN
    DECLARE @DevId Int
    DECLARE @ProfileTypeName nvarchar(max)

    SELECT @ProfileTypeName = ProfileTypeName
        FROM BoardDemos.dbo.ProfileType
        WHERE ProfileTypeID = @ProductionID

    SELECT @DevID = ProfileTypeID
        FROM BoardDemos_Dev.dbo.ProfileType
        WHERE @ProfileTypeName = ProfileTypeName

    RETURN @DevID
END
</function>

Remember that in some cases, items cannot be dropped and recreated because they are in use elsewhere.
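The pre-condition check itself can be sketched like this (Python for illustration; `run_scalar` is a hypothetical helper, and the skip-when-count-is-nonzero rule is my assumption about how the `check` attribute is interpreted):

```python
def should_execute(element, run_scalar):
    """Decide whether an XML element's SQL body should run.

    The 'check' attribute holds a scalar query. Assumed rule (not the
    original tool's exact one): a count of 0 means the object is missing
    and the batch should run; non-zero means it exists, so skip it.
    """
    check_sql = element.get("check")
    if check_sql is None:
        return True                  # no pre-condition: always execute
    return int(run_scalar(check_sql)) == 0
```

This keeps re-runs idempotent: an already-created function is simply skipped instead of raising a "object already exists" error.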

Bail on first exception

The console application should bail on the first exception. In general, it should also back up the database before any execution. The console application should also support an automatic rollback/restore if any exception occurs (this may be ON by default, and set to OFF when debugging issues). You should not leave a customer installation in limbo.


In some cases, you may also wish to do a diagnostic backup at the point of failure before doing the restore.


Gotchas for Data

In the past I have encountered two situations where writing migration scripts has been painful: when the primary key is Identity(1,1) and new data being added must match up with existing records; or when the primary key is a GUID, new data being added must match up with existing records, and there is no reliable alternate key (unique index) to use for match-ups. The easiest solution is often to create a SyncID column that is a GUID, set it explicitly (instead of relying on NewID()), and use it as an alternate key. In some cases, I have had to infer the match by doing a:




to ensure that there is an expected match. In other cases, something may have been renamed, for example “Winslow” –> “Bainbridge Island” in the production system (with the center, in terms of longitude and latitude, updated too!). The development system did not have this change applied. If there had been a SyncID in the table (which used an Identity(1,1) primary key), life would have been much easier. A SyncID also facilitates a two-way update of the Prod and Dev databases (some data from Prod can update the Dev database).
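When no SyncID exists, matching by inference amounts to building a key map and flagging the rows (such as renames) that cannot be matched automatically. A sketch (Python for illustration; the row shape and the 'id'/'name' columns are hypothetical, not a real schema):

```python
def build_id_map(prod_rows, dev_rows, key="name"):
    """Map Prod primary keys to Dev primary keys via a natural key.

    Rows whose key has no counterpart (e.g. a rename like
    'Winslow' -> 'Bainbridge Island') come back in 'unmatched'
    for manual resolution.
    """
    dev_by_key = {r[key]: r["id"] for r in dev_rows}
    id_map, unmatched = {}, []
    for r in prod_rows:
        if r[key] in dev_by_key:
            id_map[r["id"]] = dev_by_key[r[key]]
        else:
            unmatched.append(r)
    return id_map, unmatched
```

The unmatched list is exactly the set of rows where a human has to decide whether a rename, a deletion, or a genuinely new record occurred.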


Over Christmas I may make a drop of my version of the above console application available for any interested parties – so drop me a line (or leave your email here)…

Startups and Sql Server Databases

Often a startup tosses together a database to get a web site up or a product out the door. This is reasonable when there is neither the time (time-to-market) nor the money (for SQL data modelers and developers).


For a web site there is usually one of two patterns of evolution:

  1. Use the production database for development.
    1. There’s no problem of matching database to next generation code base.
    2. Side-effects can be horrible from miscoding or security breaches.
  2. Use a clone of the production database and then evolve it with the next generation code.

For a product database, option 2 above is typical, and it is the only option we consider in this post. The second pattern we need to look at is a non-issue for a one-man development shop, but arises when there are multiple developers:

  1. A single shared development database
    1. Any problems impacts the entire development team
  2. Each developer has their own development database
    1. Each developer evolves the database to suit their own needs

Even with the ‘get-out-of-jail-free’ cards of dbml / Linq-to-Sql (substitutes for SQL skill sets), problems arise.


The problem is simple: what TSQL is needed to convert the old DB to the new DB while maintaining existing (customer) data? For the rest of this post we will call the databases ‘Prod’ and ‘Dev’. In reality this could be Prod –> Dev1 –> Dev2 –> Dev3 –> Dev4 –> NextRelease.


The process can be simplified by identifying the differences between Prod and Dev, which is actually simple to do in code. Below we identify object changes between the two with two statements: what is in Dev that is not in Prod, and what is in Prod that is not in Dev; between them these also surface anything that has changed (i.e. Table –> Updatable View).


Select name, type from dev.sys.objects
EXCEPT
Select name, type from prod.sys.objects

Select name, type from prod.sys.objects
EXCEPT
Select name, type from dev.sys.objects

The above uses EXCEPT to remove everything that is the same. The same logic may be applied to other system catalog views, for example:


Select name, type_desc, is_unique, ignore_dup_key, is_primary_key from dev.sys.indexes
EXCEPT
Select name, type_desc, is_unique, ignore_dup_key, is_primary_key from prod.sys.indexes

Select name, type_desc, is_unique, ignore_dup_key, is_primary_key from prod.sys.indexes
EXCEPT
Select name, type_desc, is_unique, ignore_dup_key, is_primary_key from dev.sys.indexes

These compares can also be done on other system objects and views:

Select Table_Name, Column_name, Column_Default, Is_Nullable, Data_Type,
       Character_Maximum_Length, Numeric_Precision, Numeric_Precision_Radix,
       Numeric_Scale, Collation_Name
from prod.Information_Schema.Columns
EXCEPT
Select Table_Name, Column_name, Column_Default, Is_Nullable, Data_Type,
       Character_Maximum_Length, Numeric_Precision, Numeric_Precision_Radix,
       Numeric_Scale, Collation_Name
FROM dev.Information_Schema.Columns


The one thing that needs to be remembered is that some columns are expected to be different, and thus should NOT be included in the compare – for example db_id, object_id, etc.
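EXCEPT is just a set difference, so the same comparison logic can be sketched outside the database as well (Python for illustration; the rows are hypothetical tuples of the compared columns, with the always-different columns already excluded):

```python
def schema_diff(dev_rows, prod_rows):
    """Mirror the two EXCEPT statements: rows only in Dev, rows only in Prod.

    Each row is a tuple of the compared columns (name, type, ...).
    Columns that always differ (object_id, db_id, etc.) must already
    have been left out of the tuples.
    """
    dev, prod = set(dev_rows), set(prod_rows)
    return dev - prod, prod - dev
```

An object whose type changed (e.g. a table replaced by a view) shows up on both sides, which is how the compare surfaces alterations as well as additions and removals.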


Once you know the differences, you know what needs to be altered to get the Prod and Dev schemas to match. 95% of the time this can be done by just finding the object in SSMS, right-clicking, and choosing Drop and Create:


and a few ALTER Tables.


You still have the issue of data migration, and the issue of not being able to immediately add a Not Null column or a new Foreign Key constraint until after the data issues are resolved. You may also have sequencing issues (which can become very hairy when there are cyclical foreign keys).


In the next post, I will look at patterns for putting the above TSQL into a build or installation script.

Buy or Roll Your Own or Contract Out?

There are a variety of commercial tools available (see Wikipedia). The problem with these tools is that there is often some art involved – some analysis that needs to be done by a human. Rolling your own script depends on two questions: do you have people with the skill set and experience to do it, and is this the best use of that resource? Contracting out means finding someone, or some firm, with the experience who is willing to take it on as an ad-hoc job with deadlines (when you need it, you need it promptly!). I should put in a shameless plug for myself here, because I usually have enough spare hours each week to do this type of ad-hoc work – email me if you need such services. IMHO, the cheapest solution is to contract out: existing development resources can keep their focus on more important things, there is no time lost learning commercial tools, and a fresh pair of eyes on the data and migration issues often produces better-quality results (i.e. less risk of false assumptions).

Friday, November 12, 2010

Finding out what changed between two versions of a SQL Server Database

This is just a quick note showing how you can easily compare the structure of two databases.

These queries will spell out which objects differ (and their types):

-- One way
Select name, type from MyDataBase_dev.sys.objects
EXCEPT
Select name, type from MyDataBase.sys.objects

-- reverse direction
Select name, type from MyDataBase.sys.objects
EXCEPT
Select name, type from MyDataBase_dev.sys.objects

The next item to check is usually columns, which follows a similar pattern:


Select Table_Name, Column_name from MyDataBase.Information_Schema.Columns
EXCEPT
Select Table_Name, Column_name from MyDataBase_Dev.Information_Schema.Columns

Select Table_Name, Column_name from MyDataBase_Dev.Information_Schema.Columns
EXCEPT
Select Table_Name, Column_name from MyDataBase.Information_Schema.Columns

Many developers do not know about the very useful EXCEPT operator, which excludes from one SELECT all matching rows from another SELECT. You can apply this to the rest of the system catalogs (just do not include columns that are expected to differ between databases…)

Thursday, November 11, 2010

Writing Web Pages for a Client That You Can’t Test

Every time I get a C# exception on my ASP.NET web site running in Windows Azure, my web site code sends me an email; the email includes the call stack, path information, and user agent string. I can use these emails to find the problems in the code that caused the errors. For a long time I had been ignoring the email (about 50 a day) and letting them stack up, being too busy with other projects. Lately, I have resolved to clean up all the code, so that I get less email and reduce the bugs in the web site. I thought this would lead to a better user experience for people visiting the site.
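A sketch of assembling such an error report (Python for illustration rather than the site's actual C#; the field layout is my own):

```python
import traceback

def format_error_report(exc, path, user_agent):
    """Build the body of an error-notification email: call stack,
    path information, and user agent string (the fields described
    above; the exact layout is an assumption)."""
    stack = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__))
    return "\n".join([
        f"Path: {path}",
        f"User-Agent: {user_agent}",
        "Call stack:",
        stack,
    ])
```

Capturing the user agent alongside the stack is what makes the bot analysis later in this post possible at all.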


My typical process is to read the call stack, look at the path information, and try to reproduce the issue to gain insight into how I have to fix it. Basically, a debugging process that I have in my toolkit no matter what environment I am programming in.

To get started I make sure that I am running the same browser the user was using, be that Firefox, Internet Explorer, or Google Chrome. I am limited in the number of browsers I can test, since I don’t have a full test lab; however, just before deploying new code changes I test all the browsers I have to make sure the CSS renders the same, the JavaScript works across the board, and it all calls my server-side code correctly.


As I worked through my stack of emails I started to notice an emerging trend: it was not my users that were having problems with my web pages, it was the robots. Robots, bots, or web crawlers are mainly search-engine programs that traverse your web site and glean information to build results for searches on their sites.
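Spotting this trend comes down to checking the user agent string recorded in each error email against known crawler names. A sketch (the token list is a small illustrative sample, not an exhaustive or authoritative one):

```python
# A few common crawler tokens; real lists are much longer and change often.
BOT_TOKENS = ("googlebot", "bingbot", "slurp", "duckduckbot", "baiduspider")

def is_bot(user_agent):
    """Rough check: does the user agent string name a known crawler?"""
    ua = (user_agent or "").lower()
    return any(token in ua for token in BOT_TOKENS)
```

Partitioning the error emails with a check like this is what separates "bugs humans hit" from "bugs only crawlers hit".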


The interesting thing about bots is that they don’t ever work as well as the browsers. Or so it would appear, because I built my web site for browsers, not bots – which means the site works best for users. The shift in thinking is that the bots aren’t working wrong; they are just not first-class citizens on the web site, because I coded the site for browsers. Think of it as a car that is built for adults and tested for adults, which a child tries to drive. It sort of works; however, it is hard for a kid to reach the pedals and see over the dashboard at the same time. The same goes for the bots: they are trying to consume something that was tested for browsers.


The simple approach would be to ignore the errors from the bots, since they are not my target audience. In fact, I could restrict the bots from the web site altogether with a robots.txt file. However, my intent is to make a better user experience for my users – so does fixing the errors for the bots create a better user experience for actual humans? The answer is yes: if the web crawlers can find the content on my site (without getting errors first), they can drive traffic to the site. The traffic driven from the search engines is real traffic from humans.


Now that I know I want to fix the errors from the bots, let’s take a look at my debugging technique. Key to the process is simulating the environment by using the appropriate web browser: the client that made the request that caused the error. However, I have no access to the web crawlers (the clients for the bots) and cannot simulate a request from them. In fact, I am not even sure how they handle the response (the outputted HTML), because a lot of how the web crawlers work is kept secret; their interactions with the site are proprietary technology. All I have to go on is the HTTP standard, which dictates how the requests are made, and some speculation about how the search engines work, which falls within the black arts of Search Engine Optimization.

This leaves me in a limbo land of fixing the suspected bug without being able to reproduce it. I have to deploy the fix live to see if it solves the web crawler’s problem, all without breaking the human experience via the browsers. Sounds like it isn’t worth the effort, right? Not true: 97% of my human traffic comes from the search engines. So maybe I should be writing my web site for the bots.



Friday, November 5, 2010

The sales pitch to developers …

Last night I attended the local Linux Group meeting, which featured an MS Access/OO.Base-to-Drupal presentation described as:


“Most people think of drupal as a website framework system. However, it can also serve effectively as a replacement to the forms/reports/tables system utilized by access and base. (as well as many other things, but we won't discuss that formally tonight) No prior knowledge of Drupal is required, and for those who do understand drupal, we will go through using cck and views, as well as a few other modules to develop a replacement for a small access database.”

Since this is a Windows blog, this appears to be off target – however, it is an emotionally-detached example that illustrates what is also seen with Windows products. As a FYI, I grew up on CP/M and ran Mark Williams Unix (Coherent) in the mid-1980s; we have three Linux boxes in the house.


The first thing that struck me during the talk was that the speaker was laying smoke (as in a destroyer laying smoke to hide the ships behind it) and did not know, or had not researched, the fundamentals. Some quotes:

  • “Base/Access are a flat file system” – wrong: both are databases. A CSV file is a flat file system.
  • “Access can only handle 65K rows”,
    • Access can handle as many rows that will fit into 2GB of storage.
    • Excel 2007 can handle up to 1,000,000 rows

This always raises concerns, because it means the presenter is clearly not knowledgeable (and thus gives a one-sided perception and recitation of justifications against the other product). Always ask detailed questions and press for hard answers. If you present, do not wing it: do solid research, with URLs from authoritative sources, to back up claims.


For example, I asked if Drupal supported RBAC. The prompt answer was that it does. Fortunately we live in the age of the Internet, so I googled “Drupal RBAC”; the first hit was a question on the site, still unanswered… The speaker appeared to be uninformed (or did not know what RBAC is; people often think MAC or DAC is RBAC, and it’s not). The National Institute of Standards and Technology [] has a clear statement of what RBAC is. I would love the speaker to point me to the RBAC implementation that Drupal has. As a FYI, Linux has RBAC available at the operating-system level; see SELinux.


The speaker also claimed that it was “Secure” to the highest level. Again, the Internet is there to verify information. Well, the Drupal site has FIVE pages of Security Advisories. I then checked the authoritative source, the National Vulnerability Database [NVD], another Nist.Gov site, which listed 364 security advisories for software flaws for the search “Drupal”. Results from other searches:

  • “OpenOffice Base” – 24
  • “Excel” – 161
  • “Microsoft Access” -- 218

Now the speaker recommended doing an install from Acquia, which bundles Drupal with a bunch of other software. Two items I checked on NVD:

  • “PHP” – 18512 Software flaws known
  • “Apache” – 446

I always use NVD to get an objective evaluation of how secure a software product is – it’s your taxpayer dollars doing good work. Bottom line: the Drupal stack is far less secure.


Part of the same sales pitch was that the Obama White House used it. I googled “drupal whitehouse”; the first hit was a Feb 2009 announcement pointing to the White House site. Well, when I went there, the site was delivering pages with “.aspx” – ASP.NET, not Drupal. Evidently Drupal was yanked. I did find a White House announcement about Drupal from April 2010. There were also claims that the DOD used Drupal in its line of business; I was unable to find anything significant on Google confirming this, just an announcement of a pilot study for social networking. The speaker hinted that it was being used for DOD secure-information projects…


Bottom line: I would not port from Base or Access to Drupal. There is nothing clearly gained and a lot clearly lost:

  • Loss of relational database structure (native Drupal storage is not an RDBMS)
  • Steeper learning curves – increased cost of business
    • It is easier to find general (cheap) office staff who know how to use Access or Base – the two are very similar, so if you know one, you know the other; this is not the case with Drupal.
  • You need a lot more IT expertise to be secure:
    • Drupal uses Apache – if the PC is connected to the internet and you don’t have all of the firewalls and gateways properly configured, you may be hacked.
      • Classic issue is not changing default passwords…  or using weak passwords
    • A lot of software flaws

To me, for most “alternative” approaches there are three dimensions of concern:

  • Security – protecting corporate data
  • Cost to maintain – ongoing expense
    • Learning curve
    • Availability of up-to-speed individuals, today and in 7 years (business systems should last at least 7 years without needing a refactor).
    • Often magnitudes above original licensing cost (if any).
  • Lock-in Degree – upgrade path
    • How easy is it to move to upscale platforms? Access –> SQL Server Express; BASE –> MySQL
    • Products die – even very, very good ones. What happens then? Drupal could be hit with an IP-infringement claim and the open-source project ordered removed from the web. Microsoft kills stuff too; I still have copies of Microsoft FORTRAN, Borland Turbo Prolog, and Microsoft PASCAL around. What is the recovery plan?