Monday, March 29, 2010

Configuring any SQL Server – the forgotten homework…

Often developers just toss SQL Server on any handy drive and forget about it. Later, when they are asked to provide guidance on installing SQL Server, they often provide nothing, or slip into pro-forma “mom’s apple pie” statements.

 

Today, I decided to do a little homework and downloaded the SQLIO tool from Microsoft to get some numbers on my drives. I have two RAID-0 logical drives, one RAID-1 logical drive, one SSD drive and an external SATA drive. In general, SATA drives are developer grade and definitely not production grade – production grade means SCSI drives.

 

So running just one test (random-access writes) on the drives, I got the following interesting table. I opted for random writes because they tend to bring out the nasty.
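For reference, a SQLIO run for a test like this looks something like the line below – the test file, thread count and duration are illustrative, not my exact parameters:

sqlio -kW -frandom -t2 -o8 -b8 -s120 -LS -BN d:\testfile.dat

Here -kW requests a write test (-kR for reads), -frandom requests random access, -b8 uses 8 KB blocks, -o8 keeps eight I/Os outstanding per thread, and -LS captures the latency numbers shown in the tables below.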

 

Drive                | Latency Avg (ms) | Latency Max (ms) | IOs/sec | MB/sec
SSD (Intel)          | 32               | 414              | 966     | 121
Mirror @ 5200 RPM    | 207              | 515              | 153     | 20
Stripe @ 7200 RPM    | 57               | 649              | 552     | 69
Stripe @ 7200 RPM    | 63               | 841              | 500     | 62
eSATA @ variable RPM | 69               | 816              | 457     | 57

The RAID controller was the same for all drives.

 

The thing that amazed me was the sluggishness of the mirrored drive (yes, I checked, it is 0% fragmented – so that is not it). I expected it to be better than the eSATA drive; instead it has 1/3 the write rate and 3x the latency.

 

The mirrored drives happened to be the drives that came with the machine… which actually suggests that if you want performance, get your own drives!

 

The striped drives performed 15% better than the eSATA drive, so striping paid off – but not by as much as I would have expected.

 

Now the SSD did double the write performance of the hard drives, which is what I was expecting – writes are much slower for an SSD than reads, where a 3 to 4x improvement is seen.

 

Drive              | Model                | Cache
SSD (Intel)        | INTEL SSDSA2M160G2GC | –
Mirror @ 5200 RPM  | WDC WD10EARS-00Y5B1  | 32 MB
Stripe @ 7200 RPM  | WDC WD100FAEX099Z3A0 | 64 MB
Stripe @ 7200 RPM  | WDC WD100FAEX099Z3A0 | 64 MB
eSATA @ 5-7200 RPM | ST315005N4A1AS       | 32 MB

 

For reference, I did some read tests:

Drive              | Latency Avg (ms) | Latency Max (ms) | IOs/sec | MB/sec
SSD (Intel)        | 12               | 1147             | 2384    | 298
Stripe @ 7200 RPM  | 44               | 1372             | 710     | 89
Mirror @ 5200 RPM  | 101              | 1326             | 314     | 39
eSATA @ 5-7200 RPM | 74               | 1489             | 426     | 53

Now the SSD is on a 300 MB/sec SATA bus … so 298 is about as good as it can get! The read rate for the striped pair is twice that of the mirrored pair – which follows theory – but not quite twice that of the eSATA drive.

So what do these numbers mean? Well, forget putting TempDB etc. on my mirrored drives. One of the striped drives will be for TempDB and the other one for the TempDB log.
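Moving TempDB off the mirror is a one-time ALTER DATABASE followed by a service restart. A minimal sketch – the drive letters and paths are assumptions, substitute your striped volumes; tempdev and templog are the default logical file names:

-- Point tempdb's data file and log file at the two striped volumes.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDbLog\templog.ldf');
-- The change takes effect when the SQL Server service is restarted.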

 

There are more experiments that I will likely do: using four drives for striping instead of two, trying RAID 5 (which I expect to be worse than the mirror), etc. None of these conclusions can be safely generalized. You have to do the grunt work of testing each drive/configuration – and expect some surprises, for technology has gotten so complex that nothing is obvious any more.

 

Remember, be a fact-based developer!

Friday, March 26, 2010

Sanity for equipping employees

I work as an independent consultant; one of the reasons I like doing so is that my productivity is not inhibited by poor equipment! I have worked as a consultant for major firms, as well as an employee for folks like Microsoft, and I am constantly dumbfounded by the triumph of accountants over employee productivity, i.e. capital cost versus fixed cost. The profit-center aspect of employees seems to disappear from the equations.

 

The typical developer today earns between $60-100K. Adding in benefits, we are often looking at $100-200K. Giving an employee a $2-3K budget a year to do the hardware and software upgrades they think would best serve their tasks (subject to a reasonableness review – ideally by an external source) seems so simple and productive that I can only assume some folks never took a business management course…

 

Today, you can get a workstation with built-in RAID on the motherboard, 12 GB of DDR3 RAM, four true cores (with hyperthreading), four terabyte drives (for RAID 1+0 if needed) and an SSD drive for the OS for under $2K. Yet I see many developers trying to work on a dual core with 2 GB of RAM.

 

It reminds me of Sherlock Holmes’ 2% solution! I.e. 2% of the loaded cost of an employee may result in a 10-40% improvement in productivity as well as a major morale boost! Keeping a developer on an old machine is a nice way of saying “You are not important, you are just a cog…”. Giving them a sweet environment and trusting them to make reasonable decisions does wonders for keeping employees. What is the cost of losing an employee? Much more than $2-3K.

 

So bosses, be rational about this: build team morale and reduce turnover risk!

Thursday, March 25, 2010

It is not www.twitter.com

A quick way to programmatically post to Twitter is to create a URL that posts directly to their web site. This little tidbit you can find on several other blog posts out there: (link this one). Here is the C# code I am using to launch the default browser from a WinForms application:

private void twitterToolStripButton_Click(object sender, EventArgs e)
{
    Cursor.Current = Cursors.WaitCursor;

    // HttpUtility lives in System.Web – add a reference to it in a WinForms project.
    Uri twitterUri = new Uri(String.Format(
        "http://twitter.com/home/?status={0}",
        HttpUtility.UrlEncode(((INotification)_current).Tweet)));

    try
    {
        // Hands the URL to the default browser.
        System.Diagnostics.Process.Start(twitterUri.ToString());
    }
    finally
    {
        Cursor.Current = Cursors.Default;
    }
}

This works nicely even when the tweet has hash marks (#) etc. – because of the HttpUtility.UrlEncode. However, in the original code I was going to www.twitter.com instead of twitter.com, which is a problem – the www subdomain redirects to twitter.com, and the redirect gives the original encoding another layer of encoding (a “#” encoded as “%23” becomes “%2523”). I am not sure if this is an issue with Twitter (which might be fixed) or the browser (which will probably never be fixed). Going to http://www.twitter.com creates garbage in the text area of the post like this:

 

[Screenshot: the post page with double-encoded text in the status box]

 

Notice that in the picture the address bar has already redirected and added the extra encoding.

 

[Screenshot: the address bar after the redirect, showing the extra layer of encoding]

 

The moral of the story: make sure you are going to twitter.com, and not www.twitter.com, when posting with this technique.
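You can see the double-encoding effect in isolation with a couple of lines (a minimal sketch – the tweet text is just an example):

string once = HttpUtility.UrlEncode("#sqlserver");  // "%23sqlserver"
string twice = HttpUtility.UrlEncode(once);         // "%2523sqlserver" – the garbage the redirect produces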

 


Wednesday, March 24, 2010

A Summary of why you should aggressively phase out SQL IMAGE, TEXT and NTEXT

In my earlier post, I demonstrated the poor SQL performance of the blobs (up to 4x worse). These are thankfully deprecated as of SQL Server 2005 [1] and soon may no longer exist. By blobs I mean the following data types:

  • Text
  • nText
  • Image

So in theory, no one should have been using these for new development since 2005 – well, people have this horrible habit of not reading documentation – which keeps me very gainfully employed!

 

Some additional motivations to purge code of these are simple: they require much more complex coding to access – for example, reading an image from a column returned in myReader (a SqlDataReader) [2]:

// Buffer state for chunked reads (declarations added so the fragment compiles).
int bufferSize = 100;                   // size of each chunk
byte[] outbyte = new byte[bufferSize];  // chunk buffer
long retval;                            // bytes returned by GetBytes
long startIndex = 0;                    // current position within the blob
int emp_id = myReader.GetInt32(0);      // employee id – assumed to be column 0

// Create a file to hold the output.
FileStream fs = new FileStream("employee" + emp_id + ".bmp", FileMode.OpenOrCreate, FileAccess.Write);
BinaryWriter bw = new BinaryWriter(fs);

// Pull the image out of column 1 one chunk at a time.
retval = myReader.GetBytes(1, startIndex, outbyte, 0, bufferSize);
while (retval == bufferSize)
{
    bw.Write(outbyte);
    bw.Flush();
    startIndex += bufferSize;
    retval = myReader.GetBytes(1, startIndex, outbyte, 0, bufferSize);
}
bw.Write(outbyte, 0, (int)retval); // write the final partial chunk
bw.Flush();
bw.Close();
fs.Close();

On the other hand, you can get the “image” in one line with:

  • (byte[]) row["VarBinaryDataColumn"]

or if you prefer it as a stream:

 

using (MemoryStream memStream = new MemoryStream((byte[])row["VarBinaryDataColumn"]))
{

} 

As for Text and nText, it’s the very simple:

  • (string) row["VarcharMaxColumn"]   (replacing Text)
  • (string) row["NVarcharMaxColumn"]  (replacing nText)

varbinary(max) allows storage of up to 2GB in one column…. but, for those folks that store HD movies in SQL Server, this is not sufficient…

 

The last item is the slick FILESTREAM feature in SQL Server – which allows you to store bulky items in the file system rather than in the database. The .NET Framework 3.5 SP1 adds a SqlFileStream class with some slickness, because you can update fragments without having to read and rewrite the entire piece of data. See http://msdn.microsoft.com/en-us/library/cc716724.aspx for an example.
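A minimal sketch of the read path – the table (dbo.Documents) and FILESTREAM column (Content) are made up for illustration, and FILESTREAM access must happen inside a transaction:

using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tran = conn.BeginTransaction())
    {
        // PathName() and the transaction context are what SqlFileStream needs.
        var cmd = new SqlCommand(
            "SELECT Content.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
            "FROM dbo.Documents WHERE Id = @id", conn, tran);
        cmd.Parameters.AddWithValue("@id", 1);
        using (var rdr = cmd.ExecuteReader())
        {
            if (rdr.Read())
            {
                string path = rdr.GetString(0);
                byte[] txContext = (byte[])rdr[1];
                // Reads the blob straight from the NTFS store, bypassing TDS.
                using (var sfs = new SqlFileStream(path, txContext, FileAccess.Read))
                using (var target = File.Create(@"C:\Temp\document.bin"))
                {
                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = sfs.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        target.Write(buffer, 0, read);
                    }
                }
            }
        }
        tran.Commit();
    }
}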

 

So the bottom line is simple:

  • Blobs do not perform as well as the alternatives introduced in SQL Server 2005
  • Blobs are more complex to code against in C#, Java, VB, etc., and thus more expensive to both initially write and test
  • Last, but critical, they are going bye-bye!

Code-less passing multiple selected items to a DataSource

Sometimes on an ASPX page you need to use a control like a multiple-select ListBox or a CheckBoxList, but if this is to be a parameter to a DataSource, you find that there is no suitable property to pass as a parameter!

 

My solution is simple: just extend the control and add a new property that extracts the multiple values into an XML string. Why an XML string? Simple: because it is robust for whatever happens to be in the Value of each item.

 

So we have this piece of simple code (using XmlTextWriter for performance)

 

namespace Ken.Web.UI.WebControls
{
    using System.Xml;
    using System.IO;

    public class CheckBoxList : System.Web.UI.WebControls.CheckBoxList
    {
        // Serializes the selected values as XML so the property can be bound
        // to a ControlParameter.
        public string CheckedValuesXml
        {
            get
            {
                using (var swriter = new StringWriter())
                {
                    using (var xwriter = new XmlTextWriter(swriter) { Formatting = Formatting.Indented })
                    {
                        xwriter.WriteStartDocument();
                        xwriter.WriteStartElement(ID); // the control's ID becomes the root element
                        foreach (System.Web.UI.WebControls.ListItem item in this.Items)
                        {
                            if (item.Selected && !string.IsNullOrEmpty(item.Value))
                            {
                                xwriter.WriteElementString("item", item.Value);
                            }
                        }
                        xwriter.WriteEndElement();
                        xwriter.WriteEndDocument();
                    }
                    return swriter.ToString();
                }
            }
        }
    }
}

I have to register my own control collection via web.config – so on the page I would use:

<ken:CheckBoxList RepeatColumns="1" RepeatLayout="Flow" RepeatDirection="Vertical"
  runat="server" ID="AccountTransactionAccountsInput" 
  DataSourceID="AccountTransactionAccountsInputDb"
  DataTextField="text" DataValueField="value" />
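The web.config registration itself would look something like this (it goes inside <system.web>; the assembly name is an assumption):

<pages>
  <controls>
    <add tagPrefix="ken" namespace="Ken.Web.UI.WebControls" assembly="Ken.Web" />
  </controls>
</pages>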

And thus on my page I can have:

<asp:ObjectDataSource runat="server" ID="AccountTransactionResultGridDb" TypeName="Controller.Account"
  SelectMethod="GetFilteredTransactions">
  <SelectParameters>
    <asp:ControlParameter ControlID="AccountTransactionAccountholderInput" Name="accountHolder" PropertyName="SelectedValue" />
    <asp:ControlParameter ControlID="AccountTransactionAccountsInput" Name="accounts" PropertyName="CheckedValuesXml" />                
  </SelectParameters>
</asp:ObjectDataSource>

and avoid having any code-behind to package the selected items. The same approach may be used for the ListBox.
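On the receiving side, the select method just unpacks the XML. A minimal sketch of a helper that Controller.Account.GetFilteredTransactions could use (the helper is illustrative, not my production code):

using System.Collections.Generic;
using System.Xml;

public static List<string> ParseCheckedValues(string checkedValuesXml)
{
    var values = new List<string>();
    var doc = new XmlDocument();
    doc.LoadXml(checkedValuesXml);
    // Each selected value was written as an <item> element under the control's ID.
    foreach (XmlNode node in doc.SelectNodes("/*/item"))
    {
        values.Add(node.InnerText);
    }
    return values;
}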

Friday, March 19, 2010

Code samples for Advanced C# Programming by Paul Kimmel

Over this week I wanted to look at some code samples for PerfMon in C#. I found a lot of pages referencing a PerfMon.sln, including on MSDN – but nowhere could I find the source code. The above book (http://www.mhprofessional.com/product.php?isbn=0072228288) had an entire chapter on it and was available as an eBook.

 

After a few emails back and forth with McGraw-Hill Education, they finally sent me a download location for the source:

 

http://www.softconcepts.com/Books/Source/Advanced%20C%23%20Programming.zip

 

I suspect other folks would appreciate getting access to the samples!

 

The purpose of getting the code samples was to build a check-box tree for selecting the counters that I want to include in SQL Server Management Data Warehouse (aka SQL Server Performance Studio) – so it’s click the counters, click a button, and you are logging those counters in your MDW database.
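Enumerating the counters to feed such a tree is straightforward with System.Diagnostics – a minimal sketch (instance handling and error checking omitted):

using System;
using System.Diagnostics;

// Dump every category and its counters – the raw material for a check-box tree.
foreach (PerformanceCounterCategory category in PerformanceCounterCategory.GetCategories())
{
    Console.WriteLine(category.CategoryName);
    // Multi-instance categories need an instance name, so only walk single-instance ones here.
    if (category.CategoryType == PerformanceCounterCategoryType.SingleInstance)
    {
        foreach (PerformanceCounter counter in category.GetCounters())
        {
            Console.WriteLine("  " + counter.CounterName);
        }
    }
}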

Tuesday, March 9, 2010

Before you ship code – make sure there are no leaks etc

Whether you are coding in C# or T-SQL, you can often have a variety of problems in your code that never expose themselves on a development box. Good practice is to create unit tests that run forever, exercising all code paths, and see what happens.

 

When you do this, you want to open PerfMon and watch the following items carefully (there are other items that are performance related – these are the items indicating common code problems).

Object                    | Performance Counter                                       | Issue
Memory                    | Free System Page Table Entries                            | < 5000: memory leak
Memory                    | Pool Nonpaged Bytes                                       | > 175 MB: memory leak
Memory                    | Pool Paged Bytes                                          | > 250 MB: memory leak
Memory                    | Pages/sec                                                 | > 1000: memory leak
Processor                 | % User Time                                               | > 80%: code refactor may be needed
Process                   | Handle Count                                              | > 10000: handle leak
SQL Server:SQL Statistics | Failed Auto-Params/sec                                    | High number: refactor T-SQL
SQL Server:Locks          | Average Wait Time (ms), Lock Waits/sec, Lock Timeouts/sec | High value: transaction code problems and/or index structure

 

Also look in the System Event Log for:

  • Event ID 2020: Memory Leak

One approach is to set the polling frequency to 60 seconds or more and then see if any counter grows over time.
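The same watch can be automated from the test harness. A minimal sketch – the process name and the one-minute interval are illustrative:

using System;
using System.Diagnostics;
using System.Threading;

// Sample the handle count of the process under test once a minute;
// steady growth across samples suggests a handle leak.
var handles = new PerformanceCounter("Process", "Handle Count", "MyServiceUnderTest");
for (int sample = 0; sample < 60; sample++)
{
    Console.WriteLine("{0:T}  handles = {1}", DateTime.Now, handles.NextValue());
    Thread.Sleep(TimeSpan.FromSeconds(60));
}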

Monday, March 8, 2010

A very inconvenient bug for Array.ForEach

Today I was trying to debug a user control and kept hitting an error being thrown on this line in the debugger:


          Controls.Add(cell = new HtmlGenericControl("span"));

 

To make it more confusing the message was:

Exception Details: System.ArgumentNullException: Value cannot be null.
Parameter name: array

 

So now I have a double confusion – a statement that should never throw is raising an error that complains about an ARRAY!

 

Finally, I got frustrated and just let the page error out when I noticed that the error shown on the page was at a different location:

 

Line 36: Array.ForEach(GetEmailAddresses(), item =>

Line 37: {

Line 38: sb.Append("<br/>");

 

In other words, the interactive debugger was pointing at the wrong line! (Array.ForEach throws an ArgumentNullException when handed a null array – so the likely culprit was GetEmailAddresses() returning null.) I am very interested to see if this problem still occurs in the RTM of VS2010. Be warned if you are using Array.ForEach!
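Until then, a cheap defensive pattern is to guard the array before handing it to Array.ForEach (a sketch; GetEmailAddresses is the method from the page above):

// Array.ForEach(null, ...) throws ArgumentNullException, so substitute an empty array.
string[] addresses = GetEmailAddresses() ?? new string[0];
Array.ForEach(addresses, item =>
{
    sb.Append("<br/>");
    sb.Append(item); // illustrative – the original lambda continues here
});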

Sunday, March 7, 2010

Checking Bandwidth to SQL Server

Once in a while the bandwidth to a remote SQL Server accounts for poor performance. In some odd cases, the problem can be tracked down to someone accidentally dropping a 10BaseT switch or hub into a route that is otherwise 1000BaseT. This can be very hard to detect because your machine reports the speed of the first link, not the speed across the entire route.

 

There is an easy way to do this if you have a big table on your SQL Server. Open SSMS, connect to the database, and press Shift-Alt-S (Query / Include Client Statistics). Execute a SELECT * FROM {tablename} and look at the Client Statistics tab. Do a division of

  • bytes received from server/client processing time

to give you a rough estimate. Two examples are shown below:

[Screenshot: Client Statistics from the remote SQL Server]

which results in 136

[Screenshot: Client Statistics from the local SQL Server]

which results in 11207

 

  • The first case was hitting a SQL Server over a 3G link.
  • The second case was hitting a database on the local network.

A nice simple way of checking bandwidth end-to-end.

Why you want to run Server 2008 R2 and not Server 2003 if you have a newer Dev Box.

Windows Server 2003 is licensed by core. Enterprise Edition supports 8 SMP cores. With new CPUs supporting 6 cores with hyperthreading, you end up with 12 cores – 4 of which will not be used by the OS! This can be a bummer if you are running SQL Server Enterprise Edition, because it will use only the cores supported by the OS (i.e. 8, not the 12 that you have)… ooops!

 

On the other hand, with Server 2008 R2 the licensing is by x64 socket. So a single CPU with 12 cores will use all of them with any edition (including Web and Standard). In short, you will get more throughput with Server 2008 R2 Standard Edition than you would with Server 2003 Enterprise Edition. If you go for a fancy box, you could be running 48 cores on 4 x64 sockets with Server 2008 R2 Standard Edition (something that required Data Center Edition with Server 2003).

Friday, March 5, 2010

A silly SQL Performance test

A question came up on the relative performance of the old blobs (Image, Text, NText) versus the newer nvarchar(max) and varbinary(max). It seemed a moot question since the blobs are deprecated in SQL Server 2008 and may not see the light of 2011. Still, the question of relative performance was an interesting problem to try to assess natively in SQL Server – and I like interesting questions. Some Bingoogling found nothing on the web evaluating actual performance between the two, so an experiment was in order.

 

First I created a simple table:

CREATE TABLE [dbo].[PerfTest](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [FieldA] [nvarchar](max) NULL,
  [FieldB] [ntext] NULL,
  [FieldC] [image] NULL,
  [FieldD] [text] NULL,
  [FieldE] [varchar](max) NULL,
  [FieldF] [varbinary](max) NULL
)

 

Then I grabbed a 500K-row table and inserted its data into FieldE. Once inserted, I updated all of the other fields from it – so all fields contain “equivalent” data. For performance, I did the weakest test possible, but one that applies to all of the data types uniformly… i.e.

 

Print @@CPU_BUSY
Select count(1) from perfTest where FieldA is null
Print @@CPU_BUSY
Select count(1) from perfTest where FieldB is null
Print @@CPU_BUSY
Select count(1) from perfTest where FieldC is null
Print @@CPU_BUSY
Select count(1) from perfTest where FieldD is null
Print @@CPU_BUSY
Select count(1) from perfTest where FieldE is null
Print @@CPU_BUSY
Select count(1) from perfTest where FieldF is null
Print @@CPU_BUSY

From the output, I could calculate the CPU time that each took (@@CPU_BUSY reports cumulative ticks, so the difference between successive prints is the cost of the query in between). The results are shown below:

DataType       | Pass 1 | Pass 2 | Pass 3 | Pass 4
nvarchar(max)  | 18     | 17     | 17     | 16
ntext          | 170    | 64     | 60     | 61
Image          | 62     | 63     | 67     | 60
text           | 66     | 62     | 64     | 63
varchar(max)   | 15     | 15     | 15     | 17
varbinary      |        |        | 16     | 17

 

It is clear that the blobs take FOUR TIMES the @@CPU_BUSY that the newer data types take. Definitely, it’s time to have a good Irish wake (with Jameson, Baileys and Guinness) when the blobs are actually dropped. IMHO, the test is interesting because the issue of the mechanism for retrieving the data was sidestepped nicely.

 

That’s it – if you are maintaining a legacy system using BLOBS, feel free to use the above to argue that there is a need to update the code.

 

Addendum

The following was also executed to evaluate retrieval time:

Print @@CPU_BUSY
Select count(1) from perfTest where LEN(CAST(FieldA AS NVARCHAR(MAX))) < 12
Print @@CPU_BUSY
Select count(1) from perfTest where LEN(CAST(Fieldb AS NVARCHAR(MAX))) < 12
Print @@CPU_BUSY
Select count(1) from perfTest where LEN(CAST(Fieldd AS NVARCHAR(MAX))) < 12
Print @@CPU_BUSY
Select count(1) from perfTest where LEN(CAST(Fielde AS NVARCHAR(MAX))) < 12
Print @@CPU_BUSY

With the following results:

DataType      | Pass 1 | Pass 2
nvarchar(max) | 21     | 16
ntext         | 106    | 103
text          | 122    | 128
varchar(max)  | 46     | 46

 

In this test we are actually retrieving the data, and it’s interesting to note that the cost of converting from varchar to nvarchar was approximately the same as the difference between text and ntext (25-35). What was interesting to observe was that instead of being four times slower, the blobs became five times slower.

SSD Drives and SQL Server Temp DB

I’m in the midst of spec’ing out a new server and I know that it will have an SSD drive in it, on which the operating system and key applications will sit. The idea came up that this is also where I should put my TempDB. Lightning performance! Right?

 

After all, the literature reports:

 

with striping and short stroking of 10 high-end 15,000rpm hard disk drives performance gains of more than 16× are attainable, resulting in more than 3,000 random reads per second. [*]

 

I am coming to the conclusion that this could be a dumb idea if implemented blindly. There are two main types of SSD drives:

  • single-level cell (SLC) NAND memory – typically enterprise.
  • multi-level cell (MLC) NAND flash memory – typically consumer; cheaper, slower, with as much as 10 times fewer write cycles over its life span

So I looked at the specification of one such drive, the Super Talent MasterDrive PX Series 64GB 2.5in SATA II Solid State Drive (SLC NAND flash), and read:

  • Read: Unlimited
  • Write: 350.7 yrs @ 50GB write-erase/day

So for the OS and other stuff it is sweet. For a SQL Server TempDB it is definitely not industrial strength – 500 GB of writes/hour into TempDB is not unreasonable, so the life expectancy drops to under 2 years. In fact, digging deeper I read:

one SSD drive has specifications of 130 random writes per second and 18,000 random reads per second, enabling the drive only to maintain a random write rate less than 1% of its read rate. In contrast, hard disk drives can typically maintain write rates close to 90% of their read rates. [*]

The bottom line is clear: SSD drives are NOT for TempDB. The 16x performance gain could be reduced to 1% x 16 = 0.16 of baseline – or SIX TIMES worse performance… Of course, you may find an SSD drive that could work… but you want at least a 16% write rate versus read rate.

The resulting configuration that I am planning:

  • Core i7 920/930
  • SSD drive – likely the Super Talent cited above, for the OS and key applications only
  • 12 GB of RAM (to start)
  • RAIDed SATA drives

Wednesday, March 3, 2010

Best organization for a one man consulting shop…

This question came up twice this last week, so I thought that I would dump my logic and see what people may correct me on… First, the difference between an LLC and a proprietorship is simple – with an LLC, you are not personally liable; so an LLC is definitely preferred. For an S-Corp versus a C-Corp, the difference is that with a C-Corp you may have double taxation.

Bottom line, the choice is between an LLC (which may be taxed as a proprietorship) and an S-Corp. For a short summary see here.

       

The argument is often to go with an S-Corporation because you can take the money out as distributions and avoid employment taxes. This is true if the reasonable salary for your skill set is below the magic ceiling of $106,800. If you are a software consultant – forget it; the first time the IRS does an audit, your low “reasonable salary” will be laughed at and you will be hit with back taxes, penalties and interest! Above $106,800 there is no more Social Security tax; the rest is subject to a 2.9% Medicare tax.

       

So there is no real savings from employment taxes – and what is the relative cost of an LLC versus an S-Corp? Well, with an S-Corp you have a lot more paperwork, and may have state filing costs every year, etc. To quote the link above: “if you form an S-corp, know that you're going to be spending a great many more billable hours with your accountant/attorney.”

       

So if you gross $200,000/year, you would pay 2.9% x $94,000 (roughly $200,000 – $106,800) = $2,726 for Medicare, versus the legal fees, accounting fees and filing costs of an S-Corp PLUS the time to do the paperwork (which could be billable hours!). See this nice table for reference.

       

On the flip side, what if you need to hire someone to help a bit? Well, I would run them through RentACoder.com with a clean specification and a fixed price per quantity of work. There is no risk of them being deemed an employee, and if they suck, the folks at RentACoder will mediate and resolve the issue (usually in your favor if you have done a good clean specification and criteria).

       

When I started up, my wife had managed an S-Corp for many years and frankly did not want the paperwork and filing hassles that came with it. We worked out that an S-Corp was actually going to be a wash for $$$ (ignoring time consumed) or would actually cost us money! So beware of the sell-jobs that accountants may make for an S-Corp; remember, they make money from burdening you with paperwork!

Tuesday, March 2, 2010

Making a full data backup of SQL Server in XML via .Net

I have often worked with clients whose data is on a hosted SQL Server database. They have often encountered problems getting a copy of the database onto their local machine, or have experienced pain when the hosting service failed. They can script the database fine, but often scripting the data will not work.

       

The code below does a full database TABLE backup into a collection of XML files. The logic is simple:

  • Get all of the tables in the database (aka the CATALOG).
  • Do a SELECT * on each table, specifying that the data should be formatted as XML.
    • This will write BINARY and IMAGE data correctly.
    • This will write XML data correctly.
  • Read the XML (which many sites have bad code for).
  • Write it to a folder named with today’s date.

Nice and simple! Just set up your configuration file and away you go!
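Since the code reads ConnectionStrings[0], the configuration file needs a <clear /> so that your entry really is the first one (server and database names are placeholders):

<configuration>
  <connectionStrings>
    <!-- <clear /> removes entries inherited from machine.config so index 0 is ours. -->
    <clear />
    <add name="Backup"
         connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=True" />
  </connectionStrings>
</configuration>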

       

using System;
using System.Configuration;
using System.IO;
using System.Collections.Generic;
using System.Xml;
using System.Text;
using System.Data;
using System.Data.SqlClient;

namespace BackupDatabase
{
    class Program
    {
        static void Main(string[] args)
        {
            List<string> dbTables = new List<string>();
            using (SqlConnection conn = new SqlConnection(
                ConfigurationManager.ConnectionStrings[0].ConnectionString))
            {
                conn.Open();
                // One folder per day, e.g. C:\SqlXmlBackups\20100302
                DirectoryInfo backup = new DirectoryInfo(
                    Path.Combine(@"C:\SqlXmlBackups", DateTime.Now.ToString("yyyyMMdd")));
                if (!backup.Exists)
                {
                    backup.Create();
                }
                // Step 1: get the tables. Make sure to put [ ] around the names.
                SqlCommand cmd = new SqlCommand(
                    "Select '[' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']' from INFORMATION_SCHEMA.Tables where TABLE_TYPE='BASE TABLE'",
                    conn) { CommandTimeout = 50000, CommandType = CommandType.Text };
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        dbTables.Add(rdr.GetString(0));
                    }
                }
                foreach (var table in dbTables)
                {
                    // Remove the [ ] around names to build the file name.
                    FileInfo file = new FileInfo(
                        Path.Combine(
                            backup.FullName,
                            String.Format("{0}.Xml",
                                table.Replace("[", string.Empty).Replace("]", string.Empty))));
                    cmd.CommandText = String.Format("Select * from {0} for Xml Auto, Root('Backup')", table);
                    using (XmlTextWriter wrt = new XmlTextWriter(file.FullName, Encoding.UTF8))
                    {
                        using (var xrdr = cmd.ExecuteXmlReader())
                        {
                            xrdr.Read();
                            while (xrdr.ReadState != ReadState.EndOfFile)
                            {
                                wrt.WriteRaw(xrdr.ReadOuterXml());
                                Console.Write(".");
                            }
                        }
                    }
                    Console.WriteLine(table);
                }
            }
        }
    }
}
      

Monday, March 1, 2010

A Deception building controls off WebControl!

When you build a web control inheriting from WebControl, you will see AccessKey as a property and assume that if you assign to it, it will appear in the HTML produced. WRONG.

       

Not only does it not appear, but if you naively do something like

this.Attributes.Add("accesskey", AccessKey);

you will see it in the HTML but usually it will NOT work…

       

The first problem is that accesskey is valid only on a few HTML elements according to the W3C. They are:

  • A
  • Area
  • Button
  • Input
  • Label
  • Legend
  • TextArea

Table elements, divisions and spans may NOT use accesskey. So what happens when you want the access key to jump to a pure data <table>? The answer is simple – add:

  • <a name="xyz" accesskey="V" />

None of the other elements are appropriate. This also answers the question of how to give an access key to a <span>, <div>, <ul>, <ol>, etc.

       

So if you are writing a web control based on WebControl, then you want to do something like this:

       

// If an access key was assigned to this control add an A element at the top
if (!String.IsNullOrEmpty(AccessKey))
{
    HtmlGenericControl accessA = new HtmlGenericControl("a");
    // make sure it is unique
    accessA.Attributes.Add("name", string.Format("A_{0}", ClientID));
    accessA.Attributes.Add("accesskey", AccessKey);
    Controls.Add(accessA);
}

Now you have a functional access key pointing to your web control that conforms to HTML standards.

A robust set of HTML Access Key assignments

Access keys are used to ensure easier navigation of a web page by the blind and others. They are required for Section 508 accessibility. One of the problems is that different browsers consume different keys, or do not support certain keys. Additionally, some accessibility add-ins also consume some of the keys. In short, it’s a minefield!

       

The following is a recommendation on how to walk this minefield, following government standards and the quirks of various browsers and add-ins. If there are any problems, please email or post.

       

NOTE: Some items are duplicated – this is not avoidable in this minefield. There are only five open slots unless you get into the funky symbols.

       

Key | Conflict in | Usage
A   | IE          |
B   | NS7, O7     |
C   |             |
D   | IE          |
E   | IE, NS7, O7 |
F   | IE, NS7, O7 |
G   | NS7         |
H   | IE, NS7, O7 |
I   |             |
J   |             | Direct Jumps Panels (aka Quick Links Panel)
K   |             | “Site Map” → Menu
L   |             | “Glossary” – page jump
M   | O7          |
N   | O7          |
O   |             | Home Page
P   |             | “Feedback Form”
Q   |             | “Go to FAQ page” – page jump
R   |             | “What’s New” → Bulletin
S   |             | Skip Navigation – goes directly to the main content of the page
T   | IE, NS7     |
U   |             | “Terms and Conditions”
V   | IE, O7      |
W   | NS7, O7     |
X   |             | “Help”
Y   |             | “Complaints”
Z   |             |
0   | !O7         | “Access Key Details”
1   | !O7         | “Home Page”
2   | !O7         | “What’s New” → Bulletin
3   | !O7         | “Site Map” → Menu
4   | !O7         | “Search” – to the search box
5   | !O7         | “Go to FAQ page” – page jump
6   | !O7         | “Help”
7   | !O7         | “Complaints”
8   | !O7         | “Terms and Conditions”
9   | !O7         | “Feedback Form”
>   |             |
<   |             |

!O7 – reports are that these access keys do not work in Opera 7

       

Since I use Master Pages often, it’s a simple matter to set these up appropriately once and not worry about them on individual pages. Also, for accessibility, you want to make sure the complete menu of all pages (Site Map) is on the page, hiding the items via CSS that you do not wish to have displayed.
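As an illustration of the pattern, the “S” (Skip Navigation) assignment above might render as follows – the anchor name is arbitrary:

<!-- At the top of the master page: jumps directly to the main content -->
<a href="#MainContent" accesskey="S">Skip Navigation</a>
...
<!-- Immediately before the main content -->
<a name="MainContent" id="MainContent"></a>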