Sunday, January 31, 2010

Using Resources / Resx objects on a Web Site

In an earlier post I showed how you can take any Resources object (especially PinnedBufferMemoryStream and UnmanagedMemoryStreamWrapper) and obtain its byte[]. Unfortunately, on the web this is not sufficient: when the byte[] is sent to a client, the browser expects a MIME type header to tell it how to handle the bytes. A partial solution is to simply map each mnemonic to a specific known MIME type, but this is a fragile solution.

 

If you are doing localization, then a mnemonic such as SiteLogo may map to different MIME types in different languages, for example:

  • en-US: image is a .bmp
  • es-MX: image is a .png
  • fr-CA: image is a .gif (animated)
  • da-DK: image is a .tiff

Of course, one could simply dictate that all the localized images must be the same format. In reality, if the web site is a commercial product, this rule will usually break down.

 

The practical solution is to use magic numbers. Use the magic number to determine the file type, then look up the MIME type based on the file type. Gary Kessler has a good set of magic numbers, AKA file signatures, to get you started. If a new file type shows up, just collect a sample and run a utility to obtain the additional file signature.

Capturing Signatures

We create two structures to capture the file signatures:

struct MagicNumberInternal
{
    /// <summary>
    /// Number of bytes offset
    /// </summary>
    public Int32 Offset;
    /// <summary>
    /// The byte sequence expected there
    /// </summary>
    public Byte[] Offsetbytes;
}

struct MagicNumberSet
{
    /// <summary>
    /// The bytes expected at the start of the file
    /// </summary>
    public Byte[] Startbytes;
    /// <summary>
    /// The bytes expected at the end of the file
    /// </summary>
    public Byte[] Endbytes;
    /// <summary>
    /// The bytes expected at some offset in the file
    /// </summary>
    public MagicNumberInternal[] Offsets;
}

The next item is defining how to input the magic numbers. I opted for a simple string format as shown below:

  • AA3G – the leading bytes
  • *AA3G – the ending bytes
  • AA3G*AA3G – leading and trailing bytes
  • *12345:AA3G* – bytes located at an offset of 12345
  • AA3G|FFEEAC – alternative leading bytes (two patterns)
  • BBCC*12345:AA3G*987654:AA3G*FFEE – leading, ending and two sets of offset bytes

This information is parsed easily as shown below:

/// <summary>
/// Character to separate sets
/// </summary>
char[] barSep = { '|' };
/// <summary>
/// Character to separate parts:  AA*AA is start end,
/// AA*4563:FF*EE  means start with AA, at 4563 bytes FF is found, ends with EE
/// </summary>
char[] pSep = { '*' };

/// <summary>
/// Delimiter between bytes offset (int) and the bytes (Hex)
/// </summary>
char[] offSep = { ':' };
Queue<MagicNumberSet> _MagicNumberPatterns;

ParsePattern

/// <summary>
/// Parses the magic pattern into a structure.
/// </summary>
/// <param name="pattern">A Magic Number pattern, for example AA*4563:FF*EE  </param>
/// <returns>A structure with the contents ready for use</returns>
MagicNumberSet ParsePattern(string pattern)
{
    var mniQueue = new Queue<MagicNumberInternal>();
    MagicNumberSet ret = new MagicNumberSet() { Startbytes = null, Endbytes = null, Offsets= null };
    var parts = pattern.Split(pSep, StringSplitOptions.None);
    for (var i = 0; i < parts.Length; i++)
    {
        switch (i)
        {
            case 0:
                if (!String.IsNullOrEmpty(parts[0]))
                    ret.Startbytes = parts[0].FromHexToByte();
                break;
            default:
                if (!String.IsNullOrEmpty(parts[i]))
                {
                    var breakdown = parts[i].Split(offSep, StringSplitOptions.RemoveEmptyEntries);
                    switch (breakdown.Length)
                    {
                        case 1:
                            ret.Endbytes = breakdown[0].FromHexToByte();
                            break;
                        case 2:
                            mniQueue.Enqueue(new MagicNumberInternal { Offset = int.Parse(breakdown[0]), Offsetbytes = breakdown[1].FromHexToByte() });
                            break;
                        default:
                            throw new DataMisalignedException(String.Format("The MagicNumber string {0} is invalid", pattern));
                    }
                }
                break;
        }        
    }
    if (mniQueue.Count > 0)
    {
        ret.Offsets = mniQueue.ToArray();
    }
    return ret;
}

 

The test to see if there is a match is shown below:

 

IsMatch

/// <summary>
/// Takes some data and sees if there are any matches.
/// </summary>
/// <param name="data"></param>
/// <returns></returns>
bool IsMatch(byte[] data)
{
    foreach (MagicNumberSet set in _MagicNumberPatterns.ToArray())
    {
        bool pre = set.Startbytes == null || PrefixMatch(data, set.Startbytes);
        bool post = set.Endbytes == null || PostfixMatch(data, set.Endbytes);
        bool mid = true;
        if (set.Offsets != null)
        {
            foreach (var item in set.Offsets)
            {
                if (item.Offset < 1 || item.Offsetbytes == null || !OffsetMatch(data, item.Offsetbytes, item.Offset))
                {
                    mid = false;
                }
            }
        }
        if (pre && post && mid)
        {
            return true;
        }
    }


    return false;
}

Thus, the format supports multiple signatures for one file type with complex matching patterns. A dictionary of <magicNumberSignature, MimeType> is the closing piece of the solution.
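To close the loop, here is a minimal sketch of that dictionary (the class name and lookup shape are illustrative, not from my production code); the hex signatures for PNG, GIF and JPEG are real leading bytes from Kessler's list:

```csharp
using System;
using System.Collections.Generic;

static class MimeLookup
{
    // Illustrative closing piece: map leading-byte signatures to MIME types.
    static readonly Dictionary<string, string> MimeBySignature =
        new Dictionary<string, string>
        {
            { "89504E47", "image/png"  },
            { "474946",   "image/gif"  },
            { "FFD8FF",   "image/jpeg" },
        };

    // Returns the MIME type whose leading signature matches,
    // or a safe default for unknown data.
    public static string MimeTypeOf(byte[] data)
    {
        foreach (var pair in MimeBySignature)
        {
            string sig = pair.Key;
            int byteCount = sig.Length / 2;    // two hex chars per byte
            if (data.Length < byteCount) continue;

            bool match = true;
            for (int i = 0; i < byteCount; i++)
            {
                byte expected = Convert.ToByte(sig.Substring(i * 2, 2), 16);
                if (data[i] != expected) { match = false; break; }
            }
            if (match) return pair.Value;
        }
        return "application/octet-stream";
    }
}
```

In practice the key would be the full pattern string and the check would go through IsMatch, but the dictionary shape is the same.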

Post Script:

Some of the supporting functions are shown below.

 

FromHexToByte

/// <summary>
/// Utility to convert 0x2DF8 or 2DF8 into a byte array
/// such as { [2D],[F8] }
/// </summary>
/// <param name="value">A hex string, optionally prefixed with 0x</param>
/// <returns>An equivalent byte array</returns>
public static byte[] FromHexToByte(this string value)
{
    Queue<byte> ret = new Queue<byte>();
    var local = value.StartsWith("0x", StringComparison.OrdinalIgnoreCase) ? value.Substring(2) : value;
    for (int i = 0; i < local.Length; i = i + 2)
    {
        ret.Enqueue(Byte.Parse(local.Substring(i, 2), System.Globalization.NumberStyles.HexNumber));
    }
    return ret.ToArray();
}

PostfixMatch

/// <summary>
/// Determines if the trailing bytes match for the last common bytes
/// </summary>
/// <param name="a">a byte array</param>
/// <param name="b">a byte array</param>
/// <returns>true if all of the common trailing bytes are the same</returns>
static bool PostfixMatch(byte[] a, byte[] b)
{
    var length = a.Length > b.Length ? b.Length : a.Length;
    for (int i = 1; i <= length; i++)
    {
        if (a[a.Length - i] != b[b.Length - i])
        {
            return false;
        }
    }
    return true;
}

OffsetMatch

/// <summary>
/// Determines if the bytes match at the specified offset
/// </summary>
/// <param name="a">a byte array</param>
/// <param name="b">the bytes expected at the offset</param>
/// <param name="offset">the byte offset into a</param>
/// <returns>true if all of the bytes are the same</returns>
static bool OffsetMatch(byte[] a, byte[] b, int offset)
{
    if (a.Length < b.Length + offset)
    {
        return false;
    }
    for (int i = 0; i < b.Length; i++)
    {
        if (a[i + offset] != b[i])
        {
            return false;
        }
    }
    return true;
}

PrefixMatch

/// <summary>
/// Determines if the bytes match for the first common bytes
/// </summary>
/// <param name="a">a byte array</param>
/// <param name="b">a byte array</param>
/// <returns>true if all of the common leading bytes are the same</returns>
static bool PrefixMatch(byte[] a, byte[] b)
{
    var length = a.Length > b.Length ? b.Length : a.Length;
    for (int i = 0; i < length; i++)
    {
        if (a[i] != b[i])
        {
            return false;
        }
    }
    return true;
}

Saturday, January 30, 2010

NEVER use NT Groups to control SQL Server Permissions

About a year ago I looked at the use of Windows Groups as a mechanism for controlling access to SQL Server for a PCI-DSS project. PCI requires that all access be done by accounts that uniquely identify the user. Best practices mandate Integrated Security, so we must use Windows Users. There were two apparent solutions:

  • Associate each Windows user to an appropriate SQL Login
  • Associate each Windows user to a Windows Group that is associated to a SQL Login

Since the expected administrators of the PCI application are not DBAs and may not be SQL Server knowledgeable, the second approach looked ideal. Their role is to determine who may or may not access the application.

This simple idea had a nasty gotcha:

  • Grant SQL permissions to anyone in the Windows Group.
    • This works perfectly for granting permissions
    • I expected that if a user was removed from the Windows Group, permissions would be immediately removed.

UNFORTUNATELY, if a Windows user was removed from the Windows Group, the user still retained SQL permissions.

Some Bingoogling found that the issue is known in another context and cited on an official Microsoft Site:

Note If you use SQL Server integrated security, keep in mind that if you grant a Windows NT user group access to the SMS site database, this permission is not dynamic. As new users are added to the Windows NT user group, they are not given SQL Server security rights unless you add them individually.

technet.microsoft.com 

 

Hence the behavior is confirmed: permissions are granted only to the users who are in the Windows group at the time of the grant (a snapshot). So this preferred approach totally collapses:

  • When the association is originally done, it grants all NT Users in the group the specified permission. The User Group is nothing more than a temporary container.
  • Adding a user to the User Group later will NOT result in permissions being granted.
  • Removing a user from the User Group later will NOT result in permissions being revoked.

In practice, when someone adds a user to the NT Group they will expect it to work. When the user complains, the admin will assume something went wrong and manually add the user’s SQL permission. When they remove a user from the NT Group, they will also expect it to work – after all, looking under Security, they do not see the user, just the NT Group; the user will not complain – and you have a security breach.

 

Bottom line: Domain/Windows Users must be individually added to and removed from SQL Server (for example, mapped to a specific SQL Login). Never use an NT Group account to control permissions in SQL Server. If you are ever called on to audit a system, check if any NT Groups are used – if so, feel free to go berserk (it is totally justified)!

Will it be fixed? Is this a bug?

Including the ability to assign a Windows/Domain Group to a SQL Login is the bug. A Domain/NT Group can consist of other Domain/NT Groups – so making the permission dynamic could mean every query requires a remote domain server to be contacted and hundreds of groups walked. This can be devastating performance overhead. Resolving the group to individual users and granting those users the permission is a logical solution. The presentation/representation is the bug, not the behavior. The presentation should read “Grant to all current NT Users in this NT Group” as an action, and the tooling should not show any NT Group tied to a login <—that is the bug! It misleads most users.

 

Another way to view it is this: it is easy to walk all children of an NT Group to get a list of all NT Users under that NT Group and then give those users permissions tied to their NT User accounts. The reverse is an exploding search problem: to find out if a user has a SQL permission, you have to walk all of the NT Groups the user belongs to, then the NT Groups that those groups belong to, and so on, until you have exhausted the parentage of every group the user may belong to OR until you have found the needed permission. It is a potential complete-enumeration problem – manageable for a small domain with few NT Groups – but in a large corporation it can result in a massive number of groups that must be traced upwards until the SQL permission is found or the NT Groups are exhausted – a SQL performance buster, because every security operation may take minutes to determine if it is allowed.

Thursday, January 28, 2010

An Improved Path.Combine to Handle Relative Paths and ~’s

Recently I was working with parsing a .Resx file which stored file locations as relative paths (e.g. ..\..\..\Images\Logo.jpg) and found that Path.Combine does not work with them. I have often wished Path.Combine(DirectoryInfo, string) existed, but because Path is static I could not extend it. I ended up writing the short utility below, which handles more variations than Path.Combine.

 

/// <summary>
/// A utility to resolve ..\ in the relative path
/// </summary>
/// <param name="directory">Directory information on the base</param>
/// <param name="relativePath">The dotted relative path</param>
/// <returns>The combined absolute path</returns>
public static string CombinePath(DirectoryInfo directory, string relativePath)
{
    while (relativePath.StartsWith(@"..\"))
    {
        relativePath = relativePath.Substring(3);
        directory = directory.Parent;
    }
    if (relativePath.StartsWith(@"~"))
    {
        if (System.Web.HttpContext.Current == null)
        {
            //An intelligent guess of the intent     
            relativePath = relativePath.Substring(1);
        }
        else
        {
            return System.Web.HttpContext.Current.Server.MapPath(relativePath);
        }
    }
    return Path.Combine(directory.FullName, relativePath);
}

public static string CombinePath(string directory, string relativePath)
{ return CombinePath(new DirectoryInfo(directory), relativePath); }

Wednesday, January 27, 2010

Using Datasets in WCF and Webservices with Legacy Applications

Recently I came across the issue of converting a DataTable to clean XML, specifically the problem that “a Web Service has to deliver neutral/agnostic XML and NOT .NET related information.” The solution code I found used XmlDocument, which is not the best performing, as well as not being of the “monkey-see, monkey-do” quality that is often needed.

 

The solution is simple if you add an extension to the dataset as shown below:

/// <summary>  
/// Returns a dataset as a Xml String 
/// </summary> 
/// <param name="ds">A DataSet</param> 
/// <param name="isDotNet">If true, optimize for .Net client, if false for Legacy(Java)</param> 
/// <param name="includeSchema">If true,include the Schema in the Xml</param> 
/// <returns>The dataset serialized as Xml</returns> 
public static string GetXml(this DataSet ds, bool isDotNet, bool includeSchema)
{
    var swriter = new StringWriter();
    using (var xwriter = new XmlTextWriter(swriter))
    {
        xwriter.Formatting = Formatting.Indented;
        if (includeSchema)
        {
            ds.WriteXml(xwriter, XmlWriteMode.WriteSchema);
        }
        else
        {
            ds.WriteXml(xwriter, XmlWriteMode.IgnoreSchema);
        }
    }


    if (isDotNet)
    {
        return swriter.ToString();
    }

    // remove the Microsoft tags 
    XDocument doc = XDocument.Parse(swriter.ToString(), LoadOptions.PreserveWhitespace);

    XNamespace namespaceToRemove = XNamespace.Get("urn:schemas-microsoft-com:xml-msdata");

    doc.Descendants().Attributes().Where(a => a.Name.Namespace == namespaceToRemove).Remove();

    doc.WriteTo(new XmlTextWriter(swriter = new StringWriter()));

    return swriter.ToString();
}

What is the result? With isDotNet=true, we get the XML below; the msdata attributes are the offending items that get stripped in the legacy version:

   

<NewDataSet>
  <xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
      <xs:complexType>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element name="Table1">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="LastName" type="xs:string" minOccurs="0" />
                <xs:element name="WT" msdata:DataType="System.DateTimeOffset" type="xs:anyType" minOccurs="0" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:choice>
      </xs:complexType>
    </xs:element>
  </xs:schema>
  <Table1>
    <LastName>Lassesen</LastName>
  </Table1>
</NewDataSet>

 

With isDotNet=false, we get:

 

<?xml version="1.0" encoding="utf-16"?><NewDataSet>
  <xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <xs:element name="NewDataSet">
      <xs:complexType>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element name="Table1">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="LastName" type="xs:string" minOccurs="0" />
                <xs:element name="WT" type="xs:anyType" minOccurs="0" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:choice>
      </xs:complexType>
    </xs:element>
  </xs:schema>
  <Table1>
    <LastName>Lassesen</LastName>
  </Table1>
</NewDataSet>

Summary

This means that instead of having one contract that is .Net specific:

 

[OperationContract] 
DataSet GetAccountSummary(string user);      

We could use the following, which performs just as well for .Net and simplifies use from Java or other legacy clients.

[OperationContract]  
String GetAccountSummary(string user, bool isDotNet, bool includeSchema);
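On the consuming side, a quick sanity check that the agnostic XML is trivially usable – here with LINQ to XML against a cut-down version of the payload shown above:

```csharp
using System.Xml.Linq;

// A client (or a unit test) can consume the returned XML with plain LINQ to XML.
// The sample payload is a trimmed version of the isDotNet=false output above.
string xml = @"<NewDataSet><Table1><LastName>Lassesen</LastName></Table1></NewDataSet>";

var doc = XDocument.Parse(xml);
string lastName = doc.Root.Element("Table1").Element("LastName").Value;
// lastName == "Lassesen"
```

A Java client would do the equivalent with any standard XML parser – no .NET types required.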

Exporting .Resources to Sql Server (byte [] or string)

One of my current tasks is writing a utility to import the contents of an arbitrary .resources file (a compiled .Resx file) into SQL Server. The problem is that some of the resources returned as objects are INTERNAL classes, namely:

  • System.IO.PinnedBufferMemoryStream
  • System.IO.UnmanagedMemoryStreamWrapper (1 page of Google results only… the unknown .Net Class)

Two solutions to this. The first:

byte[] bytes = null;
// The following two classes are not exposed :-(
// so we must match on the type name instead of using 'is'
switch (value.GetType().FullName)
{
    case "System.IO.PinnedBufferMemoryStream":
    case "System.IO.UnmanagedMemoryStreamWrapper":
        using (Stream stream = (Stream)value)
        {
            bytes = new byte[stream.Length];
            stream.Read(bytes, 0, (int)stream.Length);
        }
        break;
}

It turned out that there is a simpler routine (thanks to Bradley Grainge)

else if (value is Stream)
{
    using (var stream=value as Stream)
    {
        bytes = new byte[stream.Length];
        stream.Read(bytes, 0, (int)stream.Length);
        return bytes;
    }
}

The rest of the code is pretty simple; all of the other object types are easy to convert. There was one minor twist because the resources were to be used on a web site – to handle WMF and related non-web images, I converted everything but GIF to PNG. GIF was not converted because it could be animated.

 

The entire routine to convert a resource object to byte[] is shown below. The FileInfo branch was added to allow handling of file references in a .Resx file.

/// <summary>
/// Converts a resource object into a byte array
/// </summary>
/// <param name="value">The resource object</param>
/// <returns>a byte[] suitable for returning to a Response</returns>
public static byte[] ObjectToArray(object value)
{
    byte[] bytes = null;
    if (value is String)
    {
        throw new DataMisalignedException("A string was found, an object was expected");
    }
    else if (value is Byte[])
    {
        return (Byte[])value;
    }
    else if (value is FileInfo)
    {
        return ((FileInfo)value).GetFile();
    }
    else if (value is Icon)
    {
        Icon ico = (Icon)value;
        using (MemoryStream mem = new MemoryStream())
        {
            ico.Save(mem);
            return mem.ToArray();
        }
    }
    else if (value is Bitmap)
    {
        Bitmap bitmap = (Bitmap)value;
        using (MemoryStream mem = new MemoryStream())
        {
            if (bitmap.RawFormat.Guid == System.Drawing.Imaging.ImageFormat.Gif.Guid)
            {
                bitmap.Save(mem, System.Drawing.Imaging.ImageFormat.Gif);
                return mem.ToArray();
            }
            else
            {
                bitmap.Save(mem, System.Drawing.Imaging.ImageFormat.Png);
                return mem.ToArray();
            }
        }
    }
    else if (value == null)
    {
        return null;
    }
    else if (value is Stream)
    {
        using (var stream = value as Stream)
        {
            bytes = new byte[stream.Length];
            stream.Read(bytes, 0, (int)stream.Length);
            return bytes;
        }
    }
    else
        return null;
}
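Putting it together, the import loop might look like the sketch below (the class name is illustrative; the full routine would call ObjectToArray for each entry, while this self-contained sketch handles only byte[] and Stream entries):

```csharp
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Resources;

static class ResourceImporter
{
    // Hypothetical import loop: enumerate every entry of a compiled
    // .resources file and convert its value to byte[].
    public static Dictionary<string, byte[]> ReadAll(string resourcesFile)
    {
        var result = new Dictionary<string, byte[]>();
        using (var reader = new ResourceReader(resourcesFile))
        {
            foreach (DictionaryEntry entry in reader)
            {
                byte[] payload = null;
                if (entry.Value is byte[])
                {
                    payload = (byte[])entry.Value;
                }
                else if (entry.Value is Stream)
                {
                    using (var stream = (Stream)entry.Value)
                    {
                        payload = new byte[stream.Length];
                        stream.Read(payload, 0, (int)stream.Length);
                    }
                }
                result[(string)entry.Key] = payload;
                // e.g. insert (entry.Key, payload) into SQL Server here
            }
        }
        return result;
    }
}
```

The key/payload pairs are then trivial to write to a table with a varbinary(max) column.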

Tuesday, January 26, 2010

Getting detailed System Information without a DLLImport in dotNet

The two functions below make getting system information trivial because they write the information out as clean XML, which may be examined by XPath or LINQ.

  • The first function calls the second using a built-in list of the WMI objects (this can take a long time to run and will produce a huge file: BE WARNED – run it once to see what is available).
  • The second function handles a short list of classes specified in a string array (for example, loaded from a text file with one item per line). This is likely the most common use.
  • If you are planning to write to a file, you may run into encoding issues.

using System; 
using System.Collections.Generic; 
using System.Management; 
using System.Text; 
using System.IO; 
using System.Xml;

Everything in the Wmi

The following obtains the list of available objects (some 1200+ on my Windows 7 box) and then gets information about each

 

/// <summary> 
/// Enumerates all of the WMI classes on the local machine 
/// </summary> 
/// <returns>Array of WMI providers available</returns> 
public static string[] AvailableWmiClasses() 
{ 
    var queue =new Queue<string>(); 
    ManagementObjectSearcher searcher = new ManagementObjectSearcher( 
      new ManagementScope("root\\cimv2"), 
     new WqlObjectQuery("select * from meta_class"), 
     null); 

    foreach (ManagementClass wmiClass in searcher.Get()) 
    { 
        queue.Enqueue(wmiClass["__CLASS"].ToString());                
    } 
    return queue.ToArray(); 
} 
    
/// <summary> 
/// Get information about all WMI providers installed locally 
/// </summary> 
/// <returns>String containing information formatted as XML</returns> 
public static String GetFullWmi() 
{ 
    return GetWmi(AvailableWmiClasses()); 
}

 

The Core Routine

public static string GetWmi(string[] wmiObjects) 
{ 
    StringWriter stringWriter = new StringWriter(); 

    XmlTextWriter twriter = 
        new XmlTextWriter(stringWriter); 
    twriter.Formatting = Formatting.Indented; 

    twriter.WriteStartDocument(); 
    twriter.WriteStartElement("wmiinformation"); 
    twriter.WriteAttributeString("start", DateTime.Now.ToString("s")); 
    foreach (var wmiObject in wmiObjects) 
        try 
        { 
            // next line may fail so we write Start Element only if it succeeds 
            DateTime start = DateTime.Now; 
            ManagementObjectSearcher searcher = new ManagementObjectSearcher(String.Format("SELECT * FROM {0}", wmiObject)); 
            DateTime endquery = DateTime.Now; 
            twriter.WriteStartElement(wmiObject); 
            TimeSpan elapse = endquery - start; 
            twriter.WriteAttributeString("queryMsec", elapse.TotalMilliseconds.ToString()); 

            foreach (ManagementObject wmi in searcher.Get()) 
            { 
                twriter.WriteStartElement("item"); 
                foreach (var prop in wmi.Properties) 
                { 
                    if (prop.Value != null) 
                    { 
                        twriter.WriteStartElement("property"); 
                        switch (prop.Value.GetType().Name) 
                        { 
                            case "Boolean": 
                                twriter.WriteAttributeString(prop.Name, (bool)prop.Value ? "true" : "false"); 
                                break; 
                            case "String": 
                                twriter.WriteAttributeString(prop.Name, (string)prop.Value); 
                                break; 
                            case "UInt64": 
                            case "UInt32": 
                            case "UInt16": 
                            case "Int64": 
                            case "Int32": 
                            case "Int16": 
                            case "Double": 
                            case "Byte": 
                                twriter.WriteAttributeString(prop.Name, prop.Value.ToString()); 
                                break; 
                            case "UInt16[]": 
                                foreach (var x in (UInt16[])prop.Value) 
                                { 
                                    twriter.WriteElementString("item", x.ToString()); 
                                } 
                                break; 
                            case "String[]": 
                                foreach (var x in (String[])prop.Value) 
                                { 
                                    twriter.WriteElementString("item", x.ToString()); 
                                } 
                                break; 
                            case "Byte[]": 
                                foreach (var x in (Byte[])prop.Value) 
                                { 
                                    twriter.WriteElementString("item", x.ToString()); 
                                } 
                                break; 
                            default: 
                                twriter.WriteAttributeString(prop.Name, prop.Value.ToString()); 
                                break; 
                        } 

                    twriter.WriteEndElement(); 
                    } 
                } 
                twriter.WriteEndElement(); 
            }

            twriter.WriteEndElement(); 
        } 
        catch (Exception exc) 
        { 
            twriter.WriteElementString("Error", exc.Message); 

            twriter.WriteEndElement();  // Close the WmiObject Tag 
        } 
    twriter.WriteEndElement(); 
    twriter.WriteEndDocument(); 
    twriter.Close(); 
    return stringWriter.ToString(); 
}

 

Writing to a file with appropriate encoding

using (var output = new StreamWriter("systeminformation.xml",false,Encoding.Unicode)) 
{ 
      output.Write(wmiList == null ?  GetFullWmi() :  GetWmi(wmiList)); 
}

Sunday, January 17, 2010

Purple Software

This is a follow-up to my post: What Is A Software Company?

 

A software company’s job is to distribute the cost of making the software over many customers. If this is true, then acquiring as many customers as possible is important to the company. The more customers you acquire, the less you can sell your software for. If your software costs a million dollars to develop and you have ten customers, you need to sell it for $100,000 per customer. However, if you have 100,000 customers, you can sell it for $10 per customer (plus, somewhere, you need profit).

 

Being able to sell it for less means you make it harder for your competition.  If both companies require a million dollars to make the software and you have 100,000 customers and they have 10 customers, you will have very different price points, making it hard for them to reach profitability or  acquire the next customer.  It also makes it very hard for another company to enter the market.  Because of the market dominance effect of software, we see a lot of new software companies enter emerging markets, trying to acquire enough customers early in the new technology space. 

 

Lots of customers means that the software must solve the needs of all the customers, which means broad, generalized software with lots of features. However, we are seeing profitable software companies appearing that have more specialized software that caters to a market niche. How is this possible? It is because they have kept their expenses low.

 

“If you can keep your costs down when running a software company you can cater to a smaller group of customers.”

 

It is less expensive to make software now than at any time in history. Technologies like MVC, .NET, Ruby on Rails, the Internet (for distribution), forums/wikis/video streaming (for documentation), Flash/Silverlight/HTML (for platform) and MSI (for Windows installation) have greatly decreased the costs of making software. Customers want to buy software that is more specialized to what they are doing – it makes their jobs easier. Such software used to be expensive (an example being television script-writing software, specialized, as compared to Microsoft Word, generalized); lower costs for software companies have brought down the prices for specialty software.

 

Seth Godin wrote a book called Purple Cow in which he stressed that in order to market your business you need to be remarkable. One type of remarkable software is software that feels like custom software (developed specifically for a customer) but is priced like generalized commercial software. Successful new software companies are emerging that take the Purple Cow philosophy and make specialized software for niche markets, using new technologies to keep their costs down. This is Purple Software.

 

{6230289B-5BEE-409e-932A-2F01FA407A92}

What Is A Software Company?

As a programmer running a software company I didn’t realize this at first:

 

“A software company’s job is to distribute the cost of making the software over many customers”

 

It is a disservice to your customers not to acquire enough customers to cover the cost of making the software. If, after you have spent your investors’ money to make the software, you get too few customers to continue operations and repay the investors, eventually you will go out of business. Going out of business is not what your customers want from your company; they purchased your software (over choosing open source or shareware) so that it would have company support. They have an expectation of continual upgrades and some form of product support.

 

In other words, if you can’t acquire enough customers to distribute the cost of the software, you don’t have a software company; you have an open source project.  Though open source is many things, it can also be thought of as a software product whose potential sales will not cover the cost of its development.  We see a lot of software start as open source, and when the controllers of the project find that they have a critical mass of customers, they transition the software to a commercial product.  Some do so successfully, some do not, depending on how many customers they offend.  Notice that this transition has nothing to do with technology – it has to do with customers.

 

If you have one customer (or one big customer and a couple of others) you are a service company, not a software company.  The customer has hired you to build their software, and they plan to pay all the costs of making that software.

 

A software company is all about customers, and has very little to do with technology or technology people.  A software company could outsource all the technology, and as long as the expenses (and some profit) could be divided amongst all the customer sales, it would have a viable company.  Hiring good programmers and program managers is a great way to keep your costs down, since outsourcing is expensive.  However, the act of making software is not as strategic to the software company as technology people might think.

 

Software companies usually make money hand over fist; they have huge profit margins and high price/earnings ratios.  This means there is plenty of money for outsourcing.  I would contend that if your software company couldn’t afford to outsource, it should have a lower P/E ratio than comparable software companies.  Don’t get me wrong: hiring in house reduces your expenses, and that is always good.

 

Business people will find the above statements blandly obvious; it boils down to “If you don’t have customers, you don’t have a company”.  However, it is surprising how many technologists, programmers and program managers will answer the question “What is a software company?” with “A company that makes software”.  From their technology perspective it is the technology that drives the company, not the sales.

 


Tuesday, January 12, 2010

Lazy exception handling...

The typical C# code seen in the wild does: try { } catch (Exception exc) { }.  Best practice is to handle each type of exception that could occur separately; tools like Telerik JustCode can (or soon will) autogenerate the list of possible exceptions.  This brings me to the second problem: the failure to use:

 

/// <exception cref="System.Exception"> Thrown when... .</exception> 

See the XML documentation comments reference for details.


What is sad is that tools like StyleCop do not support checking for this (even when it would be possible).  In code that I review, I often see magic-phrase strings being returned instead of the exception being allowed to bubble up the stack.
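To make the points above concrete, here is a minimal sketch (the `ConfigReader` class, its `ParsePort` method, and the sample inputs are hypothetical, invented for illustration): the method documents each exception it can throw with an `<exception>` tag and lets exceptions bubble up rather than returning a magic string, while the caller handles each exception type separately instead of a blanket catch (Exception):

```csharp
using System;

public static class ConfigReader
{
    /// <summary>Parses a port number from a configuration string.</summary>
    /// <exception cref="System.ArgumentNullException">Thrown when <paramref name="raw"/> is null.</exception>
    /// <exception cref="System.FormatException">Thrown when <paramref name="raw"/> is not a valid integer.</exception>
    public static int ParsePort(string raw)
    {
        if (raw == null)
            throw new ArgumentNullException(nameof(raw));

        // No magic "ERROR" string returned here; a bad value throws
        // FormatException, which bubbles up to the caller.
        return int.Parse(raw);
    }

    public static void Main()
    {
        // Handle each documented exception type separately,
        // rather than a lazy catch (Exception exc) { }.
        try
        {
            Console.WriteLine(ParsePort("8080"));
        }
        catch (ArgumentNullException)
        {
            Console.WriteLine("no value supplied");
        }
        catch (FormatException)
        {
            Console.WriteLine("value is not numeric");
        }
    }
}
```

The `<exception>` tags are exactly what a checking tool could verify against the method body: every throw statement (and every undocumented exception escaping a callee) should have a matching tag.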

Saturday, January 2, 2010

New Year's Prediction - Photography

My parents are scanning their slide collection into digital images, and for the years between 1971 and 1981 they have 780 pictures; for the first ten years of my life that is 78 pictures a year. I remember getting my first camera around the age of 13, when film cost roughly $5.00 per roll plus $10.00 to develop: 24 pictures at about 60 cents apiece. Photography was too expensive in the late 80s to experiment with – every picture had to be a good one, and we would wait weeks to see if the pictures we took would turn out.

 

Today, 110 years after George Eastman introduced the Brownie to the world, I will take 100 pictures in a day. There are no costs associated with taking the pictures, just the price of the camera. Because it doesn’t cost me anything, my 4 year old daughter has her own Fisher-Price camera and has taken more photos than her grandparents took in those first ten years.

 

Just like Tiger Woods, who started golfing at age 2 and became a world champion golfer, children today can afford to start taking pictures earlier in life. Since the film doesn’t need to be developed, budding photographers get immediate visual feedback on the quality of the composition, lighting, and framing of their pictures. Lower costs, more opportunity, children starting at a younger age, and better equipment mean better photos.

 

“Over the next twenty years we will see better photos than at any time in history.”

 
