Tuesday, December 4, 2012

The Urgent versus the Wise… the next crisis?

I caught this blog post, The Tyranny of the Urgent, via IEEE. It happened to resonate with some recent reflections on the pluses and minuses of Continuous Deployment and Continuous Integration. Jim observes that "Agile development easily devolves into management by crisis."

 

I have observed that being agile and delivering quickly tends to produce superficial products. Products that are then reworked, and reworked, and reworked. At one extreme, you have waterfall development, which results in exhaustively detailed plans that are difficult to adjust. At the other extreme, agile has been taken by some as a gospel of immediate execution. All of the troops get up and charge across the open field. Yes, the objective may be taken, but there is often a high body count, a wasteful body count.

 

Recently I talked to a neuropsychologist who has treated many battle casualties from Microsoft, Amazon, Google and many start-ups. Unlike the US Marines, with their "no man left behind", casualties are brushed off with "well, they should not be in this business if they cannot handle it".

 

Over a career spanning a few decades, I have seen long stretches where "no man left behind" was the norm, the social contract between company and employees. Employees who were burned out or needed R&R would be shifted to suitable work as a matter of course. Today, firms show an employee out the door because they are not up to speed on the flavor-of-the-week. (What, no training of employees?) In reading comments at GlassDoor.com, it seems that Netflix is likely the worst. A quick Google will find many firms with a 100+% annual turnover rate for developers. When I started working in this industry, a rate of 5% or less was not unusual.

 

So the question is:  Where will this end? How will it end?

Monday, December 3, 2012

Installing Motorola Droid as an Android Debug Bridge device

In my last post I successfully installed the Kindle driver as an Android Debug Bridge (adb) device. My Motorola Droid was not recognized as an adb device – and for one application that I am planning, I need GPS. So once more into the world of partially correct forum posts.

  1. First item to check was to ensure that adb was enabled on the device! It turned out that I had not enabled it (I had enabled Unknown Sources).
    1. When I connected the USB after changing the adb setting, I received a driver error. The phone now shows as a Drive (G:\) only instead of an MTP USB Device. One step forward, half a step back.
  2. I decided to double check that adb really was not working… so, at the command line, I ran adb devices:
    [screenshot: adb devices output]
  3. What! It's there. The device ID is different from the Kindle's.

This was a mercifully short post. If I had not double checked, I could easily have gone down a rabbit hole trying to get something looking like a phone to show up in the device manager. A rabbit hole that could have resulted in hours of frustration. Instead, I just see a misleading "G:\" for my Motorola Droid, and "Kindle" for my Kindle. Aaarrrgh!
[screenshot: the misleading device listing]

Titanium Studio “Appcelerator” and Kindle HD development on Windows-64

I've started to play with Appcelerator because it is effectively a cross-platform development platform for Droid, HTML5 and iOS. About two years ago I had done some Droid development just using the SDK. I have encountered a few frustrations and have not been pleased with the load time for the Droid Emulator on a Windows-7 x64 box with quad cores and 10 gigs of memory. For some simple applications, I have drunk a cup of coffee in the time that it took to build and launch in the emulator.

 

My traditional "magical chant" for getting rid of technical frustrations is to start documenting. Often the process of documenting results in discovering skipped tests or alternative paths that need to be tried.

 

Steps

  1. Download Titanium from http://www.appcelerator.com/. It is a large download. I ended up re-installing it to C:\Titanium; according to forum posts, a space anywhere in the install paths causes problems. Doing this resolved some build issues and set a pattern that I kept for the rest of the installs.
  2. Download the Android ADT Bundle for Windows. Based on the above experience, I installed it to C:\Android. KISS.
  3. Verified that I had the latest Java SDK (JDK), installed into C:\Java.
  4. Modify the environment variables.
    [screenshot: Environment Variables dialog]
    1. Added JAVA_HOME, set it to C:\Java\jdk1.7.0_09\ (needless to say, this should match what you have installed)
    2. Added to PATH, the JAVA bin folder,   C:\Java\jdk1.7.0_09\bin;
  5. After some failed compiles and digging in forums, it appears that Android 2.2 (API 8) needs to be installed as well. See the Android SDK Manager (C:\Android\sdk\tools\android.bat) screenshot below.
    [screenshot: Android SDK Manager]
  6. At this point, I could successfully build the HelloWorld application and launch it in the emulator. Needless to say, there seems to be a lot more overhead than with .Net development in Visual Studio.
  7. Because the emulator was so slow to load, and Titanium offers build-to-device, I decided to try it. Plugging in both the Kindle Fire and a proper Android phone via USB cable --- nothing was seen. No new drives…. Manure! Evidently the above process altered something with drivers, the registry, etc. Since the Kindle HD was my preferred Droid device, I proceeded over to Amazon to see how to make it so.

Amazon Kindle (and HD)

  1. First, we needed to add additional Android SDKs; see this page for a complete list (API 10, 15).
    1. There is also a list of Kindle Emulators and how to set them up at this page.
      Personally, I prefer defining them from the command line (“..\tools\android avd”). Then switch to Device Definitions and just click:
      [screenshot: Device Definitions]
    2. OUCH, I also have to install Java 6 according to this page; the x86 version (even if you are running x64).
  2. Then you need to edit some files (more details here)
    1.  C:\Users\{UserName}\.android\adb_usb.ini, add  0x1949 on the last line
    2. C:\Users\{UserName}\.android\android_winusb.inf … which appears to be missing!
  3. Searching the drive,
    [screenshot: search results]
    I found two copies of the file android_winusb.inf
    1. C:\Android\sdk\extras\google\usb_driver
    2. C:\eclipse\AndroidSDK\extras\google\usb_driver
  4. When in doubt, modify all of them…
  5. Completing the instructions here,  I expected success --- unfortunately more manure….
    [screenshot]
  6. I got the message:
    [screenshot: the error message]
  7. Checking this page, I found different instructions specific for Windows-7. Just run C:\Android\sdk\extras\amazon\kindle_fire_usb_driver\Kindle Fire ADB drivers.exe.
  8. Checking Device Manager, I see that the Kindle now has a driver.
    [screenshot: Device Manager showing the Kindle driver]
  9. Checking things at the command line shows that it is now there!
    [screenshot: adb devices listing the Kindle]
  10. I disconnected the Fire, plugged in my phone – no device listed… Checking Device Manager shows that I am missing a driver…
    [screenshot: Device Manager showing a missing driver]
  11. Getting the phone working will be covered in a subsequent post.

That's it for this weekend's battle against incomplete documentation…

Thursday, September 27, 2012

How to convert SVG data to a Png Image file Using InkScape

Introduction

The project required putting the same visual pie chart on a web page on the client and in a Pdf file created on the server. I eventually chose to use a Telerik Kendo DataViz Chart control on the client, and a PNG file on the server for the Pdf file. This blog post will explain the process and code to convert the client-side pie chart Svg data to a Png file, and provide a download of the working code.

Prerequisites

You need to have a trial version of the Telerik Kendo library installed in order to make the sample code work. You also need InkScape installed. InkScape is the application that does the actual conversion. You will need to change the Web.config appSetting of “ExeDirectoryForInkscape” to represent your own installation location.

Any SVG Data Will Work

The pie chart was created using the Telerik Kendo DataViz chart control. While I name a specific control, as long as you have access to the SVG data, the same conversion process on the server will work.

The Sample App

The sample application is an Asp.Net MVC 4.0 C# application with a pie chart via the Kendo JavaScript library. Above the pie chart is a convert button, and to the right of the JavaScript pie chart is the final converted Png image.

 

[screenshot: the demo page with the Kendo pie chart, convert button, and converted Png image]

Not Pixel-Perfect

You can see from the image above that the conversion is close but not perfect. The title of the chart, the date and time, is larger on the left than on the right. If you need pixel-perfect conversion, this method is not for you. If you can tolerate minor discrepancies, InkScape is a great conversion tool.

Grab the SVG Data

Since the Telerik Kendo UI pie chart is on the client, the JavaScript needs to grab the data and send it to the server for conversion.

 

function getPieChartPngImage() {
    "use strict";

    // get svg data from Telerik Kendo Pie Chart
    var piechart = $("#piechart").data("kendoChart");

    // prepare string by escaping it
    var svgPieString = escape(piechart.svg());

    // send svg data to server
    $.ajax({
        url: '/Home/SvgToPng/',
        type: 'POST',
        data: {
            // actual data
            svgpie: svgPieString
        },
        // webImageFileLocation is the UNC path to the converted image
        success: function (webImageFileLocation) {
            // load UNC path into image's src attribute
            $('#image').attr('src', webImageFileLocation);
        },
        error: function () {
            alert('error in call to /Home/SvgToPng/');
        }
    });
}

The Controller & Method

The SvgToPng controller method gets the UrlDecoded string and passes it to the SvgToPng.Save method along with the Server.MapPath of the location where the converted file should be saved. It returns a file path which is then converted into the UNC path. Then the UNC path is returned to the client for use in an <img> src attribute.

[HttpPost]
public JsonResult SvgToPng(string svgpie)
{
    SvgToPng svgToPng = new SvgToPng(null);

    // convert svg data to png file
    string gridPngFileLocation = svgToPng.Save(HttpUtility.UrlDecode(svgpie), "sample", "pie", Server.MapPath("~/Content/Images"));

    // convert file path back to unc
    string uncPath = gridPngFileLocation.Replace(Request.ServerVariables["APPL_PHYSICAL_PATH"], "/").Replace(@"\", "/");

    return Json(uncPath);
}

Using InkScape

The InkScape code is in a separate class library in the application. There are several things it needs to do:

 

  1. Set the file name
  2. Grab the Exe location from the Web.config for InkScape
  3. Save SVG data to a file
  4. Convert SVG file to a PNG file

For the first three steps, the code is easy enough to read for yourself; a rough sketch of its shape follows. I'll give the last step a closer look.
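For orientation, the first three steps have roughly the following shape. Treat this as a sketch with hypothetical names that mirror the controller call above; the authoritative version is in the downloadable project:

using System.Configuration;
using System.IO;

public class SvgToPng
{
    public string ExeLocation { get; private set; }

    public SvgToPng(string exeDirectory)
    {
        // Step 2: grab the InkScape location, falling back to the
        // "ExeDirectoryForInkscape" appSetting in Web.config
        string directory = exeDirectory ??
            ConfigurationManager.AppSettings["ExeDirectoryForInkscape"];
        this.ExeLocation = Path.Combine(directory, "inkscape.exe");
    }

    public string Save(string svgData, string prefix, string name, string saveDirectory)
    {
        // Step 1: set the file names
        string svgFile = Path.Combine(saveDirectory, prefix + "-" + name + ".svg");
        string pngFile = Path.ChangeExtension(svgFile, ".png");

        // Step 3: save the SVG data to a file
        File.WriteAllText(svgFile, svgData);

        // Step 4: convert the SVG file to a PNG file (covered next)
        this.Convert(svgFile, pngFile);

        return pngFile;
    }

    // Convert() builds the command line and calls CommandLineThread(),
    // both shown in the next section
    private void Convert(string fileAndPathToSvg, string newFileAndPathToPng)
    {
    }
}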

Conversion Process

The conversion process is in Convert(string fileAndPathToSvg, string newFileAndPathToPng). The command line arguments for InkScape are built up:

this.CmdLineArgs = "-f " + fileAndPathToSvg + " -e " + newFileAndPathToPng;
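Given the earlier trouble with spaces in paths, it is probably safer to quote both paths when building the arguments; a small variation on the line above:

// quote both paths so InkScape survives spaces in directory or file names
this.CmdLineArgs = "-f \"" + fileAndPathToSvg + "\" -e \"" + newFileAndPathToPng + "\"";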

 

Then the InkScape location and command line arguments are passed into the CommandLineThread(this.ExeLocation, this.CmdLineArgs) method where a new process is created to execute in the shell.

private void CommandLineThread(string inkscapeCmdLine, string cmdLineArgs)
{
    if ((string.IsNullOrEmpty(inkscapeCmdLine)) || (!inkscapeCmdLine.Contains(".exe")))
    {
        throw new Exception("command line empty or doesn't contain exe: " + inkscapeCmdLine);
    }

    Process inkscape = new Process();

    inkscape.StartInfo.FileName = inkscapeCmdLine;
    inkscape.StartInfo.Arguments = cmdLineArgs;
    inkscape.StartInfo.UseShellExecute = false;
    inkscape.Start();

    inkscape.WaitForExit();
}

At this point the Png file and the Svg file should both be on the file system. All the application has to return is the location of the Png file as a UNC path that the <img> tag can use.

 

Download The Code

The Visual Studio 2010 project is available on Github.

Summary

The conversion of a Svg data string to a Png file is easy with the InkScape application. Calling InkScape in a shell process does the actual conversion. Then the location of the converted file is the only information the client browser needs to display it.

Thursday, September 13, 2012

Restyling an Html <SELECT> Element from a Telerik Kendo Panel Bar

Introduction

Recently, I needed to restyle an HTML <SELECT> box so that the selected element had a different background color. In all other ways, the element could be exactly like an HTML <SELECT>. Below are both the original element, and the restyled element.

 

[screenshot: the original and restyled <SELECT> elements]

 

This type of style change allows the novice user to immediately identify the control’s implied usage while allowing for some coordination with the site and page’s overall style.

 

Technology Platform

The project is an Asp.net MVC site with jQuery and Knockout.js on the front-end requesting Json content from the backend. For the purposes of this blog post, I've stripped down the functionality so that just the required elements are visible.

The project was already using Telerik Kendo controls so morphing the Kendo Panel Bar into a <SELECT> element was suggested.

The main work is in the css file but the entire working sample project is available for download.

 

Telerik Kendo Trial Version

The Telerik Kendo JavaScript files in the demo are part of a trial installation. Please install the trial version of Kendo so the demo works for you. Make sure to update the Kendo files if your installed trial has more recent versions.

 

Kendo Styling

Kendo controls come with complete styling in several themes. In order to override the style of the panel bar, very few css styles need to change. Below is an image of the default Kendo Panel Bar.

 

[screenshot: the default Kendo Panel Bar]

 

The main issues with the default styles are the background color, text color, the item separator (line), and the hover/selected styles.

 

Restyling (SelectDDL.css)

In order to override the default style of the items, add an entry for “.k-item > .k-link” to the style sheet.

 

#panelbar .k-item > .k-link
{
    /* remove default styles Kendo provides */
    font-family: Segoe UI,Verdana,Helvetica,Sans-Serif;
    display: block;
    position: relative;
    font-size: 0.93em;
    border-bottom-style: none; /* next 3 lines remove line separator */
    border-bottom-width: 0px;
    border-bottom-color: transparent;
    padding-left: 2px;
    line-height: normal;
    text-decoration: none;
    zoom: 1;
    white-space: nowrap;
    background-image: none; /* get rid of image kendo uses to diminish highlight color of selected item */
    color: #000000;         /* overwrite default grey color */
    background-color: #ffffff;
}


The next area to restyle is the actions: hover and selected. When the item is hovered over in an html <SELECT> element, there is no change in background color or text color. In order to reproduce that for the Kendo control, change the “.k-state-hover:hover” style for the items.

#panelbar .k-state-hover:hover
{
   /* control the colors when item is hovered  */ 
   background-color: #ffffff;
   color: #000000;
}

Now that all the Kendo styling has been ripped out, the html <SELECT> styling needs to be added back for the list items.

#panelbar .k-item > .k-state-selected
{
    /* control the colors when item is selected  */ 
    background-image: none; /* stop Kendo diffuser */
    color: #ffffff;
    background-color: Red;
}

In order to get the element’s box outline, add a border to the containing element. In this example, the containing element’s HTML is:

<ul id="panelbar"> 
</ul>

And the css to restyle the border is:

#panelbar
{
    border: 1px solid #000000; /* simple black border */
}

Demo Page (Index)

On the demo page, you will see three boxes. The first is an html <SELECT> with only enough styling to make it conform to the height, width, and page position for the demo. The second box is the CSS styled <UL> element using the Kendo panel bar control. The third box is the Kendo panel bar control with the default styling.

 

[screenshot: the three demo boxes]

 

JavaScript (selectDDL.js)

The JavaScript is straightforward. The data is returned from the server then bound to the html elements (<SELECT> or <UL>) with knockout.js. After that, the Kendo panel bar control is attached to the html element. The click event for the middle box’s <LI> element grabs the <LI>’s text and places it into a <DIV> right below – just so you see something happen.

Since the Kendo Panel Bar doesn’t have to do much, except look pretty, the configuration of the control is minimal:

// Add control 
$("#panelbar").kendoPanelBar({
    expandMode: "single"
});

The right-most control shows all the items by default. That isn't the expected behavior of an html <SELECT> element. In order to shorten the displayed items, and have the middle box behave more like an html <SELECT>, the element is styled in jQuery after the Kendo control is attached to the <UL> element:

// size the control to not overflow box
$('#panelbar').css('overflow-x', 'hidden');

Download

The complete working Visual Studio 2010 project can be found on GitHub.

 

Summary

Restyling the Kendo Panel Bar to behave as an html <SELECT> element took very few changes to the css and JavaScript.

Wednesday, August 22, 2012

Migrating your Sql Azure Database Using Data-tier Application Technology

This blog post by Wayne Berry shows how to migrate your Windows Azure SQL Database to an on-premise SQL Server using Data-Tier Application Framework (DacFX) Technology. With the Windows Azure Portal, you can easily create a Data-Tier Application logical backup package (BACPAC) and store it in your Windows Azure Blob Storage; then, using SQL Server Management Studio 2012, you can import that package to your local database server.

Read: Migrating your Windows Azure SQL Database Using Data-Tier Application Framework (DacFX) Technology


Monday, July 16, 2012

Using XMLHttpRequest.getResponseHeader to get Json Response Headers in jQuery

Introduction

While developing a web page with a summary graph and a detail grid below, I needed to pass a small amount of data outside of the existing Json call data sets. I also wanted to do this without another asynchronous call back to the server. I wound up stuffing the data in a custom http response header while already on the server. This article explains how I retrieved the data on the client.

Existing Client Libraries

Both the graph and the grid were independent jQuery libraries with very specific Json response requirements. Stuffing the small amount of data in one of those response data sets could have been done with enough tinkering but I wasn’t sure how this would affect those libraries now or in future versions. While the data was related, it was also separate and I wanted the code design to reflect that.

Server Http Response Stuffing

This project uses Asp.Net MVC 4. The code to add the small amount of data should be the same across the .Net platform:

HttpContext.Response.AddHeader("RetrievalDate", this.RetrievalDate.ToShortDateString()); 

Since either Json call (the graph or the grid) will have this retrieval date, I added the additional header to one of those calls.

 

Make sure you set the no-cache response:

HttpContext.Response.AddHeader("Cache-Control", "no-cache"); 

Client jQuery to retrieve the Custom Json Response Header

On the client, I was using a $.getJson call to make the asynchronous call and get the data. The jqXHR parameter contains all the response headers.

jQuery.getJSON( url [, data] [, success(data, textStatus, jqXHR)] )

The jqXHR variable is the XMLHttpRequest object.

 

I wanted to be clear about the different parameters so I switched the code to use the older $.ajax style. This change moved the variable for the response headers from the initial getJSON line to the interior success invocation.

 

The response headers are not available until the asynchronous response successfully returns from the server so grabbing the custom response header happens in the success function:

    var request = $.ajax({
        url: uri,
        dataType: 'json',
        success: function (data, textStatus, jqXHR) {

            // function using returned Json data
            setGraphFromData(data, chartname, title);

            // get Custom Header from Json Response
            var retrievalDate = jqXHR.getResponseHeader('RetrievalDate');
            $("#retrievaldate").text(retrievalDate);

        }
    });

Once I have the retrievalDate, I add it to the text of my span with the same Id:

Built on:  <span id="retrievaldate"></span>

Summary

This article explains how to retrieve a header value from a Json Response using the XMLHttpRequest.getResponseHeader function.

Wednesday, June 13, 2012

Background tasks (continuous code) in the Cloud: Cloud Foundry beats AWS and Azure

Introduction

I have a collection of background tasks I need to run continuously, but with a different timer interval for each task. It is a critical part of my web services and provides the data gathering and transformations that make the web service valuable. How should I package the code and where should I deploy it? Only in the cloud.

 

As the cloud space is moving faster than I can write this, any of this could be outdated by the time you read it.

Consider Time, Money, and Ease of Deployment

In struggling with where to deploy the code, I considered the cloud cost, the time to learn and build the solution, as well as any cloud gotchas. I'm now on my sixth cloud provider, trying to determine which is best at background tasks. Why? Because background tasks are where the heavy lifting happens. I want to spend my time getting that heavy lifting correct and not fighting with the cloud environment.

Background Task Defined

Just to be clear, I consider a background task any code that runs continuously. Whether it has a UX or responds to http(s) is a detail at this point.  As long as I have the continuous part, I can work around the other caveats.

Timed Events (.Net Timer, Scheduled Task, or Cron) are critical

I've looked at the .Net Timer class inside code, and at Scheduled Tasks (Windows) and Cron (Linux). The timer itself is critical; how I get it is less important. However, the farther I move from my code toward IT-ish settings, the higher the chance I will forget to update the live code or to verify the timing device. So the timer does need to be contained in code, but I'll stay language and platform agnostic to get it.
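To make that concrete, here is a minimal sketch of keeping the timer in code, in C# with System.Threading.Timer and one hypothetical task per interval:

using System;
using System.Threading;

public class BackgroundTask
{
    private readonly Timer timer;

    public BackgroundTask(TimeSpan interval, Action work)
    {
        // the timer ships with the code, so there is no Scheduled Task
        // or Cron entry to forget when the app is redeployed
        this.timer = new Timer(_ => work(), null, TimeSpan.Zero, interval);
    }
}

public static class Program
{
    public static void Main()
    {
        // hypothetical tasks: one daily, one every five minutes
        var gather = new BackgroundTask(TimeSpan.FromDays(1), () => Console.WriteLine("gather"));
        var transform = new BackgroundTask(TimeSpan.FromMinutes(5), () => Console.WriteLine("transform"));

        Thread.Sleep(Timeout.Infinite); // keep the host alive
    }
}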

Basic Cloud Companies don’t care about Background Tasks

Basic cloud companies are still writing their analysis and deployment tool code. They only care enough about background tasks to point you to a framework that might, kind of, if you look the right way, consider background tasks. Good luck there.

The Big Guys know Background Tasks are important

Amazon and Azure both have some strategy for Background Tasks.

 

AWS is more IT-ish in that you have to grab an Amazon Machine Image (AMI), then dink with the system control for timers (Cron or Scheduled Tasks), then deal with a Daemon or Service (yours or someone else's), then deploy to the AMI. I just want to Git Push and skip the IT headache, so thanks but no.

 

Azure (gosh love 'em) knows we love to write code and has provided the background task concept as a worker role. Awesome! Love it! The only caveat is that the Worker Role is a very specific project framework. You have to adopt and support the framework in order to deploy your background task. That is so close to ideal.
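For reference, the Worker Role framework you are adopting boils down to a RoleEntryPoint subclass with Run() overridden; a minimal sketch, not the full generated template:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Azure calls Run() once; the role recycles if it ever returns
        while (true)
        {
            Thread.Sleep(TimeSpan.FromMinutes(5));
            // background work goes here
        }
    }
}

But is there anything better?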

Cloud Foundry is doing Background Tasks right

Cloud Foundry has a novel approach to background tasks. You write the code and they treat it as a task. Period. No framework, no IT settings. Just code. That is doing background tasks right. Granted I haven’t deployed yet but I’ll give Cloud Foundry the benefit of the doubt.

Did I leave someone out?

I probably left someone’s favorite company off this list. Cloud providers pop up so fast, it is hard to keep up. Sorry about that. If you know of a Cloud company that does background tasks just the way you like, leave a comment below so I can investigate.

Tuesday, June 5, 2012

Steps for Consuming XML data in .Net

Introduction

While consuming third-party RSS feeds, I found I had to relearn how to deal with XML data. This post is meant to prepare any developer who needs to consume XML they do not control. While I used RSS feeds, the process applies to any XML. I wanted to change the metadata and data of the XML file into a model I could control with .Net classes and conventional data storage.

 

This post is organized to take you from an xml file to .Net classes able to consume, serialize, and test the xml.

Generating an XSD file from an XML file using Xsd.exe

The first step is to make sure you have the XML Schema Definition tool, xsd.exe, installed. It is part of the .Net Framework tools. Make sure the executable location is part of the system path, user path, or command prompt path. On my computer the path is “C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin” and I added it to the system path so it is available at any command prompt, regardless of the user.

 

Put the xml file(s) into their own directory. At a command prompt, generate the schema file(s) from the xml file with "xsd file.xml", where file.xml is the name of your xml file. The new file just created is the schema definition for the xml. Open it up and make sure the schema makes sense.

 

One gotcha you can spot in the schema is un-encoded content. An RSS feed content section may include html markup. Make sure the HTML is encoded. For example, make sure a <br> appears as &lt;br&gt;. If the br is un-encoded, appearing as <br>, xsd.exe will create a new section of the definition to deal with it, which you do not want separated out either in the schema or in the resulting .Net classes.

 

If there is more than one xsd file, you will need to know which is the primary for the next part of the process. The primary xsd file is the one that has the data definitions. 

Generating the .Net classes from the XSD schema file(s)

Now that the schema files are just as you want them, you generate the .Net classes containing the associated models with “xsd file.xsd  file2.xsd /classes” at the command prompt. The example assumes you have several schema definition files. Each definition file must be listed to create the classes correctly. You may have several schema definition files if your xml references more than one namespace.

Minor clean-up of the auto-generated .Net classes

The single .cs file will contain all the classes required to deserialize the xml into models. If you need to change class names, change only the parent class's name, and add the original, schema-determined name in the XmlRootAttribute.

 

For example, the generated .cs file may produce a parent/top class name that doesn’t correspond with your current naming practices. For an rss file, it would be “rss.” The following is the top of the auto-generated file.

 

[screenshot: top of the auto-generated .cs file]

 

If you want to change the class name from “rss” and still parse the rss, you need to change the class name to your new name (“RssXmlModel” below) and modify the XmlRootAttribute to include the “rss” name.

 

 

[screenshot: the renamed class with the modified XmlRootAttribute]
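In case the screenshots do not come through, here is a reconstruction of the before and after (abbreviated; the generated file carries more attributes than shown):

using System.Xml.Serialization;

// Before: xsd.exe names the top class after the root element
// [XmlRoot(Namespace = "", IsNullable = false)]
// public partial class rss { }

// After: the class is renamed, and the original element name moves
// into the XmlRootAttribute so the rss can still be parsed
[XmlRoot("rss", Namespace = "", IsNullable = false)]
public partial class RssXmlModel
{
    // members generated by xsd.exe go here
}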

Add a namespace to the classes.

 

You may be inclined to clean up the auto-generated classes, changing all sorts of names, definitions, etc. If you do not control the xml, but just consume it, you may have to do this cleanup again when the producer/owner of the content changes their xml. You should either change only the top xml node's definition (class name, XmlRootAttribute), or create an entirely new model with a process to convert between the auto-generated model and your final model, as sketched below.
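If you take the second route, the conversion can be a thin mapping layer. A sketch with hypothetical member names (the real ones depend on your feed's schema):

using System.Collections.Generic;

public class FeedItem
{
    public string Title { get; set; }
    public string Link { get; set; }
}

public static class RssMapper
{
    // Map the auto-generated model to the application's own model,
    // so schema churn only touches this one method
    public static List<FeedItem> ToFeedItems(RssXmlModel rss)
    {
        var items = new List<FeedItem>();

        foreach (var item in rss.channel.item) // hypothetical member names
        {
            items.Add(new FeedItem { Title = item.title, Link = item.link });
        }

        return items;
    }
}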

 

Notice the auto-generated file doesn’t include the tool’s name, xsd.exe. You may want to add that for the next developer that has to deal with this file in your project.

A Generic method to Request XML using HttpWebResponse

The following method requests the xml, puts the response content into a string, and deserializes it into the model. The HttpWebRequest is created and configured from the uri passed into GetXmlRequest(). Feel free to DRY out the code to suit your purposes.

 

   1:  public static T GetXmlRequest<T>(Uri uri)
   2:  {
   3:      if (uri == null)
   4:      {
   5:          throw new NullReferenceException("uri");
   6:      }
   7:   
   8:      TimeSpan timeSpan = new TimeSpan(0, 2, 0);
   9:   
  10:      HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(uri);
  11:      request.Timeout = (int)timeSpan.TotalMilliseconds;
  12:      request.ReadWriteTimeout = (int)timeSpan.TotalMilliseconds * 100;
  13:      request.Method = "GET";
  14:      request.ContentType = "text/xml";
  15:   
  16:      try
  17:      {
  18:          using (HttpWebResponse httpWebResponse = (HttpWebResponse)request.GetResponse())
  19:          {
  20:              using (StreamReader streamReader = new StreamReader(httpWebResponse.GetResponseStream()))
  21:              {
  22:                  // leave this in, to look at string in debugger
  23:                  string xml = streamReader.ReadToEnd();
  24:   
  25:                  if (string.IsNullOrEmpty(xml))
  26:                  {
  27:                      return default(T);
  28:                  }
  29:   
  30:                  T temp = Serializer.XmlDeserialize<T>(xml, Encoding.GetEncoding(httpWebResponse.CharacterSet));
  31:   
  32:                  // DFB: Object couldn't be deserialized
  33:                  if (EqualityComparer<T>.Default.Equals(temp, default(T)))
  34:                  {
  35:                      Debug.WriteLine("default T");
  36:                  }
  37:   
  38:                  return temp;
  39:              }
  40:          }
  41:      }
  42:      catch (WebException webException)
  43:      {
  44:          if (webException.Response != null)
  45:          {
  46:              using (Stream responseStream = webException.Response.GetResponseStream())
  47:              {
  48:                  if (responseStream != null)
  49:                  {
  50:                      using (StreamReader reader = new StreamReader(responseStream))
  51:                      {
  52:                          Trace.TraceError(reader.ReadToEnd());
  53:                      }
  54:                  }
  55:              }
  56:          }
  57:   
  58:          throw;
  59:      }
  60:  }

A Generic method to Deserialize into the auto-generated Classes

Once you have the xml (line 23 above), you can deserialize into the auto-generated classes (line 30).

 

public static T XmlDeserialize<T>(string xml, Encoding encoding)
{
    try
    {
        T obj = Activator.CreateInstance<T>();

        XmlSerializer serializer = new XmlSerializer(obj.GetType());

        using (MemoryStream memoryStream = new MemoryStream(encoding.GetBytes(xml)))
        {
            T temp = (T)serializer.Deserialize(memoryStream);
            return temp;
        }
    }
    catch (InvalidOperationException)
    {
        // deserialization failed; callers (see GetXmlRequest) check for default(T)
        return default(T);
    }
}

A Unit Test library to View the Rss xml in the Generated Models

The project containing this code is available on GitHub. Download, build and run the test named “StringToObject” found in the UnitTest1.cs file of the XmlTestProject. Set a breakpoint on Assert.IsNotNull(newRssObject) and add the newRssObject to the Watch window.
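The body of that test is roughly the following sketch, using the methods shown earlier (the file path is hypothetical; the exact code is in the repository):

[TestMethod]
public void StringToObject()
{
    // read a saved RSS feed from disk
    string xml = File.ReadAllText(@"SampleData\feed.xml"); // hypothetical path

    // deserialize into the auto-generated model
    RssXmlModel newRssObject =
        Serializer.XmlDeserialize<RssXmlModel>(xml, Encoding.UTF8);

    // break here and add newRssObject to the Watch window
    Assert.IsNotNull(newRssObject);
}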

 

You can see the data in the classes via the Watch window below.

 

[screenshot: the Watch window showing the deserialized data]

 

The test reads an xml file using auto-generated classes.

 

Summary

This example shows how to take a raw xml string and convert it into C# .Net classes you can use to deserialize the xml. Now that the data is in a model, you can put the data in any traditional data store. My most common next steps are adding Linq to run some interesting queries and serializing back to a file on disk.

Saturday, March 31, 2012

Post-Ruby High with BDD in .Net

I mentioned in my last post how much I enjoyed Ruby and wanted to continue to use it in my project. That is easier said than done.

 

In order to leverage all the great Ruby (RoR) tools and methodologies I learned, I'm attempting to find .Net equivalents. This started with finding a Behavior Driven Development (BDD) tool. Several come up, but they are hybrids: a BDD tool (I picked SpecFlow) sitting on top of a TDD tool (NUnit or MSTest). I also needed a web driver to make calls to a browser and verify results. I looked at WatiN but went with Selenium.
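As a taste of the stack, a SpecFlow step definition driving Selenium looks roughly like this (the step text, page, and URL are hypothetical):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class HomePageSteps
{
    private IWebDriver driver;

    [Given(@"I am on the home page")]
    public void GivenIAmOnTheHomePage()
    {
        // Selenium drives a real browser
        driver = new FirefoxDriver();
        driver.Navigate().GoToUrl("http://localhost:12345/"); // hypothetical URL
    }

    [Then(@"I should see ""(.*)""")]
    public void ThenIShouldSee(string text)
    {
        Assert.IsTrue(driver.PageSource.Contains(text));
        driver.Quit();
    }
}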

 

The next two tools I need to add are a mocking tool (Moq) and an injection tool (probably NInject).

 

Since the class used GitHub and Heroku, I'll also stick with that. My GitHub account is DFBerry. All my example code is posted up there in public repositories.

Saturday, March 24, 2012

Results of .Net Developer’s 5 weeks with Ruby (RoR) and SaaS

Introduction

Several things happened in a short period of time to influence my decision to take a Ruby/SaaS class. I'm a lifer on .Net, or more specifically Asp.Net and its precursor, Asp. First, the MVC .Net book I was reading at the time said programmers either use .Net or Ruby for MVC development but not both. Second, an online UC Berkeley class was free and gave me a new language (Ruby) while backfilling any software-as-a-service (SaaS) holes in my knowledge. Third, the class used the Agile methodology, of which I had bits and pieces. I wanted to see someone else's interpretation put into practice.

 

Ruby on Rails (RoR)/MVC

I knew MVC, web protocols, and web development, so the learning curve was all Ruby. The class moved at a quick pace; I usually knew how to do the task in .Net but not in Ruby. Ruby, fortunately, is a very easy language to pick up. It feels very much like a script kiddie toy, but more powerful.

 

Interpreted versus Compiled

Ruby is interpreted while .Net is compiled. Ruby is a conglomeration of files in specific locations while .Net is a single executable with the possibility of additional library classes in dlls. I like the separation of concerns between the libraries and the final executable code. But I also like the Ruby instant gratification of interpreted code. I would like to be able to intermix the interpreted and the compiled. Ruby for the front-end web development with Behavior Driven Development – basically string parsing and regular expression matching, then .Net compiled dll libraries with traditional unit tests for the heavy lifting. 

 

Yes, I know .Net has BDD but Ruby’s options are more mature with a larger audience. This gives me many more choices to implement as well as wider community support via StackOverflow.  Definitely points to the Ruby community here.

 

Yes, I know Ruby has TDD, but since it is all string parsing anyway, it feels more like a great place to introduce more bugs instead of finding the ones already there. I think .Net exposes/fixes that better than Ruby does. Also, with a single file (version number, date compiled, etc.) rather than many files and configurations, it is easier to know I have the correct object to test in the first place. And if the entry points are class method calls, a string-based language is the wrong hammer for that nail.

 

Language Constructs

C# is an outgrowth of C++, which came from C. Ruby is inspired by Perl, Smalltalk, and Lisp, but also has OOP features. I've compared the languages intensely in the last five weeks. Ruby does some great OOP things. Method naming, inheritance, poetry mode and method missing are, so far, my favorites. Method missing is the only one I absolutely want to see in C# but probably won't get because it is compiled.

 

Method missing is a class-level catch-all for any method called on a class that isn't actually defined for that class. Assume the Foo class only has the method bar, as in "Foo.bar." With a "method_missing" defined in Foo, I could easily call Foo.Bake, Foo.Grill, and Foo.Reheat; the "method_missing" method would be called with access to the method name ("Bake", "Grill", or "Reheat"), so the method can either use that information or pass it on.
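The nearest C# equivalent I know of is the DynamicObject hook from C# 4, which only works through dynamic references, so it is not quite the same thing. A sketch:

using System;
using System.Dynamic;

public class Foo : DynamicObject
{
    // invoked for any method not defined on Foo,
    // analogous to Ruby's method_missing
    public override bool TryInvokeMember(
        InvokeMemberBinder binder, object[] args, out object result)
    {
        result = "You called " + binder.Name;
        return true;
    }
}

public static class Program
{
    public static void Main()
    {
        dynamic foo = new Foo();
        Console.WriteLine(foo.Bake());    // You called Bake
        Console.WriteLine(foo.Reheat());  // You called Reheat
    }
}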

 

While I understand mix-ins (duck typing) are awesome, apples to oranges, C#/.Net has this covered well enough for me with class extension methods.

 

As for the readability of Ruby, take the following code:

    a.should be >= 7

C# can't reproduce this entirely but, with some naming conventions, the C# code could be just as readable.

 

Environments

Linux versus Windows is quickly becoming a non-issue with platform expansion and cloud hosting. At this point, for me, the choice comes down to upfront cost alone. Performance, documentation, support, and the other usual metrics are either head-to-head equal or an apples to oranges comparison resulting in no clear winner.

 

Sometimes it was nice to have text configuration files in Ruby; sometimes it is nicer to have an IDE for configuration.

 

Free versus Not-free

So what about the money? Microsoft products aren't free, and that is a big hurdle for bootstrapping a startup company. And once you are down the road a ways with your free development tools, why use .Net at all since you have to pay for it? That's a question Microsoft gets to answer. With the new influx of developers coming from countries where the cost of Visual Studio is seriously out of reach, Microsoft needs to provide that free product. I don't know if the Express versions of Visual Studio meet that need.

 

A Free Microsoft Interpreted Language

I would love to see Asp (or some interpreted equivalent) come back and take Ruby on as a free, interpreted language.

 

Final Results

Until then, I will try to marry Ruby and C#/.Net for myself with Ruby handling the html front-end and C# handling middle-ware and backend. This may be some hack of IronRuby and C# or it may be Linux/Ruby calling into a .Net REST service. Either way, I think both languages bring something to address the entire web space in a better way than either does alone.

Thursday, February 16, 2012

Combining Multiple Azure Worker Roles into an Azure Web Role

Introduction
While working on apps.berryintl.com’s web service, it appeared from the Windows Azure project templates that I might need several worker roles because they cycle at different times. One worker needed to cycle every day. The other needed to cycle every five minutes. Since the work and the cycle rate were very different, it felt “right” to isolate each role.

In this post, I’ll show you how I combined multiple worker roles into a single, underutilized Azure web role thereby keeping the concept and design of worker roles while not having to pay extra for them. In order to verify the solution, the project uses Azure Diagnostics Trace statements.

The solution file is available for download.

Combining Multiple Azure Worker Roles into an Azure Web Role
Currently, I use a small Azure instance and there isn't enough traffic to justify a separate worker role. I implemented Wayne's process for combining one worker role with a web role a while back. Now I needed to add another worker role to the mix. Wayne wrote another post about multiple worker roles; however, the two articles use slightly different methods: one uses the Run() method and the other uses the OnStart() method of the WebRole.cs file. I needed to come up with a solution that included both concepts.

Note: Wayne Berry is the founding member of the 31a Project. I won’t cover material from Wayne’s two posts. If you haven’t read them, you should. There is a lot of great insight into Azure that I do not duplicate in this post.

Run()
Combining worker and web roles meant adding a Run() method to the Web Project WebRole.cs file. In the standard WebRole.cs generated from a web role project template in Visual Studio, the Run() method is not present.
[screenshot]
Original WebRole.cs
public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // For information on handling configuration changes
            // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

            return base.OnStart();
        }
    }


In this modified WebRole.cs, the Run() method contains the code to be treated as a worker role. A WebRole class created from a Visual Studio Worker Role template project has the Run() method overridden. By adding the Run() method to my web role project, I get both a web role and a worker role in the same instance. I wanted to use Run() both 1) to stay consistent with how worker roles are treated in the role lifecycle and 2) to separate what is logically a background process from web site functionality.

In the new WebRole class below, notice that the base class is no longer RoleEntryPoint but is ThreadedRoleEntryPoint. ThreadedRoleEntryPoint sits between WebRole and RoleEntryPoint and gives the project the threaded workers but calls the RoleEntryPoint so all the roles can behave as expected.

[screenshot]
New WebRole.cs

/// <summary>
    /// Manages creation, usage and deletion of workers in threads
    /// </summary>
    public class WebRole : ThreadedRoleEntryPoint
    {
        /// <summary>
        /// Treated as WebRole Start
        ///     If the OnStart method returns false, the instance is immediately stopped.
        ///     If the method returns true, then Windows Azure starts the role by calling
        ///     the Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.Run() method.        
        /// </summary>
        /// <returns>bool success</returns>
        public override bool OnStart()
        {
            // For information on handling configuration changes
            // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
            Trace.TraceInformation("WebRole::OnStart ", "Information");

            // Setup Azure Dignostics so tracing is captured in Azure Storage
            this.DiagnosticSetup();

            return base.OnStart();
        }

        /// <summary>
        /// Treated as WorkerRole Run
        /// </summary>
        public override void Run()
        {
            Trace.TraceInformation("WebRole::Run begin", "Information");
            
            List<WorkerEntryPoint> workers = new List<WorkerEntryPoint>();

            // ONLY CHANGE SHOULD BE TO ADD OR REMOVE FROM THE NEXT TWO LINES
            // MORE OR LESS ADDITIONS
            // WITH SAME OR DIFFERENT WORKER CLASSES
            workers.Add(new Worker1());
            workers.Add(new Worker2());

            base.Run(workers.ToArray());

            Trace.TraceInformation("WebRole::Run end", "Information");
        }
    }


By inserting a class between the original child, WebRole, and the original parent, RoleEntryPoint, we have a place to manage the threaded worker roles.

ThreadedRoleEntryPoint()
The ThreadedRoleEntryPoint contains thread creation, adds one worker to its own thread and runs the workers.

ThreadedRoleEntryPoint.cs
/// <summary>
    /// Middle class that sits between WebRole and RoleEntryPoint
    /// </summary>
    public abstract class ThreadedRoleEntryPoint : RoleEntryPoint
    {
        /// <summary>
        /// Threads for workers
        /// </summary>
        private List<Thread> threads = new List<Thread>();

        /// <summary>
        /// Worker array passed in from WebRole
        /// </summary>
        private WorkerEntryPoint[] workers;

        /// <summary>
        /// Initializes a new instance of the ThreadedRoleEntryPoint class
        /// </summary>
        public ThreadedRoleEntryPoint()
        {
            EventWaitHandle = new EventWaitHandle(false, EventResetMode.ManualReset);
        }

        /// <summary>
        /// Gets or sets WaitHandle to deal with stops and exceptions
        /// </summary>
        protected EventWaitHandle EventWaitHandle { get; set; }

        /// <summary>
        /// Called from WebRole, bringing in workers to add to threads
        /// </summary>
        /// <param name="workers">WorkerEntryPoint[] arrayWorkers</param>
        public void Run(WorkerEntryPoint[] arrayWorkers)
        {
            this.workers = arrayWorkers;

            foreach (WorkerEntryPoint worker in this.workers)
            {
                worker.OnStart();
            }

            foreach (WorkerEntryPoint worker in this.workers)
            {
                this.threads.Add(new Thread(worker.ProtectedRun));
            }

            foreach (Thread thread in this.threads)
            {
                thread.Start();
            }

            while (!EventWaitHandle.WaitOne(0))
            {
                // Restart Dead Threads
                for (int i = 0; i < this.threads.Count; i++)
                {
                    if (!this.threads[i].IsAlive)
                    {
                        // use ProtectedRun here as well so exceptions in a
                        // restarted worker stay contained
                        this.threads[i] = new Thread(this.workers[i].ProtectedRun);
                        this.threads[i].Start();
                    }
                }

                EventWaitHandle.WaitOne(1000);
            }
        }

        /// <summary>
        /// OnStart override
        /// </summary>
        /// <returns>bool success</returns>
        public override bool OnStart()
        {
            return base.OnStart();
        }

        /// <summary>
        /// OnStop override
        /// </summary>
        public override void OnStop()
        {
            EventWaitHandle.Set();

            foreach (Thread thread in this.threads)
            {
                while (thread.IsAlive)
                {
                    thread.Abort();
                }
            }

            // Check To Make Sure The Threads Are
            // Not Running Before Continuing
            foreach (Thread thread in this.threads)
            {
                while (thread.IsAlive)
                {
                    Thread.Sleep(10);
                }
            }

            // Tell The Workers To Stop Looping
            foreach (WorkerEntryPoint worker in this.workers)
            {
                worker.OnStop();
            }

            base.OnStop();
        }
    }


Now that the WebRole has a Run() passing an array of different worker roles and the intermediate class deals with the threads, we need to develop each worker role we want to run.

The Worker Role
The WorkerEntryPoint class is the base class for each worker role that needs to be created. The ProtectedRun() method allows any system exceptions to get back up to the WebRole class so that the worker role can be restarted.

Note: The important distinction is that all worker roles in this sample will stop and restart when the system exception bubbles back up to the WebRole.cs class. In the normal Azure Worker Role practice, where each worker role is in its own assembly, only the troubled worker would be stopped and started.

WorkerEntryPoint.cs

/// <summary>
    /// Model for Workers
    /// </summary>
    public class WorkerEntryPoint
    {
        /// <summary>
        /// Cycle rate of 30 seconds
        /// </summary>
        public readonly int Seconds30 = 30000;

        /// <summary>
        /// Cycle rate of 45 seconds
        /// </summary>
        public readonly int Seconds45 = 45000;

        /// <summary>
        /// OnStart method for workers
        /// </summary>
        /// <returns>bool for success</returns>
        public virtual bool OnStart()
        {
            return true;
        }

        /// <summary>
        /// Run method
        /// </summary>
        public virtual void Run()
        {
        }

        /// <summary>
        /// OnStop method
        /// </summary>
        public virtual void OnStop()
        {
        }

        /// <summary>
        /// This method prevents unhandled exceptions from being thrown
        /// from the worker thread.
        /// </summary>
        internal void ProtectedRun()
        {
            try
            {
                // Call the Workers Run() method
                this.Run();
            }
            catch (SystemException)
            {
                // Exit Quickly on a System Exception
                throw;
            }
            catch (Exception)
            {
            }
        }
    }

Each Worker
Each worker needs to inherit from WorkerEntryPoint with the code unique to that worker. The two workers in the sample app (Worker1 & Worker2) have a couple of details you could change. The first is the milliseconds passed to Thread.Sleep; this controls the cycle rate of the worker thread. The second change should be adding the work of the thread. In the sample, the workers each print out trace information to the output window as their work. Worker1 is shown below; a sketch of a matching Worker2 follows it.

Worker1.cs
/// <summary>
    /// Worker1 contains the entire cycle of the worker thread
    /// </summary>
    public class Worker1 : WorkerEntryPoint
    {
        /// <summary>
        /// Run is the function of an working cycle
        /// </summary>
        public override void Run()
        {
            Trace.TraceInformation("Worker1:Run begin", "Information");

            try
            {
                while (true)
                {
                    // CHANGE SLEEP TIME
                    Thread.Sleep(this.Seconds30);

                    //// ADD CODE HERE

                    string traceInformation = DateTime.UtcNow.ToString() + " Worker1:Run loop thread=" + System.Threading.Thread.CurrentThread.ManagedThreadId.ToString();
                    Trace.TraceInformation(traceInformation, "Information");
                }
            }
            catch (SystemException se)
            {
                Trace.TraceError("Worker1:Run SystemException", se.ToString());
                throw; // rethrow without resetting the stack trace
            }
            catch (Exception ex)
            {
                Trace.TraceError("RunWorker1:Run Exception", ex.ToString());
            }

            Trace.TraceInformation("Worker1:Run end", "Information");
        }
    }
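
Worker2 in the sample is the same shape; presumably only the cycle rate and the work change. A sketch mirroring Worker1:

/// <summary>
/// Worker2 mirrors Worker1 with a different cycle rate
/// </summary>
public class Worker2 : WorkerEntryPoint
{
    /// <summary>
    /// Run is the function of a working cycle
    /// </summary>
    public override void Run()
    {
        Trace.TraceInformation("Worker2:Run begin", "Information");

        try
        {
            while (true)
            {
                // CHANGE SLEEP TIME
                Thread.Sleep(this.Seconds45);

                //// ADD CODE HERE

                string traceInformation = DateTime.UtcNow.ToString() + " Worker2:Run loop thread=" + System.Threading.Thread.CurrentThread.ManagedThreadId.ToString();
                Trace.TraceInformation(traceInformation, "Information");
            }
        }
        catch (SystemException se)
        {
            Trace.TraceError("Worker2:Run SystemException", se.ToString());
            throw;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Worker2:Run Exception", ex.ToString());
        }

        Trace.TraceInformation("Worker2:Run end", "Information");
    }
}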


Using a Separate Library for the logical Worker Role
When you download the sample application and look at the solution, you will notice that the four files that make up the logical Worker Role are in a separate assembly. This is explained in the comment at the bottom of the Running Multiple Threads post:
Azure seeks the worker role assembly for the first class that derives from RoleEntryPoint, and tries to load the abstract class in this library.

[screenshot: the solution structure]
Do not move the files back into the assembly that has the webrole.cs file. It won’t work.


Viewing the Work
In order to make the sample as simple as possible and yet give you something you can see, the code prints trace statements. This will let us see evidence of the work.

On the local development box, you can see the trace statements in the Visual Studio Output Window or the Azure Compute Emulator. In order to see those statements in the Azure Cloud, I’ll use Azure Diagnostics to insert each statement into an Azure Storage Table. Once the information is in Azure Storage you can use the Visual Studio Server Explorer to view the storage or a third party tool. My current favorite is Azure Storage Explorer.

TODO: In the WindowsAzureProject1 project’s ServiceConfiguration.Cloud.cscfg file, change the value of "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" to your own Azure Storage account if you plan to deploy this sample to the Azure cloud.

Using Azure Diagnostics to Verify Deployment
The Azure Diagnostics takes the Trace statements and sticks them in an Azure Storage table named WADLogsTable on a timed cycle. In the image below, you can see four lines of tracing where each worker role is spitting out date, time, and thread number.

[screenshot: trace rows in the WADLogsTable]

If you are running the sample, make sure you give Azure Diagnostics enough time to move the traces to Azure Storage. The sample's rate is 1 minute for all trace statements; however, you can configure this. The DiagnosticSetup() method below shows those two specific settings.

DiagnosticSetup() called from WebRole.cs OnStart()


/// <summary>
        /// This sets up Azure Diagnostics to Azure Storage. 
        /// </summary>
        private void DiagnosticSetup()
        {
            // Transfer to Azure Storage at this rate
            TimeSpan transferTime = TimeSpan.FromMinutes(1);

            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Transfer logs to storage every cycle
            dmc.Logs.ScheduledTransferPeriod = transferTime;

            // Transfer verbose, critical, etc. logs
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            System.Diagnostics.Trace.Listeners.Add(new Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener());

            System.Diagnostics.Trace.AutoFlush = true;

#if DEBUG
            var account = CloudStorageAccount.DevelopmentStorageAccount;
#else 
            // USE CLOUD STORAGE    
            var account = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));
#endif

            // Start up the diagnostic manager with the given configuration
            DiagnosticMonitor.Start(account, dmc);
        }

Verify Resources
The idea of combining worker roles into a web role is that the worker roles don't tax the web role to the point that a customer or web request can't be answered in an expected time period. For the worker role that I run (in my real app, not this sample) on a 24 hour cycle, I'm not too concerned about over-working my web role. But the worker role that I run every five minutes is something I need to take a closer look at. In order to verify the combination of web and worker roles is functioning within the limits I set for the response, I need to make sure there are no failed requests. This information is captured in the Failed Requests Log. The next step is to take a look at the performance counters.
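When I get there, the counters can ride along in the diagnostic configuration already built in DiagnosticSetup() above, using its dmc and transferTime variables; a sketch (the counter and sample rate are placeholders):

// sample total CPU every 30 seconds and ship it to Azure Storage
// on the same scheduled transfer period as the logs
dmc.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
{
    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
    SampleRate = TimeSpan.FromSeconds(30)
});
dmc.PerformanceCounters.ScheduledTransferPeriod = transferTime;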

Summary
This blog post explains how to combine multiple threaded worker roles into a single web role. It is important to understand this should only be done for deployments that under-utilize the resources of the web role. After you download and alter the sample, verify your web role and worker roles are running as expected and that the resources are not overtaxed with the new threaded worker roles.

The solution file is available for download.