Saturday, February 26, 2011

A tale of two conferences…

This month I attended two conferences in Vancouver, B.C. One I left looking forward to the next one; the other I am unsure if I will attend again. Here is a table of the characteristics of the two conferences:

Attribute | Product Camp | SQL Saturday
Attendance | Limited to 140 (with waiting list) | 280+ (no limit)
Breaks | Rooms with round tables and sofas | In hallways, no seating
Lunch seating | Sitting around round tables | Sitting on one side of tables, facing the stage
Lunch food | Buffet ($10) | Individual plates brought by staff ($15)
Lunch-time activity | Talking with each other, making new friends | Listening to sponsor presentations (captive audience)
Speakers | Local folks | Imported from everywhere
Usefulness of talks | High (engaged on issues) | Low (basics presented in general)
Sessions | 4 | 5
Number of new friends made | High | Low
Location | University (SFU) conference center | Commercial hotel conference center
I will leave you to guess which one is which in my opinion. True, one was labeled an “unconference” and the other a “training event”.

Monday, February 21, 2011

Possible future of Taxes and Teleworking

Last night and this morning there was an ongoing discussion about taxes and teleworking in one of my groups – especially for small firms (often LLCs). All of the folks are based in WA state and thus subject to B&O and sales tax. The discussion also caused me to look at the future of taxation – because governments are slow to adapt to technological change.


First item is simple: in WA state, an individual or business that buys stuff on the internet is supposed to self-report and pay tax to Olympia. It is unlikely that individuals will be audited on this (the cost-benefit is negative unless they can get massive cooperation from credit card companies); businesses are a different issue – if you are registered in Olympia and file B&O, then they know the size of the revenue target that you present. My own solution is to buy items from Amazon if they are for business use, because Amazon charges me tax (thus saving the cost of tracking and self-reporting).
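The self-reporting arithmetic above can be sketched in a few lines; everything here (the 9.5% rate, the field names) is illustrative only, not WA's actual rules:

```javascript
// Hypothetical sketch of the "use tax" self-reporting obligation described
// above: out-of-state purchases where no sales tax was collected must be
// reported and taxed at the local rate. The 9.5% rate is illustrative.
const USE_TAX_RATE = 0.095;

function useTaxOwed(purchases) {
  // Only purchases where the seller collected no sales tax are owed.
  return purchases
    .filter(p => !p.taxCollected)
    .reduce((owed, p) => owed + p.amount * USE_TAX_RATE, 0);
}

// Example: Amazon collects tax (nothing to self-report); an out-of-state
// seller does not, so that purchase generates a use-tax liability.
const purchases = [
  { seller: "Amazon", amount: 100, taxCollected: true },
  { seller: "OutOfStateCo", amount: 200, taxCollected: false },
];
console.log(useTaxOwed(purchases).toFixed(2)); // "19.00"
```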


Second item is B&O. There was an opinion on the ability to apportion the tax for services done out of state. IMHO, there’s a gotcha: if it is programming services producing a product for sale, it borders on manufacturing, which does not allow apportioning. The second issue is that you will likely have to declare jurisdiction – this opens up the risk of the taxation authority in the other jurisdiction being informed (perhaps not this year, but perhaps next year). You may now be under double-taxation jeopardy – and states do not have treaties dealing with this area.


One of my favorite sites deals with the complexities of cross-border taxation, and I have several times corresponded with the tax specialists there. While country-by-country rules cannot be summarized, the reality of how laws are written is simple: if you attend at the customer site for any reason, then you are likely required to do a tax filing in their jurisdiction. The tax filing may also trigger a demand to obtain a business license (and fines for not obtaining one first). You may also require an appropriate visa – a tourist visa may be insufficient and could result in being barred indefinitely (some Canadians going to customer meetings in the US have been hit by this). The positive thing is that at present the cost-benefit for governments doing this is negative – but with technology changing and lean tax coffers, this may change quickly.


I expect tax laws will change in the near future to define 'residency' as being 'electronically connected' to a machine in the country or state for a portion of the day. You are electronically doing work in that location and thus governments will want to tax it (especially as they are not collecting taxes from someone who could do the same work while in the country). All of the information is easily available on demand from corporate IT departments, and it represents an easy way to collect tax – especially from work being done offshore (hence it’s taxation without representation – always popular with your voters!).
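The speculative mechanism above could be as simple as tallying connection logs; the function, the 2-hour threshold, and the jurisdiction names below are all invented for illustration:

```javascript
// Speculative sketch of the "electronically connected" residency test:
// tally connection hours per jurisdiction from IT logs and flag any
// jurisdiction where the worker was connected for more than some portion
// of the day. The threshold and names are invented, not any real statute.
function taxableJurisdictions(sessions, minHoursPerDay) {
  const totals = {};
  for (const s of sessions) {
    totals[s.jurisdiction] = (totals[s.jurisdiction] || 0) + s.hours;
  }
  return Object.keys(totals).filter(j => totals[j] >= minHoursPerDay);
}

// One day of (hypothetical) VPN sessions for a teleworker.
const log = [
  { jurisdiction: "WA", hours: 6 },
  { jurisdiction: "BC", hours: 1.5 },
  { jurisdiction: "WA", hours: 2 },
];
console.log(taxableJurisdictions(log, 2)); // [ 'WA' ]
```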


I would actually deem such laws to be rational and appropriate for the times. An employee must provide proof of a right to work if the person is in the US; it is not required if he is offshore. You could have your entire IT department offshore – even if they were paid the same take-home – and you have saved a mountain of taxes owed to the local governments. The problem is that someone needs to pay taxes to support programs, etc. Requiring offshore folks to have a Tax Identification Number (TIN) would facilitate taxation of them. The offshore folks may be a company, which means that the company is obligated (no change in the contract price) – most US folks would deem it appropriate to tax fees going offshore for programming, etc.; after all, those fees can be deducted from business revenue just like wages can.


The tax code has not kept up with technological change and globalization. Many activities are untaxed simply because they represented too little revenue in the past to bother with. In the future, I see radical change happening – often forced by the financial desperation of governments – especially mechanisms for taxing non-resident non-voters… i.e., teleworkers and offshore resources.


Now, if the WA Dept. of Revenue decides to hire me as a consultant – be scared, very, very scared!

Friday, February 18, 2011

Book Review: Pro Web Gadgets Across iPhone, Android, Windows, Mac, iGoogle and More – Part 4

Finally I am at the last part, and the one that most interested me – Mobile Platforms. The idea of learning multiple smartphone languages does not appeal to me and, if you are involved with a startup, could add a major outsourcing expense. I look at a well-established site and see that it has an iPhone application and a pending Android application announced. Even with deep pockets, it is clear that time to market for each mobile application can be problematic.

Windows Mobile

First item is that it deals with 6.1 and 6.5 only – the classic problem with books on the cutting edge of technology. Good solid advice about a gotcha that some developers may not think about, because they will not think beyond their own phone…

“For gadget developers, the two editions of interest are Windows Mobile Standard and Professional. The important distinction is that
Professional is meant for touchscreen devices, while Standard is not.”

We also discovered why the Opera API was given a full chapter earlier…

“the Opera Widgets API, covered thoroughly for desk-bound computers in Chapter 8, has a smartphone
counterpart. This allows you to run virtually the same widget on all recent versions of Windows Mobile—
and, as you’ll see in the next chapter, the Symbian S60 phone OS as well.”

The author then describes how the T-Mobile “Web’n’Walk SDK is a superset of the Opera Mobile Widgets API, building on its
generic foundation to give significantly greater access to device functionality.” This is important if your startup’s target population has a significant percentage of T-Mobile users. T-Mobile has its own Gallery – something you should be aware of.

Symbian S60

Can we say legacy? Especially after the recent Microsoft–Nokia announcement. I will not dwell much on this chapter beyond saying that the author provides his usual excellent coverage. For example :-)

“Note As of this writing, the S60 emulators have a bug that prevents openURL calls from completing and may even crash the browser. With luck, this bug will be fixed by the time you read this—but if not, now you know.”

If it ain’t fixed yet – it may never be…



iPhone

The treatment is excellent, with a lot of solid recommendations that will save developers from walking down the wrong paths (a typical issue with newbies or those in a non-mentoring environment [“I’ve gotten it to work this way – why should I learn a different way?”-ism]). For example,

“Since it’s not a true gadget API, it doesn’t inherently supply a storage interface, and as an embedded browser instance, neither does it support cookies. As an alternative, I’ve turned to Mobile Safari’s support of SQLite, part of the emerging W3C HTML 5 standard. This allows access to a true SQL database from within JavaScript, more than sufficient for crossPlatform.Storage’s needs.”


Android

I must admit, being an Android owner, this is what I was looking for most – so I can create my own quick web gadgets (instead of firing up Eclipse and writing my own Java apps). It explains how to use the Dalvik Debug Monitor (Dalvik being the VM that runs Android’s Java-dialect code) to find JavaScript errors. It was interesting to note “I’ve based my Android code from my existing iPhone port, and it turns out that the changes needed are few indeed.”, giving good guidance on the development cascade.

The Future of Web Gadgets

This chapter – seeing what is coming down the adoption pipeline – is sweet. As a result, I’m pretty much converted to using SQL for local storage in future work (bizarre – I pretty much patented that idea in one of my patent filings for Microsoft back in 1998, just 13 years ahead of the industry). He summarizes the chapter with:

“The future for web gadgets is bright. Not only are they at home on the newest, fastest-growing  smartphone platforms, but other technology trends are working in their favor as well. Several nascent standards (from the W3C and OMTP) will allow a single gadget code base to do more and on more platforms. And with the emergence of web-based operating systems, starting with Palm’s webOS and Google’s Chrome OS, these platforms will continue to include the most trend-setting devices. There’s never been a better time to build a web gadget.”

His book and coverage leave me in agreement!

Thursday, February 17, 2011

Book Review: Pro Web Gadgets Across iPhone, Android, Windows, Mac, iGoogle and More – Part 3

Today I will cover Part 3 of this excellent (so far) book, Desktop Platforms. The first chapter covers Windows Vista and Windows 7. Alas, the author failed to mention that gadgets can also be used on Windows Server 2008 if you make a ‘little adjustment’. The rest of the chapter is nicely done – covering debugging, publication in Microsoft’s Gallery, as well as how to add a gadget directly to your own website (often a better way of promoting a web gadget to your readership). The non-web extensions (i.e., showing CPU usage, critical event log entries, etc.) are described, but since this is off-topic, he provides the appropriate link and does not get distracted.


The second chapter deals with Windows’ nemesis – Mac OS X. He points out key features that could frustrate Windows types, like case-sensitivity of file names. For testing appearance, he points out that Mac widgets are always rendered with the Safari browser. Issues like the widget bundle being required to include a background image named Default.png do cascade into how you should build the generic version. He explains all of the gotchas; because Apple’s requirements will necessitate more code than you’ve needed for other platforms, it is likely not wise to common-source the code. The chapter provides all of the details that an experienced web and Windows developer needs to create working gadgets for the Mac.


The last chapter in this part deals with Opera. Why? “Opera is an excellent place to port your own widget. There is no other API that natively supports so many hardware and software platforms, so it’s a natural fit with cross-platform development.” The problem is that Opera does not have a significant installed base (approximately 2.2% in 2011). The chapter covers the same critical items in the same excellent depth as the prior ones. Although it does not deal with a major player, the W3C widget specification originated with Opera, so with the formal adoption of the W3C standard (and implementation in various OSes) I would expect it to be an excellent chapter for the future…


That’s it for today, next is mobile platforms.

Wednesday, February 16, 2011

If you move to the cloud and THIS happens….

I caught a breaking story on CBC’s The National this evening, and it pretty much resolved a question that I was struggling with.

The story was that two Canadian federal government departments, Treasury Board and Finance, had been severely breached. It happened a month ago and they are still in recovery mode (a.k.a. finding all of the virtual moles hiding in all of their systems). What they did was the right way of handling a virtual mole, a.k.a. a ghost in the machine – they disconnected both departments entirely from the internet. The moles could not report back to their masters – even if they had internal relay points. If an employee needed to check something on the internet (or even send email outside of the isolated departments), they had to head home or to a coffee shop. The hackers had managed to send out emails, purportedly from senior officials, telling people to change their passwords via a bogus site because there had been a security breach of passwords! Talk about self-fulfilling email messages!

And what was the question? If you move to the cloud and are subject to an equivalent attack, how do you survive? Unlike the above government departments, you cannot pull the plug and get complete isolation. Moving to a private network in the cloud sounds good, but unfortunately it may be a sieve to an elegant infection.

The common mistake that I have often seen is a gross underestimation of the ability of hackers. Typically, people anticipate only what they themselves are capable of doing… hackers are creatures of higher intelligence and perseverance than most folks.

Monday, February 14, 2011

Book Review: Pro Web Gadgets Across iPhone, Android, Windows, Mac, iGoogle and More – Part 2

Finally got some time to get back to this review…

Chapter 3: Developing for Multiple Platforms

The opening paragraph piqued my interest: “if your gadget uses a feature from a specific API, how can you avoid being locked into that API?” This is actually a problem across the industry, with most developers struggling to get a feature working and paying no attention to lock-in issues. In general, I see a lot of developers focused solely on getting some functionality working, with lock-in, performance, and robustness never being considered.
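To make the lock-in concern concrete, here is a minimal sketch (my own, not the book's code) of the adapter approach a cross-platform gadget can take: gadget logic talks only to one storage facade, and each platform plugs in its own backend. All names here are hypothetical:

```javascript
// Gadget code calls only this facade; swapping platforms means swapping
// the backend, not touching gadget logic. Names are placeholders, not
// the book's actual crossPlatform API.
const Storage = {
  backend: null,
  use(b) { this.backend = b; },
  get(key) { return this.backend.get(key); },
  set(key, value) { this.backend.set(key, value); },
};

// Fallback backend: a plain in-memory object, standing in for a platform
// API such as a desktop gadget's settings store or a portal's prefs API.
function memoryBackend() {
  const data = {};
  return { get: k => data[k], set: (k, v) => { data[k] = v; } };
}

Storage.use(memoryBackend());
Storage.set("zip", "98101");
console.log(Storage.get("zip")); // "98101"
```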


The treatment of issues from HTTP proxies to caching was among the best balanced that I have ever read. A path is given, alternative paths are cited, and instead of defending his choice, he gives the pros and cons and then explains why he opted for a specific solution (often it’s clarity of presentation). For example:

Finally, there are also two web gadget platforms, Netvibes and iGoogle, that will supply the configuration user interface for you if you prefer. I’ll be covering the details in Chapters 4 and 5, respectively, but the idea is that you denote your settings in the gadget specification, and the API… (p. 64)

Chapter 4: Netvibes

Again, crisp and candid advice is clearly provided, for example:

“My preference is to retain control over the branding on the face of my gadgets, rather than devoting valuable screen real estate to someone else’s link, so I don’t advocate following this course. Instead, I’ll show how to deploy to Netvibes on its own and (in subsequent chapters) to the other platforms directly.” p.77

A continuous list of caveats and possible issues is provided throughout the chapter. A few examples:

  • Netvibes widgets are served directly from Netvibes’ servers, so all URLs referenced by your gadget must include the global path to the resource; no relative URLs are possible.
  • This is because of a minor peculiarity in the Netvibes process that embeds your gadget within the page. It appears that link elements (like the first syntax shown earlier) are not processed, while style elements are. It’s a minor change but one you need to be aware of.
  • Netvibes caches the source for live widgets shown on user pages. So if you find a problem at this stage, changes you make to your gadget’s source will take up to five minutes to propagate.
  • The concept of gadgets being deployed to third-party web sites (such as blogs) raises a subtle issue concerning user preferences: whose preferences should the gadget be using?
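The last bullet's question ("whose preferences?") can be answered in code; this sketch is mine, with invented parameter names rather than Netvibes' actual API: settings baked into the embed by the page owner win, and the gadget's own defaults fill in the rest.

```javascript
// Sketch of preference resolution for a gadget embedded on a third-party
// site: the embedding page's parameters override the gadget's defaults,
// and unknown parameters are ignored. Names are illustrative.
function resolvePrefs(embedParams, defaults) {
  const prefs = Object.assign({}, defaults);
  for (const key of Object.keys(embedParams)) {
    if (key in defaults) prefs[key] = embedParams[key]; // page owner's choice
  }
  return prefs;
}

const defaults = { city: "Vancouver", units: "metric" };
console.log(resolvePrefs({ city: "Seattle" }, defaults));
// { city: 'Seattle', units: 'metric' }
```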

Chapter 5: iGoogle

I found it nice that Sterling dealt with the minor player before the dominant player. It gives the leader a better rounding than the reverse (if you read the dominant one first, you tend to skip or blow off the minor one). He continues his crisp, detailed development with excellent coverage of issues that would take weeks, if not months, to acquire by trial and error. An example: “Caution Google sends prefs values to the gadget as URL parameters—so if the amount of data is too large, the URL can get corrupted, and the client gadget can cease to work.” (p. 108) and “be aware that myAOL is actually further along this curve than other Google-based containers, and some of my advanced caching techniques actually conflict with this host.”
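Given that caution, a gadget can sanity-check its own serialized prefs size before saving; this is my own sketch with an assumed 2,000-character budget, not a documented Google limit:

```javascript
// Since prefs travel to the gadget as URL parameters, oversized values can
// corrupt the URL. Serialize and check against a conservative budget.
// The 2,000-character figure is an assumption, not Google's actual limit.
const URL_BUDGET = 2000;

function prefsQueryString(prefs) {
  const qs = Object.entries(prefs)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");
  if (qs.length > URL_BUDGET) {
    throw new Error(`prefs serialize to ${qs.length} chars; URL may corrupt`);
  }
  return qs;
}

console.log(prefsQueryString({ city: "Vancouver", units: "metric" }));
// "city=Vancouver&units=metric"
```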

Although this chapter is called iGoogle, it really deals with all of the widget options in the panorama of Google offerings. It mentions some sweetness available, for example:

Google expects that any given gadget may potentially be installed by millions of users and recognizes that such a load is likely to be problematic for the gadget author’s web host. So, in addition to always caching the XML source, the API includes a function that enables you to use Google’s servers as a proxy for your own resources. p.131

That’s it for today – tomorrow, we crack Desktop Platforms.

Saturday, February 12, 2011

Product Development: Divided we fall?

On the way home from ProductCamp Vancouver, we got to discussing the increasing division of product development into smaller and smaller cubicles – each with a hyper-specialized individual in it.

Back in 1979, I had my first professional gig – first as an employee, then consulting. Technically I reported to the head of IT. My customer was a department head managing a 30+ employee group. I met directly with the department head, his reports, and the clerks who would use the application (an interactive system using CICS running on a big IBM 4331 – 2 megabytes of memory!!). I did UX mockups and reviewed them. Shadowed the clerks. Refined the requirements. Then I proceeded to write the specifications. After that I proceeded to design the database and implement it (creating a relational database using ISAM – not a trivial feat). Then I coded up the application in COBOL and tested it (did QA too!). Then it was passed over to user acceptance. Finally, I wrote up documentation for users as well as design documentation. Everything was done interactively, often with a new component delivered every 2 weeks.

The process I followed would be known as agile today. That system stayed in production until Y2K and had to be replaced because the hardware was not Y2K compliant. A year before that happened, I got a call from one of the folks there and found out that there had been almost no changes in the twenty years of operation, and the existing staff was not happy with the replacement – the existing system was still completely satisfactory and had features that the new one lacked.

That pattern was my preferred one, and I used it again and again for almost a decade (with nothing but very satisfied customers and robust systems) until I found myself contracting to Microsoft around 1990. At that point I found myself cut off entirely from contact with the end user, who was hidden behind a dev lead. Over the last 20 years I have seen the number of people involved grow and grow and grow. With each new people-layer, communication issues deepen.

Today a project may well contain the following collection of people:

  • Product Manager
  • User Experience Specialist
  • Graphic Designer
  • CSS Specialist
  • Accessibility Specialist
  • Architect
  • JavaScript Developer Engineer
  • Web Site Developer Engineer
  • Web Services Developer (for JSON etc)
  • Middle Tier Developer Engineer
  • Database Modeler
  • Database Administrator
  • Database Developer Engineer
  • Software Developer Engineer / Test
  • Testers (or Quality Assurance Specialist)
  • Governance Specialist
  • Security Specialist
  • User Documentation Specialist
  • UML Documentation Expert – developing maintenance documentation
  • And of course, a Development Manager and a Test Manager

Each of the above can end up sitting in their own cubicle. Worse yet, if something goes wrong because of a lack of holistic analysis (like poor scalability), finding out the cause – or who is accountable – becomes a challenge. Personally, I have kept my finger in all of the above specialties and can swing into whatever role is needing assistance – which makes me an oddity. I read C.J. Date and I read Jakob Nielsen (and everything between).

My concern is whether the degree of “divide and conquer” happening is becoming a house of cards – I suspect that I may be the little boy in H.C. Andersen’s Kejserens nye Klæder (1837) (The Emperor’s New Clothes – note Denmark did not have an emperor, but a king), which was a reference to a certain emperor in Europe. His tale was a retelling of one from Libro de los ejemplos (1335) – illustrating Solomon’s saying that there’s nothing new under the sun.

Although agile is not new, I am concerned that the use of sprints in SCRUM is locking developers into their cubicles. If development is constantly doing sprints, how can beneficial cross-training in all of the specialties above happen? Are developers being reduced to factory components? Is this hyper-specialization a good thing or a bad thing in the long term? Locking developers (and other specialists) in their cubicles often produces short-run efficiency – but does it sabotage long-term competitive competencies?

ProductCamp–Vancouver: Observations

This weekend I attended ProductCamp Vancouver. Some talks were interesting (a review of information that I already knew), some were so-so (or worse), and others were good “reference card” sessions, such as the one on UX (User Experience). One of the impressions I had was that product development is becoming a tower of Babel, with in-vogue phrases scattered through everyone’s language (or pidgin). A common frustration was poor communication with others on the team (UX – Product Management – Architecture – Development).


One of the most interesting talks (with a small attendance) was a pragmatic talk about how far Agile Development scales. The concept of agile development is well known, but we are seeing terms like ‘agile architecture’ and ‘agile product management’ being tossed around. Often they seem to be used as an excuse for not doing one’s homework. Agile product management means that the architect must be agile – agile to the point of being continuously interrupted. Architect-agility means that already completed, successful agile development often has to be re-done. No one tossed out the term ‘agile testing’ – as opposed to test-driven development.

The term agile generally means the SCRUM approach.


One issue raised was whether SCRUM with a lot of intense sprints results in higher job turnover rates and developer burnout. The literature speculates that SCRUM drops turnover rates – however, there have been no studies verifying this. The question has been raised from personal observations by many people. Some interesting reading is Living in an Agile World: the Strategic Role of Product Management when Development goes Agile. QA and agile present some major challenges – especially when the agile process results in changes (which should occur with agile). The same issue arises with documentation: it needs to be continuously updated to follow the agile evolution. There should not be 6 months between code-complete and release because QA and documentation take 6 months to catch up to the changes.


Interesting issues….

Wednesday, February 9, 2011

Lamenting the lack of full DDL triggers and SQLCLR in SQL Azure

Recently I have been working with an alpha release of the Aditibus™ Policy Server, which supports some nice features well beyond traditional role-based access control (RBAC). These include native support for:

  • Geospatial constraints – based on the physical location of the user
  • Historical constraints – based on the user’s past actions or others’ actions
  • Temporal constraints – based on time, for example, whether someone is on duty or not
  • Policy effectivity – the ability to set policy rules to be turned on or off automatically in the future
  • Strict delegation – giving someone else a set of permissions such that you no longer have those permissions until you recall them
  • Soft delegation – giving someone else a set of permissions while you still retain those permissions until you revoke them (or someone removes them from you)
  • Obligation – following the UCONabc model
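The two delegation flavors are easy to model; this toy sketch is my own illustration of the semantics described above, not Aditibus code:

```javascript
// Toy model of strict vs. soft delegation: strict delegation MOVES a
// permission to the delegate (the delegator loses it until recall),
// while soft delegation COPIES it. Names are invented for illustration.
const perms = { alice: new Set(["approve"]), bob: new Set() };

function delegate(from, to, perm, strict) {
  if (!perms[from].has(perm)) throw new Error(`${from} lacks ${perm}`);
  perms[to].add(perm);
  if (strict) perms[from].delete(perm); // strict: delegator loses it
}

delegate("alice", "bob", "approve", true); // strict delegation
console.log(perms.alice.has("approve"), perms.bob.has("approve")); // false true
```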

Aditibus™’s recommended interface for SQL Server is actually sweet. It is done via DDL; for example, their code snippet for controlling login is very simple:


Code Snippet

CREATE TRIGGER Login_Permission
ON ALL SERVER FOR LOGON -- the scope lines were missing from the snippet; this is my reconstruction
AS
BEGIN
    IF Aditybus.dbo.HasPermission(EVENTDATA()) != 1
    BEGIN
        ROLLBACK;
        RETURN;
    END
END


This allows the addition of Aditibus™’s “Odin’s Oak” product to existing SQL systems without needing to modify the existing code base. Unfortunately, per the documentation, this will not work on SQL Azure. Their HasPermission function is a SQLCLR function which fortunately has alternatives available – but at the cost of lower performance.


So I am hoping that SQL Azure will evolve to integrate better (or that Aditibus will find elegant solutions). As an FYI to newbies, the list of DDL events supported in SQL Server may be found in the SQL Server documentation.

Monday, February 7, 2011

Book Review: Pro Web Gadgets Across iPhone, Android, Windows, Mac, iGoogle and More

For my next book review, this title from Apress caught my attention. The reason is simple: last year I solicited quotes for a startup that I was involved in and got typical prices of $250K per platform, and $350K for Blackberry, for a relatively simple application. For a startup, dropping a million on smartphone applications is not the best decision. I have played with some web gadgets in the past, and long before that with Scriptlets (like 10 years ago!), so this looked like an effective way to address the multiplicity of smartphones and operating systems in today’s culture.


The first chapter was very informative; the key items of interest were:

  • Emerging standards from the W3C – latest version is October 2010. This suggests that they may soon be industry grade.
  • Sound advice:
    At this writing, Flash, Silverlight, and other plug-ins have minimal support on mobile browsers,
    unfortunately. Although this landscape is changing, if you’re hoping to deploy to smartphones, it’s still best to
    avoid such technologies if possible.

  • “The phenomenal exposure that gadgets can provide has its downside. Millions of extra page views can put
    quite a strain on web servers, especially when your gadget is graphics-intensive or makes heavy use of
    server-side application code. This may not be a problem if your gadgets are hosted on enterprise-level
    server hardware, but for small organizations or freelance developers, it can be a real issue.”

  • “a gadget’s single-minded focus to ‘do one thing and do it well’”

The second chapter gave a lot of good pragmatic advice, for example:

    • Avoid requiring the user to log in or create an account.
    • Avoid asking for personal information.
    • Avoid options that must be set before the gadget is used.
    • Avoid splash screens.

This was a major contrast to the last book, which simply told you how without any guidance. The guidance here often contains nuggets that may not occur to you – for example, the situation with a touchscreen phone described below.

“However, there are some issues to be aware of with hover-based techniques, especially if you’re
planning to deploy to mobile phones. First, some handheld devices don’t support the hover modifier in
CSS, so the hovered state won’t be visible. Similarly, touchscreen devices don’t have a usable equivalent
for hover (their screens can’t tell when the user’s finger is hovering over it)”

and more still

“Second, be aware that the precision of touchscreens is less than that of a mouse. Accommodate this
reality by creating larger click areas for all controls. It’s also a good idea to separate adjacent controls
with as much whitespace as possible; this will help avoid the user inadvertently clicking the wrong one.”

A very good discussion of why monolithic code (at least for what is sent to the client) is best. A nice discussion of AJAX, JavaScript frameworks and namespaces, plugins, and cross-browser issues followed. It’s written clearly and focused for a reader who knows HTML page development (no “Hello World” stuff to wade through).
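The namespace discipline such chapters advocate looks roughly like this sketch (the gadget name is a placeholder, not from the book):

```javascript
// A gadget shares the host page's global scope with unknown scripts, so
// everything lives under a single namespace object. "myGadget" is a
// placeholder name for illustration.
var myGadget = myGadget || {};

myGadget.util = {
  // Example helper kept off the global scope.
  formatTitle(s) { return s.trim().toUpperCase(); },
};

myGadget.init = function () {
  return myGadget.util.formatTitle("  moon phase  ");
};

console.log(myGadget.init()); // "MOON PHASE"
```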


This chapter actually develops a simple illustrative gadget showing the moon phase – stripped to the essentials and thus very easy to follow while understanding the issues being discussed.
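The core computation for such a gadget can be sketched independently of any platform API; this is not the book's code, just a back-of-the-envelope moon-phase estimate measured from a known new-moon epoch:

```javascript
// Rough moon-phase estimate: days since a known new moon (2000-01-06),
// modulo the synodic month (~29.53 days). Accurate to within a day or so,
// which is fine for a display gadget; the phase boundaries are my own.
const SYNODIC_MONTH = 29.530588853; // days
const KNOWN_NEW_MOON = Date.UTC(2000, 0, 6, 18, 14); // ms since epoch

function moonAgeDays(date) {
  const days = (date.getTime() - KNOWN_NEW_MOON) / 86400000;
  return ((days % SYNODIC_MONTH) + SYNODIC_MONTH) % SYNODIC_MONTH;
}

function phaseName(ageDays) {
  if (ageDays < 1.85) return "new";
  if (ageDays < 12.9) return "waxing";
  if (ageDays < 16.6) return "full";
  return "waning";
}

// At the reference new moon itself, the age is 0 days.
console.log(phaseName(moonAgeDays(new Date(KNOWN_NEW_MOON)))); // "new"
```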


That’s it for tonight – 5 thumbs up so far!!!!

Sunday, February 6, 2011

MySql (LAMP) versus SQL Server

I was recently asked for comments on LAMP versus SQL Server. LAMP is a suite of products and SQL Server is an RDBMS; LAMP usually means MySQL is being used as the RDBMS, and thus I will give my thoughts on SQL Server versus MySQL. Often the issue boils down to business factors, not technical issues.

A quick job-listing search ("MySQL Developer DC" vs. "SQLServer Developer DC") by salary band:

  • $60,000+ (843)
  • $80,000+ (519)
  • $100,000+ (218)
  • $120,000+ (70)
  • $140,000+ (38)
  • So SQL Server developers appear to be cheaper (at least they are not significantly more expensive). This means the available local supply of resources, especially those holding certifications (proof of minimum knowledge), is an important factor. Both offer certifications. If all of the local expertise comes from using MySQL for informational web sites and you are planning to do high-volume processing, you likely have a major expertise challenge.

    • SQL Server is "ahead of the curve" on features; MySQL is behind. If you don't need the features, it's immaterial.
      • For example, SQL Server is C2 certified; MySQL is not.
      • If you need the latest security features, then SQL Server is definitely the only choice.
    • Since things like ODBC and other connection drivers make the backend immaterial to the front end – it's a coin flip.
      • Caveat: some drivers are lousy – so make sure you know the relative performance and reliability of your drivers. This is largely a MySQL issue.
    • Another item is the availability and effectiveness of a database tuning advisor (DTA). This is available with all SQL Server editions except Express.
      • I have often seen a 90+% improvement when using the SQL Server DTA. Without automated tuning tools, you are going to have performance issues or high manpower costs over time.
      • I would generally suggest designing and testing on SQL Server Web Edition, doing a full database tuning, and then:
        • porting the database to MySQL if the customer wants that route – or
        • moving to SQL Server Express (less work and thus cheaper).
    • The last item is simple: check the current status of known vulnerabilities for each product. You may be surprised that a lot of known defects are not fixed in either product....
      • The issue of how each product gets patched is important... will vulnerability patches be applied as they become available, or will it depend on some human remembering to do it?

Hope that helps explain the significant differences from my perspective. I’m usually working with the upper end of database applications, and thus SQL Server is most common. I have used MySQL on occasion, but usually I have gotten frustrated by features missing from it. For example, I use XML columns a lot, and MySQL is still playing catch-up (IMHO).

    To use a car analogy – is price important? Is having a 5-star crash rating important? Are speed control, electronic stability systems, or anti-lock brakes important? Is ground clearance important? It is those types of questions that usually decide the issue – not the upholstery or the stereo system that the car comes with…

    Saturday, February 5, 2011

    Final Comments on “Microsoft SQL Azure: Enterprise Application Development”(2010) by PACKT

    These are my final comments on the book.


    First, I’m likely a tough reviewer, having been a professional technical writer for Microsoft since the mid-1990s, and I still do that occasionally. Second, I’m a pedagogue (ex-teacher, for those who are vocabulary challenged) and tend to read stuff at several levels – including suitability for teaching or mentoring.

    The first question is: what type of book is this? This book will be useful on my bookshelf because it touches enough areas in sufficient depth to serve as a cookbook of first recipes. The problem is that it tries to span too many target audiences and, as a result, does not serve any of them well.

    Is it a Cookbook?

    The number of items covered and the crispness of the coverage suggest that it is. The problem is that if I compare it to the classic Cookbooks from O’Reilly, it is both too shallow and too sparse. It’s more a collection of recipes clipped from ‘Women’s Journal’ (or should I say, Microsoft articles and blog posts?). There is a place for that, because the book provides a linear structure that wandering across articles lacks.

    Is it an Enterprise Book?

    Definitely not – there was zero coverage of issues that would be of interest to a serious SQL Azure enterprise application. One key example: there is nothing about tuning indexes – and for SQL Azure that can be critical. I will give a simple example: this morning I tuned a single heavily used query that looked like it should run well – 4-6 indices on all of the tables involved, etc. The Database Engine Tuning Advisor did its magic and delivered a 94% improvement. For SQL Azure (because of usage-based billing) that could mean the difference between a $1000/month bill and a $60/month bill. Wait a minute… would it be in Microsoft’s interest to provide easy tools to do this? It would lose $940 of monthly revenue (just loose change)….
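    The kind of change the tuning advisor typically recommends is often just a covering index; a hypothetical sketch (the table, column, and index names are mine, not from the book or the query above):

```sql
-- A covering index of the sort the Database Engine Tuning Advisor often
-- recommends: the key columns match the query's WHERE/ORDER BY, and the
-- INCLUDEd columns let the query be answered from the index alone,
-- avoiding per-row key lookups on the base table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate)
    INCLUDE (Total, Status);
```

    On SQL Azure, fewer reads translate directly into lower resource consumption, which is where the billing difference comes from.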

    The simplest way to see the deficiency is to look at a book like Apress’s Pro SQL Server 2005 and see what is not touched upon. Looking at Apress’s offering Pro SQL Azure, I see chapters such as:

    • Designing for High Performance
    • SQL Azure Design Considerations
    • Performance Tuning

    Those essential enterprise topics are missing. QED

    Is it a “Learning SQL Azure” Book?

    It likely comes closest to this, but for the fact that there is very sparse guidance for the learner. The collection of recipes without guidance may leave too many learners frustrated instead of assured by the last page.

    Is it a VB.NET, C#, or PHP Book?

    It tries to do all three, resulting in a thick tome that will only be partially read by most developers. Creating a tome for each language would likely be a better approach, giving each book greater depth – however, the real issue is how much time a book whose title starts with “Microsoft SQL Azure” should spend in any specific language. IMHO less than 20% of the book/chapters, ideally 10-15%.

    Is it worth buying?

    If you are neither a beginner nor responsible for an enterprise implementation on SQL Azure, I would say that it’s definitely worth considering. You will likely do a lot of skimming and then read carefully the sections that are relevant to your existing style. Its useful life as a stepping-stone tome is likely to be short, but it would likely pay for itself in the time savings it provides.

    Next week, I will do a review of another one of PACKT’s books – stay tuned!

    Friday, February 4, 2011

    Teaching old SQL dogs new cloud tricks – Part 3

    I’m continuing onwards with my review of “Microsoft SQL Azure: Enterprise Application Development”.


    So far my biggest gripe is that, IMHO, this is not an “Enterprise” book. I would suggest the title “Microsoft SQL Azure: Introduction to Application Development” – with that name, I would give the book good ratings (80th percentile – better than 4 out of 5 similar books). With the existing title, it disappoints against what I expected.

    Chapter 6: SSIS and SSRS Applications Using SQL Azure

    The author tried to kludge a solution to a problem in this chapter without doing analysis or coming up with a good solution. A sharded solution for security is clumsy at best; there are better solutions for column-level security. The issues of updates, and of remedying inconsistencies arising from sharding, are neither raised nor addressed. A simpler solution, given that the end deliverable was a Microsoft Access database, would be to just create pass-through tables to the two SQL databases and do an appropriate join in Microsoft Access.

    The second part of this chapter deals with PHP and MySQL – I would have excluded both from this book and written a second book focused on PHP, MySQL, and Azure. C# and VB.NET developers tend to be one camp and PHP developers another – trying to include everyone in the book’s scope is a good way to make no one happy. Having said that, by the end of the chapter I felt we were just hashing and rehashing how to make connections. For a newbie, that may be useful. For someone who has been using ODBC for almost 20 years, it’s redundant and not informative.

    Chapter 7: Working with Windows Azure Hosting

    On first look, this is another chapter that really does not belong in this book – if you open a SQL Server 2008 book, do you find chapters on writing Silverlight applications using SQL Server? No, you don’t (or at least not unless Silverlight is mentioned in the title). The use of ASP.NET instead of MVC was a bit of a disappointment, especially since REST is often mentioned in earlier chapters. I was actually expecting to see a grid implementation with code for the basic CRUD activities – what I got was just wiring up the classic ASP.NET control for user registrations. A better choice would have been to illustrate the use of OpenID or Microsoft Live logins, which are becoming the norm – standalone registrations are becoming very old-school.

    Bottom line: what is covered is done well – but the decision of what to cover was poor.

    Chapter 8:  Database Applications on Windows Azure Platform Accessing SQL Server Databases

    Starting this chapter, I hit a bit of the directionless discussion that I see more and more, and am growing to hate:

    “Both data and applications can stay either on the cloud or in-house. Businesses may want to store only data or part of their data on the cloud and keep their business applications in-house. On the other hand, they may want to keep both applications and data on the cloud. “[P.285]

    A bunch of options are stated, but no explanation of the factors that should be considered in making the decision. Making the right decision is critical for enterprise applications; merely enumerating choices often results in random decisions. When a book is written for newbies, it’s even more essential to mentor them onto the best paths.

    The chapter continues to give sufficient examples of how to connect using a multitude of approaches, but without any comparison of the benefits and disadvantages between methods – for example, the pros and cons of using Stored Procedures with LinqToSQL. I wonder if it is no longer politically correct to express opinions…


    On the plus side, the extensive examples do make it a nice quick-reference book for getting up to speed using whichever method you are most biased toward.

    Chapter 9: Synchronizing SQL Azure

    The use of sync is interesting in that publications and subscriptions are not dealt with or mentioned. The author cites (with insufficient emphasis) the issue of having to order the tables so that foreign keys and referential integrity are not violated. What is missing is a TSQL code snippet to list the tables in the right sequence. Another issue is that of cyclical foreign keys (which happen in serious enterprise databases), which is neither mentioned nor is advice given.
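    A sketch of such a snippet, using the SQL Server 2008-era catalog views (self-referencing foreign keys are ignored; truly cyclical foreign keys will push the recursion past the default MAXRECURSION limit and must be broken out by hand):

```sql
-- List user tables in dependency order (referenced tables first), so
-- inserts and syncs can proceed without violating referential integrity.
WITH TableDepth AS
(
    -- Depth 0: tables that reference no other table
    SELECT t.object_id, 0 AS depth
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1
                      FROM sys.foreign_keys AS fk
                      WHERE fk.parent_object_id = t.object_id
                        AND fk.referenced_object_id <> t.object_id)
    UNION ALL
    -- A referencing table sits one level below each table it references
    SELECT fk.parent_object_id, td.depth + 1
    FROM sys.foreign_keys AS fk
    INNER JOIN TableDepth AS td
            ON fk.referenced_object_id = td.object_id
    WHERE fk.parent_object_id <> fk.referenced_object_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS [schema],
       OBJECT_NAME(object_id)        AS [table],
       MAX(depth)                    AS depth   -- load in ascending depth order
FROM TableDepth
GROUP BY object_id
ORDER BY depth, [table];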


    The treatment of synchronization is good on the basics. There were topics of interest that were not discussed that I would be interested in such as:

    1. The impact, behavior, and risk of doing a sync over a 50 kbps connection with a 10GB database while the SQL Azure database is under load.
    2. Syncing a single SQL Server 2008 R2 database with a collection of SQL Azure 10GB databases across which the data is partitioned.
    3. Any issues that could arise if 100 SQL Compact databases are being concurrently synced.

    The above are issues that will arise with the serious enterprise use of SQL Azure.

    Chapter 10: Recent Developments

    This touches on a ton of changes since the book’s first draft was completed, and allows a quick, effective getting up to speed…

    • SQL Azure updates
    • SQL Azure security
    • Using SQL Azure Firewall API
    • SQL Azure with MS Access 2010 and MS Excel
    • OpenOffice Access to SQL Azure
    • Accessing SQL Azure with non-.NET Framework Languages
    • OData Service for SQL Azure
    • Consuming SQL Azure data with PowerPivot
    • SQL Azure with WebMatrix
    • More third-party tools to SQL Azure
    • Managing SQL Azure databases with the Houston Project (CTP1)
    • Data Application Component and SQL Azure

    The unfortunate thing is that this information will likely be stale in 8 months.


    That’s it for the chapters – I’m off to sleep on it, and then I’ll write up a bottom-line summary of the book tomorrow.

    Teaching old SQL dogs new cloud tricks – Part 2

    I’m continuing onwards with my review of “Microsoft SQL Azure: Enterprise Application Development”.


    For those that are interested, Microsoft is offering a 30-Day Pass:


    USA Developers: Windows Azure Platform 30-Day Pass
    We're offering a Windows Azure platform 30-day pass, so you can put Windows Azure and SQL Azure through their paces. Use promo code MSDNT1. No credit card required. With the Windows Azure platform you pay only for what you use, scale up when you need capacity and down when you don't.

    One caveat: Microsoft’s “accept this license” page is case sensitive for your first and last name – I suspect poor quality control of their contractors…

    Chapter 3: Working with SQL Azure Databases from Visual Studio 2008

    For a book published in 2010 not to use the latest edition of Visual Studio available (2010) is a little disappointing. What is confusing is that on p.106 we suddenly jump to VS 2010 Express – suggesting that the technical editing needs to be tougher.


    The structure diagram on page 96 is informative, although I would be inclined to include Java, Perl, and other popular languages besides PHP, because all of them support ODBC, which is the fulcrum factor. The author did not mention a third alternative access method, FreeTDS.


    The chapter slips back into a “Hello World” tone, explaining things like IntelliSense, which is disappointing for a book asserting Enterprise content.

    The author spent many, many pages on SQLConnectionStringBuilder – again, a “Hello World” style of explanation…


    The editing of some of the graphics is sloppy. Doing a blur is quick and dirty and can often backfire – for example, one screenshot shows Password=”??????;” where the last “;” was left clear – does this mean the password always needs a {;} in it? Earlier graphics blur over the {“}, leaving it unclear whether the password needs to be quoted.


    By the time that I reached the Summary on page 133, I started to get the feeling that the title to the book should be renamed, dropping the “Enterprise” and adding “Introduction to”. Let us see what the next chapter holds.


    Chapter 4: SQL Azure Tools

    One thing struck me quickly on starting the chapter: LINQ is not cited, while Entity Framework is. Given that a large number of C# developers have adopted LINQ, it is a bit confusing not to see any mention of it yet [it shows up later, on p.286, for just 10 pages – about the same number of pages given to SQLConnectionStringBuilder].


    This chapter proceeds through a long litany of tools without comments directing the user to which tools are recommended and why. In general, a chapter like this should describe no more than three tools in detail and then cite alternatives (and why they never made the cut). Dumping a long list on the user is unfair – instead of getting up to speed fast, they will be wandering and wondering around in the wilderness of tools, trying to figure out what they should use.


    The summary says it all:

    In this chapter, we looked at tools that can be used with SQL Azure, which includes tools from Microsoft, third party, and open source. Microsoft's tight integration of SQL Server 2008 R2 with two versions of Visual Basic as well as its Management Studio-related tools such as SSMS, Import/Export, and Data; SyncFramework; SSMS; scripting support for SQL Azure; SQLCMD; BCP Utility; IIS7 Database Manager and ODATA service are described, some of them with examples. Several third-party tools such as those from SQL Azure Migration wizard, SQL Azure Explorer, SQL Azure Manager, Cerebrata, DBArtisan, Red Gate, and ToadSoft are also described.
    In the next chapter, we will be using some of the tools mentioned in this chapter to migrate data from an on-site server to SQL Azure.

    We looked at tools, we did not provide guidance or clear recommendations….


    Chapter 5: Populating SQL Azure Databases

    This chapter starts with the promise of being better focused [p.179]:

    However, we will be focusing on the following topics:
    • Using SQL Server Management Studio with scripts
    • Using the SQL Server Import and Export Wizard
    • Using SQL Azure Migration Wizard
    • Migration from MySQL to SQL Azure using SQL Server Migration Assistant 2008 for MySQL
    • Using SqlBulkCopy 

    Personally, I would prefer that it be trimmed to just three. The MySQL material could be deferred to the chapter on PHP later in the book. Walking through the examples, I would focus on the SQL Azure Migration Wizard as the best guidance for the user. The other methods can allow issues to occur that may not be apparent and may result in lengthy troubleshooting.

    By the end of the chapter I was pleased with the coverage and would recommend it for an “Introduction to” title. My expectations of this book have changed to “Introduction…” from “Enterprise”.


    Tomorrow it looks like we may get into more interesting stuff, including sharded data, which (IMHO) is a kludge to get some security into SQL Azure because more elegant methods available in SQL Server 2008 via SQLCLR are not available… stay tuned!

    Wednesday, February 2, 2011

    Teaching old dogs some new tricks… SQL Server to SQL Azure…

    I have been involved with SQL Server since before the first beta versions went out – I was working as a consultant to Microsoft’s Internal Technology Group, and we were the bleeding-edge folks in those days. I was heavily involved with stress and performance analysis (given the nickname on my door of “Dr. Science” because of my statistical analysis), and we had the joy of getting up to two different builds a day from the Dev Division when we encountered issues.


    Today, I have several projects that will likely need to be cloud-supportable in the near future, so it’s time I get up to speed – and hope they don’t re-invent the technology before I need to build commercial systems on it. Two books have come across my desk recently that on first read appear ideal. The first one is on SQL Azure and is not focused on the “Hello World” style of book often seen. A second aspect is that it’s from a new-kid-on-the-block publisher, and often they do a better job than the old folks, who find a particularly successful formula and proceed to create a factory of cloned topics. Many authors that I know personally complain about the dropping rates for tech books in the US, and I see some of them dropping writing in favor of consulting (or even blogging!!). The publisher is based in the UK and India and may represent the next generation of top-notch technical books – because the rates for top authors are more competitive than with US authors. This is the modern reality – offshoring of technical books is becoming a reality.


    Concerning SQL Azure: If the book was not first published in 2010 (or later), forget it – SQL Azure has evolved too much in the last year.




    In this series of blogs, I will both review the chapters and share observations. I am not a complete novice with SQL Azure – but every time I tried it, I stopped because it was just not far enough along to risk commercial development on – in my last post, I will give my current feeling on this.


    Chapter 1: Cloud Computing and Microsoft Azure Services Platform

    A good, crisp introduction that describes well the chaos in Cloud Computing. There’s a bunch of competing products and a lot of jargon – J.K. does a nice job of summarizing the key aspects.

    The table on page 16 is a nice checklist for deciding on a cloud platform. In my pending projects, a key question is this choice:

    • Using SQL Server 2008 on Amazon, or
    • SQL Azure

    What is lacking is the percentage of SQL Server 2008 features that are available on SQL Azure. The 90% figure given for Visual Studio features is a good bit of information (which I will return to in later posts). The top table on page 19 is a nice statement of a performance-impacting fact – at least SIX different versions in 2010


    … a rate of change that would be unacceptable in many development environments.


    Some minor points of disagreement/nit-picking:

    • First, I dislike embedded URLs in the text – they belong in footnotes. I hope the editors of future tomes will change this practice.
    • Second, tables are not captioned (see the table on page 16).
    • There are a few hanging references: for example, p.24 says “SQL Azure was updated with SU1.” What is SU1? This is the first reference to it, and it’s not defined as “Service Update” until page 352.
    • P.30: too much irrelevant detail describing the hardware and software.
    • Exercise 1.1 should be moved to an appendix; it disrupts the flow too much (and gives a taste of a “Hello World” book).
    • “Windows Azure Platform that provides support for a relational database in the Cloud” – having been involved with relational databases since 1980 and being a follower of C.J. Date [if you don’t know C.J. and claim to be an RDBMS person, that would be equivalent to claiming to be a nuclear physicist while not knowing who Niels Bohr is], I would take exception to the term “relational” being appropriate…

    Chapter 2: SQL Azure Services

    A good, but rather pro-forma, presentation. The absence of Dynamic DNS support, and of the ability to use a client certificate instead of an IP address to get through the firewall, is not brought out (i.e. best-case writing…). Some key points are not adequately called out – for example, the point on p.89 would be better presented as something like:

    Note: Unlike SQL Server, unless a table has a clustered index you will not be able to insert values. A clustered index must be created before an insert operation is allowed on the table.

    The Note should be boxed and emphasized; the need to have a clustered index on every table is a significant change from SQL Server 2008.
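    A minimal illustration of the difference (table and column names are hypothetical):

```sql
-- On SQL Azure, a heap (a table with no clustered index) rejects inserts:
CREATE TABLE dbo.Orders (OrderID INT NOT NULL, Total MONEY);
-- INSERT INTO dbo.Orders VALUES (1, 9.99);  -- fails: no clustered index

-- Declaring the primary key as CLUSTERED (or creating a clustered index
-- before the first insert) makes the table usable:
CREATE TABLE dbo.Orders2
(
    OrderID INT NOT NULL,
    Total   MONEY,
    CONSTRAINT PK_Orders2 PRIMARY KEY CLUSTERED (OrderID)
);
INSERT INTO dbo.Orders2 VALUES (1, 9.99);  -- succeeds
```

    The same DDL runs without complaint on on-premise SQL Server 2008, which is exactly why the difference deserves a boxed note.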


    Some minor points of disagreement/nit-picking:

    • Figures are not captioned (see page 52)
    • No discussion of how to deal with dynamic DNS (many small firms do not pay extra for a fixed IP – if one even happens to be available; none of my available ISPs offers a fixed IP. I do have a dynamic DNS entry which always points to my current IP).
    • TSQL code examples (for example, p.91) should use NCHAR and NVARCHAR (the web is multilingual). Worse yet, the book uses the NTEXT and IMAGE types, which (like TEXT) are obsolete!
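    For example (a hypothetical table, not one from the book), the obsolete types map onto their modern replacements like this:

```sql
-- Obsolete types (deprecated since SQL Server 2005) and their replacements:
--   TEXT -> VARCHAR(MAX)    NTEXT -> NVARCHAR(MAX)    IMAGE -> VARBINARY(MAX)
CREATE TABLE dbo.Articles
(
    ArticleID INT            NOT NULL,
    Title     NVARCHAR(200)  NOT NULL,  -- Unicode, so any language works
    Body      NVARCHAR(MAX)  NOT NULL,  -- replaces NTEXT
    Photo     VARBINARY(MAX) NULL,      -- replaces IMAGE
    CONSTRAINT PK_Articles PRIMARY KEY CLUSTERED (ArticleID)
);
```

    The MAX types also work with normal string functions and comparisons, which the old TEXT/NTEXT/IMAGE types largely did not.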

    Tomorrow I will go through two more chapters… My current rating (based on the first two chapters) is that this book runs around the 75th percentile: out of 4 random books on this topic, it’s better than three of them.