Showing posts from May, 2016

Theory about Test Environments

Throughout my career I have often had to test in an arbitrary environment. The environment preceded my arrival and was usually still there at my departure, with many developers having become fatalistic about it. This is not good.

The Rhetorical Goal Recomposed

"We use our test environment to verify that our code changes will work as expected." While this assures upper management, it lacks the specifics needed to evaluate whether the test environment is appropriate or complete. A more objective measurement would be: the code changes perform as specified at the six-sigma level of certainty. This then cascades logically into sub-measurements:

A1: The code changes perform as specified at the highest projected peak load for the next N years (typically 1-2) at the six-sigma level of certainty.
A2: The code changes perform as specified on a freshly created (perfect) environment at the six-sigma level of certainty.
A3: The code changes perform as specified on a
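As a rough illustration of what "the six-sigma level of certainty" implies for test volume (this arithmetic is my addition, not the post's): six sigma is conventionally quoted as about 3.4 defects per million opportunities, and demonstrating a failure rate that low with zero observed failures requires a very large number of independent test executions.

```python
import math

# Six-sigma defect rate (with the conventional 1.5-sigma shift):
# about 3.4 defects per million opportunities.
defect_rate = 3.4e-6

# If n independent test executions all pass, we can claim the true failure
# rate is below defect_rate at 95% confidence once (1 - defect_rate)^n <= 0.05.
confidence = 0.95
n = math.ceil(math.log(1 - confidence) / math.log(1 - defect_rate))
print(n)  # roughly 881,000 test executions
```

This back-of-the-envelope number is one way to judge whether a test environment can even approach such a claim.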

The sad state of evidence based development management patterns

I have been in the development game for many decades. I wrote my first programs using APL/360 and Fortran (WatFiv) at the University of Waterloo, and have seen and coded in many languages over the years (FORTH, COBOL, Asm, Pascal, B, C, C++, SAS, etc.). My academic training was in Operations Research, that is, the mathematical optimization of business processes. Today, the development processes I see are dominated by "fly by the seat of the pants", "everybody is doing it", or "academic correctness". I am not talking about waterfall or agile or scrum. I am not talking about architecture, etc. Yet in some ways I am. Some processes assert Evidence Based Management, yet fail to deliver evidence of better results. Some bloggers detail the problems with EBM. A few books attempt to summarize the little research that has occurred, such as "Making Software: What Really Works and Why We Believe It". As an Operations Research person, I would define the o

Mining PubMed via Neo4J Graph Database–Getting the data

I have a blog dealing with various complex autoimmune diseases and spend a lot of time walking links at . Often readers send me an article that I missed. I thought that a series of posts on how to do it would help other people (including MDs, grad students, and citizen scientists) better research medical issues.

Getting the data from PubMed

I implemented simple logic to obtain a collection of relevant articles:

1. Query for 10,000 articles on a subject or keyword.
2. Retrieve each of these articles and any articles they reference (i.e. the knowledge graph).
3. Keep repeating until you have enough articles or you run out of them!

Getting the bootstrapping list of articles

A console application reads the command line arguments and retrieves the list. For example, downloader.exe Crohn's Disease, which produces this URI
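The bootstrap query can be sketched against the NCBI E-utilities esearch endpoint. The post does not show the URI its downloader.exe actually emits, so treat the endpoint and parameters below as my assumption of what such a tool would build.

```python
from urllib.parse import urlencode

# NCBI Entrez E-utilities search endpoint (returns PMIDs for a query).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def bootstrap_uri(term: str, retmax: int = 10000) -> str:
    """Build an esearch URI returning up to `retmax` PubMed IDs for `term`."""
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term, "retmax": retmax})

print(bootstrap_uri("Crohn's Disease"))
```

Fetching that URI yields the list of article IDs used to seed the crawl; each article's references can then be retrieved the same way to grow the graph.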

Microservices–Do it right!

In my earlier post, A Financially Frugal Architectural Pattern for the Cloud, I advocated the use of microservices. Microservices are, like REST, a concept or pattern or architectural style, unlike SOAP, which is standards based. The modern IT industry trends towards "good enough", "lip service", and "we'll fix it in the next release". A contemporary application may use relational database software (SQL Server, Oracle, MySql), and thus the developers (and their management) assert that theirs is a relational database system. If I moved a magnetic-tape-based system into tables (one table for each type of tape) using relational database software, would that make it a relational database system? My opinion is no, never!

Then what makes it one? The data has been fully normalized in the logical model. Often the database has never been reviewed for normalization, despite such information being ancient (see William Kent, A Simple Guide to Five Normal Forms in Relational
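A tiny illustration of the difference (the table and field names here are hypothetical, invented for this sketch): a "one table per tape" design simply flattens the old record layout, repeating customer facts on every row, whereas a normalized design stores each fact exactly once.

```python
# Tape-style design: every order row repeats the customer's attributes.
tape_style_orders = [
    {"order_id": 1, "customer": "Acme", "customer_city": "Toronto", "item": "bolt"},
    {"order_id": 2, "customer": "Acme", "customer_city": "Toronto", "item": "nut"},
]

# Normalized design: customer attributes live in one place; orders hold a key.
customers = {"C1": {"name": "Acme", "city": "Toronto"}}
orders = [
    {"order_id": 1, "customer_id": "C1", "item": "bolt"},
    {"order_id": 2, "customer_id": "C1", "item": "nut"},
]

# In the tape-style design, changing the customer's city means touching every
# row; miss one and the data contradicts itself (an update anomaly).
for row in tape_style_orders:
    row["customer_city"] = "Ottawa"

# In the normalized design the same change is a single write.
customers["C1"]["city"] = "Ottawa"
```

Using relational software does not remove the anomaly; normalizing the logical model does.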