Sunday, October 15, 2017

Paperspace.com for Windows 10 Desktops in the cloud

I recently needed a clean Windows 10 desktop for testing purposes. I couldn't use Docker, Vagrant, VirtualBox or any of the other local-to-my-computer VM solutions. I tried the Amazon Workspaces solution but it had the typical UX of 30 questions I don't know the answer to. I just wanted a desktop that could download from the Internet.

Paperspace.com to the rescue!

Within 5 minutes I had a Windows 10 desktop in a browser, and the hardest questions were how much CPU and RAM I wanted. The cost was incredibly low too.

The test only took 10 minutes and I destroyed the cloud machine after that.

This was the easiest, fastest, and cheapest solution to the problem.

Monday, September 18, 2017

SQL Server AAD Authentication Error

Are you seeing this error from your ADO.NET code trying to connect to your SQL Server via Active Directory authentication?

Exception=System.Data.SqlClient.SqlException (0x80131904): Failed to authenticate the user NT Authority\Anonymous Logon in Active Directory (Authentication=ActiveDirectoryIntegrated). Error code 0x800703FA; state 10 Illegal operation attempted on a registry key that has been marked for deletion. 

The issue might be that you are running a Windows service or a scheduled task under a user that has logged off. In Windows Server, the registry hive for the current user is loaded when the user logs into the machine and unloaded when they log off. This means that if your service is running under a user that is logged off, then that registry hive will not be available. ADO.NET AAD in .NET 4.6.1 uses the registry hive of the running user -- which could be the user of a service.

To solve this problem you need to tell Windows Server not to unload the registry hive for users when they log off.
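If Group Policy is an option, the relevant setting is "Do not forcefully unload the users registry at user logoff" under Computer Configuration > Administrative Templates > System > User Profiles. The equivalent registry change is sketched below; the path and value name are to the best of my knowledge, so verify against your Windows Server version before applying:

```shell
REM Tell the profile service not to forcefully unload user registry hives at logoff
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\System" /v DisableForceUnload /t REG_DWORD /d 1 /f
```

Note this keeps hives loaded indefinitely, so weigh the memory cost on shared servers.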



The Asp.net Core 2.0 Environment for NodeJs Developers

NodeJs & .Net Core

NodeJs and the Asp.net ecosystems have used very different paradigms until recently. This article will show you where they are similar – the 10,000-foot view – when using the Asp.net Core 2.0 environment.

With NodeJs, as the developer you might have chosen to build a web server or a cli tool. Your favorite integrated development environment (IDE) might just be a terminal window or Google Chrome DevTools.

With Asp.net (not .Net Core), Microsoft provided the web server (IIS) and the IDE (Visual Studio). You can develop many different applications including web sites and cli tools. With .Net Core 2.0, the focus is on portable code, and the result feels much closer to NodeJs.

For this article, I'm going to ignore all issues that would make the comparison apples to oranges and instead focus on issues that make the comparison apples to apples.

Moving forward, any reference to Asp.net Core 2.0 will be just .Net Core. NodeJs 6.9.1 will be just Node. The example project name will be kittens so you can see how it is used when it is relevant.

Text Only

The files of both .Net Core and Node are plain text files. A project has a single configuration file. The top of each code file references the libraries that are used in the file.

In .Net Core 1.0, the project file was a project.json, but in .Net Core 2.0, the project file is a *.csproj file. The project file for .Net Core is created with the cli command "dotnet new ...". The XML of the .Net Core file will be familiar to .Net developers.

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.0</TargetFramework>
<RuntimeIdentifiers>win10-x64;osx.10.11-x64;linux-x64;ubuntu.14.04-x64</RuntimeIdentifiers>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
</ItemGroup>
</Project>

In Node, the project file is the package.json created with the npm cli command "npm init"; the result is a JSON object containing information about the project. I added expressjs to the package so the dependencies object would have an entry.

{
  "name": "kittens",
  "version": "1.0.0",
  "description": "",
  "main": "kittens.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.15.4"
  }
}

Each of the package files configures the project, but in very different ways. The .Net Core file is focused on the .Net Core version and the final built assets of the project – things the compiler and dependencies need to run.

The Node package.json file lists all dependencies to pull in, as well as meta data about the project – it is the single location a developer touches to control the project as a whole, including building and testing. I usually have a long list of items in the Node package.json scripts section for building, testing, and running.
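As a sketch, a scripts section for the kittens project might look like this (the script names and commands are hypothetical, not part of any particular starter):

```json
"scripts": {
  "start": "node kittens.js",
  "build": "webpack --config webpack.config.js",
  "test": "karma start karma.conf.js",
  "e2e": "protractor protractor.conf.js"
}
```

Each entry runs with "npm run <name>", which keeps the whole project workflow in one file.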

Execution 

Both Node and .Net Core applications are built via terminal/command line. Both are executed in the context of the parent application: dotnet or node.

While Node needs a single point of entry of a javascript file, such as kittens.js, .Net Core uses the project name (<name>.csproj).

> node kittens.js

> dotnet kittens.dll

Dependencies are added to Node projects with the npm or yarn package managers. The dependencies are placed in a folder named node_modules.

> npm install <packagename>

For .Net Core dependencies, use the NuGet package manager. Direct dependencies show up as PackageReference entries in the *.csproj, but there isn't a complete, human-readable list of dependencies that is opened/viewed by developers on a regular basis.

> dotnet add package <packagename>

Both npm and dotnet allow you to specify versions, but it is optional. Npm and yarn have long been dealing with package interdependencies and version conflicts. I believe both are now able to avoid version conflicts of interdependencies.

Deployment 

Both platforms allow you to 'git clone' then install the package dependencies and run.

.Net Core deploys as a folder containing many files, which vary by execution platform (Windows, Mac, Linux). For Node, you just install and run a single entry file; Node will then find the rest of the files.

> dotnet restore && dotnet ./<path-to-bin-directory>/kittens.dll

> npm install && node kittens.js

Serverless 

The reason I mention serverless is you may not want to deal with the execution environment details.
Node has a variety of host platforms and open-source projects to run serverless.

Currently, the best way to run .Net Core serverless is Azure Functions. While there are other platforms such as AWS Lambda that can run .Net Core code, those platforms may lag slightly (or vastly) behind the released .Net Core platform's capabilities.

Docker 

Docker containers are a great way to develop and deploy applications. I always look for owner-created containers. If it works for them, it will work for me.

Here are the owner-created containers for each:

node
microsoft/aspnetcore

For Node, the specific owner container isn't that interesting because there are a million ways you may want to run Node and a different container may get you much closer to your needs in terms of dependencies and tools.

For .Net Core, the owner container is probably the best bet for the short-term while the project is still new-ish.


Sunday, May 7, 2017

Agile Manifesto -- Some 16 years later

"In 2001, 17 individuals gathered in the Wasatch mountains of Utah to find common ground around Agile. After much skiing, talking, relaxing, and eating, they arrived at four common values that led to the development of the Agile Manifesto."[source] [Wikipedia]

Going to AgileManifesto.org, we see just 14 people listed, along with what appear to be their feelings about what promised so much hope some 16 years ago.



  1. Mike Beedle 
  2. Arie van Bennekum
  3. Alistair Cockburn
    1. "Agile has become overly decorated. Let's scrape away those decorations for a minute, and get back to the center of agile. The center of agile is …" [2015]
  4. Ward Cunningham
    1. "I have seen my ideas diluted as they diffused through the industry" and "I'd much rather move to the next idea than struggle to keep the last idea pure." [2011]
  5. Martin Fowler
  6. Jim Highsmith - 
    1. Don’t “Control” Agile Projects
    2. Agile Bureaucracy: When Practices become Principles
  7. Andrew Hunt The Failure of Agile (2015)
  8. Ron Jeffries
    1. Do you want Crappy Agile? [2016] "…it encourages you to track 'metrics'. Do you want Crappy Agile? That's how you get Crappy Agile, at least far too often."
    2. "Scrum is good, when done as intended. Otherwise it can be oppressive and dangerous to developers. Let's study: Defense Against the Dark Arts of Scrum." Dark Scrum
  9. Jon Kern
    1. "The only thing that has me mildly torn--and only mildly, because "to each his own" and "what-evs" come to mind--is the craze surrounding Scrum. Scrum, scrum, scrum, scrum. Part of me is happy that there are legions out there turning the sod under, trying to sow the agile seeds via Scrum. Another part of me thinks it is like Corn-to-Ethanol. A waste of energy for the dumbfounded among us, spurred on by lots of lobbying." [2011]
    2. Have not posted on his blog about agile since 2011
  10. Brian Marick "In his keynote at the Agile Development Practices conference, Brian Marick described values missing from the Agile Manifesto. His view is that the Manifesto was essentially a marketing document, aimed at getting business to give agile a chance." [2008]
    1. "As Agile moves into bigger companies and into less adventurous ones, those undocumented values are getting watered down. If that continues, I’m afraid Agile will be this decade’s fad that changes nothing."
  11. Robert C. Martin (aka Uncle Bob) moved on to "Clean Code" 
  12. Ken Schwaber owns Scrum.org
  13. Jeff Sutherland CEO of Scrum Inc
  14. Dave Thomas 

    Agile is Dead (Long Live Agility) - PragDave

Missing are: [source]

Question - Agile is good if selling it is your bread and butter

Many agile signatories appear to have become disillusioned by how Agile is commonly being adopted in the industry. Some people have stopped writing and posting on it. Others have proposed derivatives of it in the hope of getting back to core values. Others are making a good living from preaching the gospel of agile.

My view? Simple: I started using RAD back in 1979 and have been using the concepts (without the dogma) for the last 37 years. The best implementations have been where the product owner/customer was very technical and detail-oriented and put in the needed hours. The worst cases were where scrum was magically expected to make management easy and not require heavy engagement or hours with the products they owned or managed.

I like the concepts and practice them instinctively; I hate the rote application of the manifesto.

Monday, May 1, 2017

Running Tests in Docker for Front-end Developers (ng2+)

Unit tests (karma/jasmine) & End-to-end tests (protractor)

If you are comfortable with Docker and Angular 2, all you need is the Dockerfile. If you are new to testing in Docker, this short article will bring you up to speed. If you need more help getting up to speed with Docker, begin with my previous article: Docker for Angular 2 devs.

Installation 

In order to develop Angular 2+ in a container, you need to install Docker on a computer or VM. Docker takes a bit of memory and can take a lot of space, so the biggest box you can give it is best. Once Docker is installed, start it up.

Make sure all your docker commands are run from the folder that contains the Dockerfile.

Dockerfile 

Docker works by reading the description of a Dockerfile (or several in conjunction), to build out an image. Once the image is running, it is called a container.

This particular Dockerfile is based on a Docker image that already has headless chrome, markadams/chromium-xvfb-js:7. There are several Dockerfiles at Docker hub for headless chrome. If this base image or my additions do not work, feel free to play around with other Dockerfiles. You will find references/websites at the bottom of this article.

What Does this Dockerfile do 

This Dockerfile installs the angular2-webpack-starter (and all dependencies). I use it as a development environment that I connect to and run the tests myself as I develop. I'm using the angular2-webpack-starter as my base test that the Dockerfile has all the needed dependencies to complete both the unit tests and the end-to-end tests successfully.

If the starter can run both types of tests, it should be fairly straightforward to add your own repository to the container for development and testing. The container has the full stack for this: Git, NPM, Yarn, Node, Angular 2+, ng-cli, Webpack, Karma, Jasmine, Protractor, and http-server.

Why use the angular2-webpack-starter?

I wanted an Angular 2+ project that used as much of the same stack as my current projects that also had unit tests and e2e tests. In order to validate I have the right Dockerfile configuration, all I have to do is run both types of tests in this repository. If they succeed, my own tests for my own projects should succeed as well.

Can I see it working?  

Yes, I have a short 4 minute video showing it from building the docker image to the final e2e test run. I've cut out 11 minutes of Docker building the image as there is a lot of installation of dependencies.



Using as a Development Environment 

I use this container as my main development & testing environment. I share my local repository folder into the container with the -v param and expose a small range of ports to allow for several front-end and api servers to run at once.

The angular2-webpack-starter runs on port 3000 so I open 3000-3005. This allows you to use your host computer to access the website (http://localhost:3000) as long as the port isn't in use on your host machine.
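Putting the pieces together, my run command looks something like this (the image name, host path, and /app mount point are placeholders -- adjust them for your own machine and Dockerfile):

```shell
docker run -it \
  -p 3000-3005:3000-3005 \
  -v /Users/dfberry/angular2-webpack-starter:/app \
  headless-chrome
```

The port range keeps several dev servers reachable from the host at once.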

What are the important parts of the Dockerfile?  

There are definitely packages and settings in the Dockerfile you may not use, but there are a few you need to keep.

Protractor needs to know where the Chrome bin directory is. If you remove that environment variable, Protractor will not be able to run the tests. Protractor also needs the default-jre.
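In the Dockerfile, those two requirements look something like the following; the exact binary path depends on the base image, so treat this as a sketch to verify against your own image:

```dockerfile
# Protractor/Karma launch Chrome via this variable; the path is an assumption
# for a chromium-based image -- confirm it inside your container
ENV CHROME_BIN /usr/bin/chromium

# Protractor's selenium server needs a Java runtime
RUN apt-get update && apt-get install -y default-jre
```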

Getting karma/jasmine tests working in a container using headless chrome seems to be easy compared to protractor e2e tests. If you change the file, always use the protractor e2e test run as the verification that the change did no harm.

Git Commit Hash 

Since the repositories for both my Dockerfile and the angular2-webpack-starter will change over time, I'm noting the git commit hashes of the versions I used.

https://github.com/AngularClass/angular2-webpack-starter
commit e9521a42  - Sat Apr 22 19:27:33 2017 -0400

https://github.com/dfberry/DockerFiles/blob/master/headless-chromium/Dockerfile
commit 38b7554c Sun Apr 30 13:23:17 2017 -0700

References 

https://github.com/mark-adams/docker-chromium-xvfb
https://hub.docker.com/r/yukinying/chrome-headless/
https://hub.docker.com/r/justinribeiro/chrome-headless/

Thursday, October 20, 2016

Docker for Angular 2 devs

Docker is a Virtual Environment
Docker containers are great for adding new developers to existing projects, or for learning new technologies without polluting your existing developer machine/host. Docker allows you to put a fence around the environment while still using it.

Why Docker for Angular 2?
Docker is an easy way to get up and going on a new stack, environment, tool, or operating system without having to learn how to install and configure the new stack. A collection of docker images are available from Docker Hub ranging from simple to incredibly complex -- saving you the time and energy.

Angular 2 examples frequently include a Dockerfile in the repository which makes getting the example up and running much quicker -- if you don't have to focus on package installation and configuration.

The base Angular 2 development stack uses Node, TypeScript, Typings, and a build system (such as SystemJs or Webpack). Instead of learning each stack element before/while learning Angular 2, just focus on Angular 2 itself -- by using a Dockerfile to bring up a working environment.

The repositories for Angular 2 projects will have a package.json file at the root, which is common for NodeJs/NPM package management projects. The Docker build will install the packages in the package management system as part of the build. The build can also transpile the TypeScript code and start a static file web server -- if the package.json has a start script.

In order to get a new development environment up and a new project in the browser, you just need to build the Dockerfile, then run it. Running these two commands at the terminal/cli saves you the time of finding and learning the Angular 2 stack before building and running the project.

The Angular 2 Quickstart
For this article, I use the Angular 2 Quickstart repository including the Dockerfile found in the repository.

I use a Macintosh laptop. If you are using a Windows-based computer/host, you may have more or different issues than this article.

Docker via Terminal/Cli
I prefer the code-writing environment and web browser already installed and configured on my developer laptop/host. I configure the Docker container to share the hosted files. The changes are reflected in the container – and I run the Angular 2 project in watch mode so the changes immediately force a recompile in the container.

Viewing the Angular Project in a Browser
Since the Angular 2 project is a website, I access the container by the port and map the container's port to the host's port – so access to the running Angular project is from a web browser on the host laptop with http://localhost:3000.

Install Docker
Before you install Docker, make sure you have a bit of space on the computer. Docker, like Vagrant and VirtualBox, uses a lot of space.

Go to Docker and install it. Start Docker up.

Check Docker
Open a terminal/cli and check the install worked and Docker started by requesting the Docker version


docker -v
>Docker version 1.12.1, build 6f9534c 

If you get a docker version as a response, you installed and started Docker correctly.

Images and Containers
Docker images are defined in the Dockerfile and represent the environment to be built. The instantiation of an image is a container. You can have many containers based on one image.

Each image is named and each container can also be named. You can use these names to indicate ownership (who created it), as well as base image (node), and purpose (xyzProject).

Pick a naming schema for your images and containers and stick with it.

I like to name my images with my github name and the general name such as dfberry/quickstart. I like to name the containers with as specific a name as possible such as ng2-quickstart.

The list of containers (running or stopped) shows both names, which can help you organize and find the container you want.

The Angular 2 Docker Image
The fastest way to get going with Docker for Angular 2 projects is to use the latest node as your base image -- which is also what the Angular 2 quickstart uses.

The image has the latest node, npm, and git. Docker hub hosts the base image and Node keeps it up to date.

Docker's philosophy is that the containers are meant to execute then terminate with the least privileges possible. In order to make a container work as a development container (i.e. stay up and running), I'll show some not-best-practice choices. This will allow you to get up and going quickly. When you understand the Docker flow, you can implement your own security.

The Docker Images
Docker provides no images on installation. I can see that using the command


docker images 

When I build the nodejs image, it will appear in the list with information about the image.



For now, the two most important columns are the REPOSITORY and IMAGE ID. The REPOSITORY field is the image name I used to build the image. My naming schema indicates my user account (dfberry) and the base image or purpose (node). This helps me find it in the image list.

The IMAGE ID is the unique id used to identify the image.

The Dockerfile
In order to create a docker image, you need a Dockerfile (notice the filename has no extension). This is the file the docker cli will assume you want to use. For this example, the Dockerfile is small. It has the following features:
  • creates a group
  • creates a user
  • creates a directory structure with appropriate permissions
  • copies over the package.json file from the host
  • installs the npm packages listed in the package.json
  • runs the package.json's "start" script – which should start the website

For now, make sure this is the only Dockerfile in the root of the project, or anywhere below the root.


# To build and run with Docker:
#
#  $ docker build -t ng-quickstart .
#  $ docker run -it --rm -p 3000:3000 -p 3001:3001 ng-quickstart
#
FROM node:latest

RUN mkdir -p /quickstart /home/nodejs && \
groupadd -r nodejs && \
useradd -r -g nodejs -d /home/nodejs -s /sbin/nologin nodejs && \
chown -R nodejs:nodejs /home/nodejs

WORKDIR /quickstart
COPY package.json typings.json /quickstart/
RUN npm install --unsafe-perm=true

COPY . /quickstart
RUN chown -R nodejs:nodejs /quickstart
USER nodejs

CMD npm start

The nodejs base image will install nodejs, npm and git. The image will just be used for building and hosting the Angular 2 project.

If you have scripts that do the bulk of your build/startup/run process, change the Dockerfile to copy that file to the container and execute it as part of the build.
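For example, if a build.sh drives your build, the Dockerfile addition might look like this (the script name is hypothetical; the /quickstart path matches the Dockerfile above):

```dockerfile
# Copy the host's build script into the image and run it during the build
COPY build.sh /quickstart/
RUN chmod +x /quickstart/build.sh && /quickstart/build.sh
```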

Build the Image
Usage: docker build [OPTIONS] PATH | URL | -

In order to build the image, use the docker cli.


docker build -t <user>/<yourimagename> .
Example $: docker build -t dfberry/ng-quickstart .


If you don't want to annotate the user, just leave that off.


docker build -t <yourimagename> .
Example $: docker build -t ng-quickstart .

Note: the '.' at the end of the string is the url/location of the Dockerfile. I could have used a Github repository url instead of the local folder.

In the above examples, the REPOSITORY name is 'ng-quickstart'. If you don't use the -t naming param, your image will have a name of <none>, which is annoying when they pile up on a team server.

The build will give you some feedback to let you know how it is going.


Sending build context to Docker daemon 3.072 kB 
Step 1 : FROM node:latest 

... 

Removing intermediate container 2cb50f334393 
Successfully built 1265b22b5b90

Since the build can return a lot of information, I didn't include the entire response.

The build of the quickstart takes less than a minute on my Mac.

The last line gives you the IMAGE ID. Remember to view all docker images after building to check it worked as expected.


docker images

Run the Container
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...] 

Now that the image is built, I want to run the image to see the website.

If you don't have an Angular 2/Typescript website, use the ng2 Quickstart.

Run switches
The run command has a lot of switches and configurations. I'll walk you through the choices for this container.

I want to name the container so that I remember the purpose. This is optional but helpful when you have a long list of containers.


--name ng2-quickstart 

I want to make sure the container's web ports are matched to my host machine's port so I can see the website as http://localhost:3000. Make sure the port isn't already in use on your host machine.


-p 3000:3000 

I want to map my HOST directory (/Users/dfberry/quickstart) to the container's directory (/quickstart) so I can edit on my laptop and the changes are reflected in the container. The /quickstart directory was created as part of the Dockerfile.


-v /Users/dfberry/quickstart/:/quickstart 

I want the terminal/cli to show the container's responses including transpile status and the file requests.


-it 


The full command is:


docker run -it -p 3000:3000 -v /Users/dfberry/quickstart:/quickstart --name ng2-quickstart dfberry/ng-quickstart

Notice the image is named dfberry/ng-quickstart while the container is named ng2-quickstart.

Run the "docker run" command at the terminal/cli.

The container should be up and the website should be transpiled and running.


At this point, you should be able to work on the website code on your host with your usual editing software and the changes will reflect in the container (re-transpile changes) and in the web browser.

List Docker Containers
In order to see all the containers, use


docker ps -a

If you only want to see the running containers, leave the -a off.



docker ps



At this point, the easy docker commands are done and you can work on the website and forget about Docker for a while. When you are ready to stop the container, stop Docker or get out of interactive mode, read on.

Interactive Mode (-it) versus Detached Mode (-d)
Interactive Mode means the terminal/cli shows what is happening to your website in the container. Detached mode means the terminal/cli doesn't show what is happening and the terminal/cli returns to your control for other commands on your host.

To move from interactive to detached mode, use control + p + control + q.

This leaves the container up but you have no visual/textual feedback about how the website is working from the container. You can use the developer tools/F12 in the browser to get a sense, but won't be able to see http requests and transpiles.

You are either comfortable with that or not.

If you want the interactive mode and the website transpile/http request information, don't exit interactive mode. Instead, use control + c. This stops the container but doesn't remove the image; remove the stopped container (docker rm ng2-quickstart) and you can re-enter interactive mode with the same run command above.

If you are more comfortable in detached mode, where the website gives transpiles and http request information via a different method such as logging to a file or cloud service, change the docker run command.

Instead of using -it as part of the "docker run" command, use -d for detached mode.

Exec Mode to run commands on container
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

When you want to connect to the container, you use the same -it for interactive mode but with "docker exec." The final argument tells Docker what command to run in the container -- such as the bash shell.


docker exec -it ng2-quickstart /bin/bash

You can log in as root if you need elevated privileges.


docker exec -it -u root ng2-quickstart /bin/bash 

The terminal/cli should now show the prompt changed to indicate you are now on the container:


nodejs@faf83c87c12e:/quickstart$  

When you are done running the commands, use control + p + control + q to exit. The container is still running.

Sudo or Root
In this particular quickstart nodejs docker container, sudo has not been installed. Sudo may be your first choice, and you can install it. Or you could use "docker exec" as root. Either way has pros and cons.

Stopping and Starting the Container
When you are done with the container, you need to stop it. You can stop and restart it as needed, by container id or name.


docker stop ng2-quickstart 
docker stop 7449222ec26b 

docker start ng2-quickstart 
docker start 7449222ec26b 

Stopping the container may take some time – be patient. Mine takes up to 10 seconds on my Mac. When you restart the container, it is in detached mode. If you really want interactive mode, remove the container, and use docker run again with -it.

Cleanup
Make sure to stop all containers when they are not needed. When you are done with a container, you can remove it


docker rm -fv ng2-quickstart 

When you are done with an image, you can remove that as well


docker rmi ng-quickstart 

Stop Docker
Remember to stop Docker when you are done with the containers for the moment or day.

Monday, September 5, 2016

An interesting Interview Question: Fibonacci Sequence

"Write a function to calculate the nth Fibonacci number" is a common interview question, and often the solution is something like:

 

int Fib(int n)
{
    if (n <= 1) return n;  // base cases: Fib(0) = 0, Fib(1) = 1
    return Fib(n - 1) + Fib(n - 2);
}

 

The next question is to ask: for n=100, how many calls will be made? The answer is not 100 but actually horrible! It is closer to 2^100.

Take the first call: we spawn a call for Fib(99) and one for Fib(98). There is nothing to allow Fib(99) to borrow the result of Fib(98). So one step becomes two recursive calls, and each subsequent call again turns into two more. For example:

  • Fib(2) –> calls [Fib(1), Fib(0)]
  • Fib(3) –> calls [Fib(2) –> [Fib(1), Fib(0)], Fib(1)]
  • Fib(4) –> calls [Fib(3) –> [Fib(2) –> [Fib(1), Fib(0)], Fib(1)], Fib(2) –> [Fib(1), Fib(0)]]
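The blow-up is easy to measure by instrumenting the naive version with a call counter -- a quick sketch in JavaScript (the C-style version above behaves the same way):

```javascript
let calls = 0; // total number of fib invocations

function fib(n) {
  calls++;
  if (n <= 1) return n;            // base cases: fib(0)=0, fib(1)=1
  return fib(n - 1) + fib(n - 2);  // each call spawns two more
}

const result = fib(10); // 55, but it takes 177 calls to get there
```

The call count grows exponentially -- calls(n) = 2*fib(n+1) - 1 -- so fib(100) would require on the order of 10^21 calls.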

Missing this issue is very common among by-rote developers (who are excellent for some tasks).

 

A better solution is to cache the values as each one is computed – effectively creating a lookup table. You are trading stack space for memory space.
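A memoized sketch in JavaScript: each value is computed once and cached, so the call tree collapses to linear size:

```javascript
const cache = new Map(); // lookup table of already-computed values

function fibMemo(n) {
  if (n <= 1) return n;
  if (cache.has(n)) return cache.get(n); // trade memory for repeated work
  const value = fibMemo(n - 1) + fibMemo(n - 2);
  cache.set(n, value);
  return value;
}
```

Note this still uses O(n) stack depth; an iterative loop with two variables avoids even that.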

 

Placing constraints on memory and stack space may force the developer to do some actual thinking. A solution that conforms to this is shown below

 

  private static long Fibonacci(int n)
  {
      // Fast doubling: F(2k) = F(k) * (2*F(k+1) - F(k)), F(2k+1) = F(k)^2 + F(k+1)^2
      long a = 0L;  // F(0)
      long b = 1L;  // F(1)
      for (int i = 31; i >= 0; i--)  // 31 is arbitrary, see below
      {
          long d = a * (b * 2 - a);
          long e = a * a + b * b;
          a = d;
          b = e;
          if ((((uint)n >> i) & 1) != 0)
          {
              long c = a + b;
              a = b;
              b = c;
          }
      }
      return a;
  }

 

The output of the above shows what is happening, and suggests that replacing the "31" with the log base 2 of n can likely improve efficiency.
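A sketch of that refinement in JavaScript, using BigInt so large n doesn't overflow and Math.clz32 to find the highest set bit instead of hard-coding 31:

```javascript
function fastFib(n) {
  // Fast doubling: F(2k) = F(k) * (2*F(k+1) - F(k)), F(2k+1) = F(k)^2 + F(k+1)^2
  let a = 0n; // F(0)
  let b = 1n; // F(1)
  for (let i = 31 - Math.clz32(n); i >= 0; i--) { // start at n's highest set bit
    const d = a * (2n * b - a);
    const e = a * a + b * b;
    a = d; // F(2k)
    b = e; // F(2k+1)
    if ((n >> i) & 1) { // advance one step when the bit is set
      const c = a + b;
      a = b;
      b = c;
    }
  }
  return a; // F(n) as a BigInt
}
```

For n = 0, Math.clz32 returns 32, the loop never runs, and F(0) = 0 falls out for free.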

[screenshots of the output for n = 32, 65, and 129 omitted]

 

What is the difference in performance for the naive vs the latter?

I actually did not wait until the naive solution finished… I aborted it at 4 minutes.


The improved version ran in 85 ms, roughly a 3,000-fold improvement.

Take Away

This question:

  1. Identifies if a person knows what recursion is and can code it.
  2. Identifies if the candidate understands the consequences of recursion and how it will be executed (i.e. thinks about what the code does).
    1. Most recursion questions are atomic (i.e. factorial) and not composite (recursion that is not simple).
  3. Identifies if the candidate can analyze a simple mathematical issue and generate a performant solution.