Thursday, February 18, 2016

Extending a linux web dashboard

Adding pm2 status to linux-dash

linux-dash is a light-weight, open-source web dashboard for monitoring your linux machine or virtual machine. You can find this package on GitHub.

The dashboard reports many different aspects of your linux installation via shell scripts (*.sh). This keeps the dashboard light-weight and lets it work on most linux machines. The web page displays live charts and tables. The web server can be node, php, or go. For the node webserver, the only dependencies are express and websocket.

Extending linux-dash

You may have a few extra services or programs running on your linux installation that you would like to display on linux-dash. I use pm2, a process manager. Adding a table to display the pm2 status information was very easy -- even if you are not familiar with client-side Angular directives, server-side Node.js, or server-side shell scripts.

The naming convention and templating let us focus on the few components we need to build without wrestling with the glue between them.

pm2 Dashboard Design

The 'pm2 list' command shows a table with information on the command line.
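The output looks roughly like this (an abbreviated sketch; the exact columns and widths vary with your pm2 version and processes):

```
┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────────┬──────────┐
│ App name │ id │ mode │ pid  │ status │ restart │ uptime │ memory  │ watching │
├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────────┼──────────┤
│ www      │ 0  │ fork │ 1452 │ online │ 0       │ 2D     │ 48.3 MB │ disabled │
│ api      │ 1  │ fork │ 1455 │ online │ 2       │ 2D     │ 51.1 MB │ disabled │
└──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────────┴──────────┘
 Use `pm2 show <id|name>` to get more details about an app
```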

We want to show this in the linux-dash website on the applications tab in its own table.

In order to do that, we need:
  1. a new shell script - to capture the results of running "pm2 list" and return json
  2. changes to the glue code - to find the script and display its results as a table

Installing linux-dash

If you do not have linux-dash installed, you will need to get it. Clone it from GitHub, make sure the scripts have execute permissions, and start the webserver with sudo privileges.

Writing the server-side Shell script

This section applies to services that report a snapshot - a single point in time.

If you have not written a shell script before, no need to worry. There are plenty of examples of shell scripts in /server/modules/shell_files. The final output of the shell script needs to be either an empty json object such as {} or an array of values such as [{},{},{}]. If the values are single key/value pairs (1 key, 1 value), they will display as a 2-by-n grid of information.

The second choice is a table (an array of json objects with more columns), which is what we need.

pm2 list output

The command I usually run at the command line is "pm2 list" -- the response shows each process with uptime, status, and other information in a table.

We need to know which lines to ignore (1-3, 6, 7) and which to include (only 4 and 5).
Make sure each line of your output is accounted for as either ignored or parsed. While I ignored the header and footer, perhaps your data should be included.

The shell script needs to be able to read each row into a meaningful json object such as:
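For example, one object per pm2 process, along these lines (a sketch only; the keys match the fields the script parses, the values are invented):

```json
{
    "appName": "www",
    "id": "0",
    "mode": "fork",
    "pid": "1452",
    "status": "online",
    "restart": "0",
    "uptime": "2D",
    "memory": "48.3MB",
    "watching": "disabled"
}
```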


The script has three sections. The first sets the command text in the variable 'command'. The second executes the command and stores the returned text (the command-line table) in the variable 'data'. The third section has two parts.

Part (a) executes if the 'data' variable has any length. Part (b) returns an empty json object if the 'data' variable is empty.

Most of the work is in section 3.a with the 'awk' command. The first line pipes the 'data' variable through tail, passing only lines 4 and greater to head. Head drops the last 2 lines and pipes the rest to awk.

The rest of 3.a works through each column of each row, getting the values: $6 means the sixth column. The columns include the column-break character '|', so make sure to include those in the count.

If you are watching the trailing commas, you may be wondering how the last one is removed. Bash has a couple of different ways; I'm using the older bash syntax ${json%?}, which strips the final character from the variable.


#1: set text of command
command="pm2 list"

#2: execute command
data=$($command)

#3: only process data if variable has a length 
#this should handle cases where pm2 is not installed
if [ -n "$data" ]; then

    #a: start processing data on line 4
    #don't process last 2 lines
    json=$( echo "$data" | tail -n +4 | head -n -2 \
    | awk   '{print "{"}\
        {print "\"appName\":\"" $2 "\","} \
        {print "\"id\":\"" $4 "\","} \
        {print "\"mode\":\"" $6 "\","} \
        {print "\"pid\":\"" $8 "\","}\
        {print "\"status\":\"" $10 "\","}\
        {print "\"restart\":\"" $12 "\","}\
        {print "\"uptime\":\"" $14 "\","}\
        {print "\"memory\":\"" $16 $17 "\","}\
        {print "\"watching\":\"" $19 "\""}\
        {print "},"}')
    #make sure to remove last comma and print in array
    echo "[" ${json%?} "]"
else
    #b: no data found so return empty json object
    echo "{}"
fi

Make sure the script has execute permissions then try it out on your favorite linux OS. If you have pm2 installed and running, you should get a json object filled in with values.

At this point, we are done with the server-side code. Isn't that amazing?

Naming conventions

The client-side piece of the code is connected to the server-side script via the naming convention. The script lives on the server in the server/modules/shell_files directory. For the client-side Angular files, you need to use the same name (or the Angular version of the same name).

Client-side changes for Angular

The Angular directive will be pm2, used as an element:

    <pm2></pm2>

Add the pm2 directive at the end of /templates/sections/applications.html, after the file's existing directives.


Since the pm2 directive is at the end, it will display as the last table. Notice I haven't actually built a table in html, css, or any other method.

I just added a directive using a naming convention tied to the server-side script. Pretty cool, huh?

Routing to the new Angular directive

The last piece is to route the directive 'pm2' to a call to the server for the matching shell script.
In the /js/modules.js file, the routing for simple tables is controlled by the 'simpleTableModules' variable. Find that section. We need to add a new json object to the array of name/template sections.

    {
        name: 'pm2',
        template: '<table-data heading="P(rocess) M(anager) 2" module-name="pm2" info="pm2 read-out."></table-data>'
    },

It doesn't matter where in the array the section is added, just that the naming convention is used. Notice the name is 'pm2' and the template.module-name is set to the same value of 'pm2'.

If I wanted a simple table of 2 columns instead of 9 columns, the json object would look like:

    {
        name: 'pm2',
        template: '<key-value-list heading="P(rocess) M(anager) 2" module-name="pm2" info="pm2 read-out."></key-value-list>'
    },

The key-value-list changes the html display to a 2-column, n-row table.


With very little code, you can add reports to linux-dash. For the server-side, you write a shell script with execute permissions that outputs a json object. For the client-side, you add the directive's syntax to the appropriate section template, then add a route to the modules.js file. The biggest piece of work is getting the shell script right.

Now that you know how to create new reporting tables for linux-dash, feel free to add your own code to the project via a pull request.

Friday, February 5, 2016

Prototyping in MongoDB with the Aggregation Pipeline stage operator $sample


The World Map as a visual example

In order to show how the random sampling works in the mongoDB query, this NodeJS Express website will show the world map and display random latitude/longitude points on the map. Each refresh of the page produces new random points. Below the map, the returned documents are displayed.

Once the website is up and working with data points, we will play with the query to see how the data points change in response.

The demonstration video is available on YouTube.

Setup steps for the website


This article assumes you have no mongoDB database, no website, and no data. It does assume you have an account on Compose. Each step is broken out and explained. If you already have one of the pieces, such as a mongoDB with latitude/longitude data or a website that displays it, skip ahead to the next step.
  1. get website running, display map with no data
  2. setup the mongoDB+ ssl database
  3. get mock data including latitude and longitude
  4. insert the mock data into database
  5. update database data types
  6. verify world map displays data points


When the website works and the world map displays data points, let's play with it to see how $sample impacts the results.
  1. understand the $sample operator
  2. change the row count
  3. change the aggregation pipeline order
  4. prototype with $sample

System architecture

The data import script is /insert.js. It opens and inserts a json file into a mongoDB collection. It doesn't do any transformation.

The data update script is /update.js. It updates the data to numeric and geojson types.

The server is a NodeJS Express website using the native MongoDB driver. The code uses the filesystem, url, and path libraries. This is a bare-bones express website. The /server/server.js file is the web server, with /server/query.js as the database layer. The server's address is routed to /public/highmap/world.highmap.html, and the data query is made from the client file /public/highmap/world.highmap.js.

The client files are in the /public directory. The main web file is /highmap/world.highmap.html. It uses jQuery as the javascript framework, and highmap as the mapping library which plots the points on the world map. The size of the map is controlled by the /public/highmap/world.highmap.css stylesheet for the map id.

Step 1: The NodeJS Express Website

In order to get the website up and going, you need to clone this repository, make sure nodeJS is installed, and install the dependency libraries found in the package.json file.

Todo: install dependencies

npm install

Once the dependencies are installed, you can start the web server.

Todo: start website

npm start

Todo: Request the website to see the world map. The map should display successfully with no data points.

Step 2: Setup the Compose MongoDB+ Deployment and Database

If you already have a mongoDB deployment with SSL to use, along with the following items, you can move on to the next section:
  • deployment public SSL key in the /server/clientcertificate.pem file
  • connection string for that deployment in /server/config.json
Todo: Create a new deployment on Compose for a MongoDB+ database with an SSL connection.

While still on the Compose backoffice, open the new deployment and copy the connection string.

Todo: Copy connection string

You will need the entire connection string in order to insert, update, and query the data. The connection string uses a user and password at the beginning and the database name at the end.

You also need to get the SSL Public key from the Compose Deployment Overview page. You will need to login with your Compose user password in order for the public key to show.

Todo: Save the entire SSL Public key to /server/clientcertificate.pem.

If you save it somewhere else, you need to change the mongodb.certificatefile setting in /server/config.json.

You will also need to create a user in the Deployment's database.

Todo: Create new database user and password. Once you create the user name and user password, edit the connection string for the user, password, and database name.

connection string format
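The general shape of a MongoDB connection string is (the placeholders stand in for your deployment's values; Compose deployments typically list two comma-separated host:port pairs):

```
mongodb://<user>:<password>@<host1>:<port1>,<host2>:<port2>/<database>
```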


connection string example


Todo: Change the mongodb.url setting in the /server/config.json file to this new connection string.

    "mongodb": {
        "data": "/data/mockdata.json",
        "url": "mongodb://,",
        "collection": "mockdata",
        "certificatefile": "/clientcertificate.pem",
        "sample": {
            "on": true,
            "size": 5,
            "index": 1

Step 3: The Prototype Data

If you already have latitude and longitude data, or want to use the mock file included at /data/mockdata.json, you can skip this step.

Use Mockaroo to generate your data. This lets you get data, including latitude and longitude, quickly and easily. Make sure to add the latitude and longitude fields in json format.

Make sure you have at least 1000 records for a good show of randomness and save the file as mockdata.json in the data subdirectory.

Todo: Create mock data and save to /data/mockdata.json.
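For reference, a mock record shaped like this would satisfy the queries later in the article (the field names match the $project stage used in /server/query.js; the values are invented):

```json
{
    "first_name": "Jane",
    "last_name": "Doe",
    "latitude": "-6.40539",
    "longitude": "106.9181"
}
```

Note that latitude and longitude are strings here; Step 5 converts them to floats.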

Step 4: Insert the Mock Data into the mockdata Collection

The insert.js file converts the /data/mockdata.json file into the mockdata collection in the mongoDB database.

Note: This script uses the native MongoDB driver and the filesystem node package. The Mongoose driver can also use the ssl connection and the $sample operator. If you are using any other driver, check that it supports both ssl and $sample.

The configuration is kept in the /server/config.json file. Make sure it is correct for your mongoDB url, user, password, database name, collection name and mock data file location. The configuration is read in and stored in the privateconfig variable of the insert.js script.

The mongos section of the config variable is for the SSL mongoDB connection. You shouldn't need to change any values.


var MongoClient = require('mongodb').MongoClient,  
  fs = require('fs'),
  path = require('path');

var privateconfig = require(path.join(__dirname + '/config.json'));
var ca = [fs.readFileSync(path.join(__dirname + privateconfig.mongodb.certificatefile))];
var data = fs.readFileSync(path.join(__dirname + privateconfig.mongodb.data), 'utf8');
var json = JSON.parse(data);

MongoClient.connect(privateconfig.mongodb.url, {
    mongos: {
        ssl: true,
        sslValidate: true,
        sslCA: ca,
        poolSize: 1,
        reconnectTries: 1
    }
}, function (err, db) {
    if (err) {
        console.log(err);
    } else {
        db.collection(privateconfig.mongodb.collection).insert(json, function (err, result) {
            if (err) console.log(err);
            db.close();
        });
    }
});
Todo: Run the insert script.

node insert.js
If you create an SSL database but don't pass the certificate, you won't be able to connect to it. You will get a sockets closed error.

Once you run the script, make sure you can see the documents in the database's mockdata collection.

Step 5: Convert latitude and longitude from string to floats

The mock data's latitude and longitude are strings. Use the update.js file to convert the strings to floats as well as create the geojson values.

var MongoClient = require('mongodb').MongoClient,  
  fs = require('fs'),
  path = require('path');

var privateconfig = require(path.join(__dirname + '/config.json'));
var ca = [fs.readFileSync(path.join(__dirname + privateconfig.mongodb.certificatefile))];

MongoClient.connect(privateconfig.mongodb.url, {
    mongos: {
        ssl: true,
        sslValidate: true,
        sslCA: ca,
        poolSize: 1,
        reconnectTries: 1
    }
}, function (err, db) {
    if (err) console.log(err);
    if (db) console.log("connected");
    db.collection(privateconfig.mongodb.collection).find().each(function (err, doc) {
        if (doc) {
            console.log(doc.latitude + "," + doc.longitude);
            var numericLat = parseFloat(doc.latitude);
            var numericLon = parseFloat(doc.longitude);
            doc.latitude = numericLat;
            doc.longitude = numericLon;
            // geojson points use [longitude, latitude] order
            doc.geojson = { location: { type: 'Point', coordinates: [numericLon, numericLat] } };
            db.collection(privateconfig.mongodb.collection).save(doc);
        } else {
            // each() passes a null doc once the cursor is exhausted
            db.close();
        }
    });
});
Todo: Run the update script.

node update.js

Once you run the script, make sure you can see the documents in the database's mockdata collection with the updated values.

Step 6: Verify world map displays points of latitude and longitude

Refresh the website several times. This should show different points each time. The variation of randomness should catch your eye. Is it widely random, or not as widely random as you would like?

Todo: Refresh several times

The warning of the $sample behavior says the data may duplicate within a single query. On this map that would appear as less than the number of requested data points. Did you see that in your tests?

How $sample impacts the results

Now that the website works, let's play with it to see how $sample impacts the results.
  1. understand the $sample code in /server/query.js
  2. change the row count
  3. change the aggregation pipeline order
  4. prototype with $sample

Step 1: Understand the $sample operator in /server/query.js

The $sample operator controls random sampling of the query in the aggregation pipeline.
The pipeline used in this article is a series of array elements in the arrangeAggregationPipeline function in the /server/query.js file. The first array element is the $project section which controls what data to return.


var aggregationPipeItems = [
    { $project: {
            last: "$last_name",
            first: "$first_name",
            lat: "$latitude",
            lon: "$longitude",
            Location: ["$latitude", "$longitude"]
    }},
    { $sort: {'last': 1}} // sort by last name
];

The next step in the pipeline is the sorting of the data by last name. If the pipeline runs this way (without $sample), all documents are returned and sorted by last name.

The location of $sample is controlled by the pos value in the url. If pos isn't set, the position defaults to 1. With pos=1 in the zero-based array, $sample is applied between $project and $sort, in the second position. If the code runs as supplied, the data set is randomized, the sample documents are selected, and then those rows are sorted. This is meaningful in that the data is both random and returned sorted.

Note: In order for random sampling to work, you must use it in connection with 'rows' in the query string.

We will play with the position in step 3.
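To make the positioning concrete, here is a minimal sketch of how a $sample stage could be spliced into the pipeline (an illustration, not the project's exact /server/query.js code; the `addSampleStage` helper and its validation-free handling of `pos` are assumptions):

```javascript
// Hypothetical helper: insert a $sample stage into a copy of the
// aggregation pipeline at a zero-based index.
// `rows` is the requested document count; `pos` defaults to 1, which
// places $sample between $project (index 0) and $sort (originally index 1).
function addSampleStage(pipeline, rows, pos) {
    var index = (pos === undefined) ? 1 : pos;
    var stages = pipeline.slice(); // copy, so the base pipeline stays reusable
    stages.splice(index, 0, { $sample: { size: rows } });
    return stages;
}
```

With pos=2 the stage lands after $sort, which is why the results come back unsorted in that case.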

Step 2: Change the row count

The count of rows is a parameter in the url to the server, when the data is requested. Change the url to indicate 10 rows returned.

Todo: request 10 rows, with sorting applied after

Step 3: Change the aggregation pipeline order

The aggregation pipeline order is a parameter in the url to the server. You can control it with the 'pos' name/value pair. The following url is the same as Step 2 but the aggregation pipeline index is explicitly set.

Todo: request 10 rows, with sorting applied after

Note: Only 0, 1, and 2 are valid values
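As an illustration, the requests in these Todo steps are just query-string variations on the site's address:

```
?rows=10          rows only, $sample at its default position 1
?rows=10&pos=1    the same, with the position set explicitly
?rows=10&pos=2    $sample applied after $sort, so results are unsorted
```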

The results below the map should be sorted.

If the $sample position is moved to the 0 position, still before the sort is applied, the browser shows the same result.

Todo: request 10 rows, with sorting applied after

However, if $sample is the last item (pos=2), the entire set is sorted first, and then the sample rows are selected. The results are no longer sorted.

Todo: request 10 rows, with sorting applied before

Note that while the documents are returned, they are not in sorted order.

If they happen to arrive sorted, it isn't because they were sorted, but because the random pick fell that way by accident, not on purpose.

Step 4: Prototype with $sample

The mongoDB $sample operator is a great way to try out a visual design without needing all the data. At the early stage of the design, a quick visual can give you an idea if you are on the right path.

The map with data points works well for 5 or 10 points, but what about 500?

Todo: request 500 rows

The visual appeal and much of the meaning of the data is lost in the mess of the map. Change the size of the points on the map.

Todo: request 500 rows, with smaller points on the map using 'radius' name/value pair


The $sample aggregation pipeline operator in mongoDB is a great way to build a prototype testing with random data. Building the page so that the visual design is controlled by the query string works well for quick changes with immediate feedback.

Enjoy the new $sample operator. Leave comments about how you have or would use it.