Wednesday, December 6, 2017


I'm looking into the Toptal software developers network for consulting projects, and I have to say it looks interesting.  With my background in security, DevOps, application development, and infrastructure, I think it will be a good place to find new projects.

So why do I think I would be a good fit for Toptal projects?

To begin with, I love working with computers and have an extensive background in the field.  My degree is in CSE, and I know Java, C/C++, JavaScript, Python, and a slew of other languages for both coding and scripting.  My continued education with AWS certification and Certified Ethical Hacker (CEH) training also shows how much I like to learn new things.  And my research into DevOps methodologies, artificial intelligence, and robotics extends my capabilities beyond basic computing.

We will see how it goes with Toptal.  I will keep you posted here on any relevant updates!


Tuesday, September 12, 2017

BlueBorne Bluetooth Vulnerabilities

Security researchers have discovered eight vulnerabilities -- codenamed collectively as BlueBorne -- in the Bluetooth implementations used by over 5.3 billion devices. Researchers say the vulnerabilities are undetectable and unstoppable by traditional security solutions. No user interaction is needed for an attacker to exploit the BlueBorne flaws, nor does the attacker need to pair with a target device. They affect the Bluetooth implementations in Android, iOS, Windows, and Linux, impacting almost all Bluetooth device types, from smartphones to laptops, and from IoT devices to smart cars. Furthermore, the vulnerabilities can be combined into a self-spreading Bluetooth worm that could wreak havoc inside a company's network or even across the world. "These vulnerabilities are the most serious Bluetooth vulnerabilities identified to date," an Armis spokesperson told Bleeping Computer via email. "Previously identified flaws found in Bluetooth were primarily at the protocol level," he added. "These new vulnerabilities are at the implementation level, bypassing the various authentication mechanisms, and enabling a complete takeover of the target device."

Wednesday, December 28, 2016

An Alexa Skill for Integration with Weather Underground

When using Alexa with my Echo Dot, I found that the integrated weather forecast was not that accurate for my location.  Where I live, we have a significant microclimate, and getting the accurate forecast for my location is best done through Weather Underground.  I have my own personal weather station which provides a good forecast, and I wanted to link that in to my Alexa Flash briefing.  Unfortunately, there doesn't appear to be a Skill for that at this time, so I went through the process of creating my own and wanted to share details of that here.

The basic steps to set this up are as follows:
  • Get a Weather Underground API key to get the forecast and alert data in JSON format
  • Setup an Amazon Lambda function to read the WU API and translate it into a format that Alexa can understand for the Flash Briefing
  • Create an Amazon API interface to call the Lambda function
  • Setup the Alexa Skill for a Flash Briefing item which uses the API interface just setup
  • Turn on the skill on your Amazon Echo Dot (or other Alexa device)

Get a Weather Underground API key 

Sign up if you haven't already, and log in to generate an API key.  The process is simple, and free for under 500 calls per month.  Just be sure to sign up for the "Cumulus" plan to get both forecast and alerts.  Once you have the API key, you'll be using two of the API calls to get the information for Alexa.  These are:

http://api.wunderground.com/api/xxxxxxxxxxxxxxx/alerts/q/pws:KVTSTARK3.json
http://api.wunderground.com/api/xxxxxxxxxxxxxxx/forecast/q/pws:KVTSTARK3.json

where xxxxxxxxxxxxxxx will be your WU API key.

For the actual location, you will want to replace q/pws:KVTSTARK3.json with the refined location for you via Weather Underground.  To get this, go to the Weather Underground home page and look at the full forecast for your location.  If you have a personal weather station, you can just substitute KVTSTARK3 with your PWS station id.  If not, then look at the URL of the full forecast, which will be something like this:

https://www.wunderground.com/q/zmw:05487.1.99999?sp=KVTBRIST11

and then replace the part of the URL including and after the q (q/zmw:05487.1.99999?sp=KVTBRIST11 in the above example) with q/pws:KVTSTARK3, where KVTSTARK3 is your PWS station id.

Test out the URLs in your browser to make sure you're getting back valid JSON, and then you're ready to move on to the next step.
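If you'd rather script that sanity check than eyeball the browser output, here's a minimal Python sketch.  The base URL matches the classic WU API pattern used above, but the helper names are my own illustration, not anything Weather Underground provides:

```python
import json

# Classic Weather Underground API base (check against your working browser URL)
API_BASE = "http://api.wunderground.com/api"

def wu_url(api_key, feature, station_id):
    """Build a WU API URL for a personal weather station.
    feature is 'forecast' or 'alerts'; station_id is e.g. 'KVTSTARK3'."""
    return "%s/%s/%s/q/pws:%s.json" % (API_BASE, api_key, feature, station_id)

def looks_like_json(text):
    """Quick sanity check that a response body parses as JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False
```

Fetch each URL (with urllib or similar) and run the body through looks_like_json before wiring anything into Lambda.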

Setup an Amazon Lambda function

Amazon Lambda functions provide a quick and easy way to implement snippets of code that are only executed when called.  They are an efficient way to handle API implementations without having a full-blown server running, and are charged on a per-use basis.

The assumption here is that you've already set up an AWS account.  From the AWS console, go to the Lambda service.  You'll want to create a blank Lambda function without any triggers at this time.  Call it something like getWUForecast and use the Node.js 4.3 runtime.  For the inline code, you can use the following snippet:

'use strict';

console.log('Loading function');

exports.handler = (event, context, callback) => {

    var http = require('http');
    var alertsurl = "";   // your WU alerts URL from above
    var forecasturl = ""; // your WU forecast URL from above

    // Get alerts first
    http.get(alertsurl, function(res) {
        var rawData = "";
        res.on('data', (chunk) => {
            rawData += chunk;
        });
        res.on('end', () => {
            var alert = JSON.parse(rawData);

            // Now do the forecast portion
            doForecast(http, forecasturl, alert, function(obj) {
                callback(null, obj);
            });
        });
    }).on('error', function(e) {
        console.log("Got error: " + e.message);
        context.done(null, 'FAILURE');
    });
};

// Handle forecast API call
function doForecast(http, url, alert, callback) {

    // Add in the alert to the beginning of the flash message
    var alertObj = null;
    if (alert.alerts.length > 0) {
        alertObj = {
            "uid": "00000000-0000-1000-0000-000000000001",
            "updateDate": new Date().toISOString(),
            "titleText": alert.alerts[0].description,
            "mainText": alert.alerts[0].message,
            "redirectionUrl": ""
        };
    }

    // Get the info from Weather Underground
    http.get(url, function(res) {
        var rawData = "";
        res.on('data', (chunk) => {
            rawData += chunk;
        });
        res.on('end', () => {
            var forecast = JSON.parse(rawData);
            // Put together the next 4 forecast periods
            // TBD: Ideally we should check that the array actually has 4 items
            var curForecast = "The current forecast for " + forecast.forecast.txt_forecast.forecastday[0].title;
            curForecast += " calls for ";
            curForecast += forecast.forecast.txt_forecast.forecastday[0].fcttext + " ";
            curForecast += "For " + forecast.forecast.txt_forecast.forecastday[1].title;
            curForecast += " " + forecast.forecast.txt_forecast.forecastday[1].fcttext + " ";
            curForecast += "For " + forecast.forecast.txt_forecast.forecastday[2].title;
            curForecast += " " + forecast.forecast.txt_forecast.forecastday[2].fcttext + " ";
            curForecast += "For " + forecast.forecast.txt_forecast.forecastday[3].title;
            curForecast += " " + forecast.forecast.txt_forecast.forecastday[3].fcttext + " ";
            // Set up the results for the Alexa feed
            var forecastObj = {
                "uid": "00000000-0000-1000-0000-000000000002",
                "updateDate": new Date().toISOString(),
                "titleText": forecast.forecast.txt_forecast.forecastday[0].title,
                "mainText": curForecast,
                "redirectionUrl": ""
            };

            var obj = null;
            if (alertObj !== null) {
                obj = [alertObj, forecastObj];
            } else {
                obj = [forecastObj];
            }
            callback(obj);
        });
    }).on('error', function(e) {
        console.log("Got error: " + e.message);
    });
    console.log('end request to ' + url);
}

After you've got the inline code, you also need to set up some config info.  Under Role, select Create new role from template(s).  Give the role a name, such as WULambdaRole, and choose Simple Microservice permissions as the template.  Everything else you can leave at the defaults, then Next and Create Function.

You're now ready to integrate the lambda function to the API.

Create an Amazon API interface

Go to the Amazon console and navigate to the API Gateway service.  From there select Create API and give it a name such as WeatherUndergroundAPI.  Select Create API to create it.  Now select the root of the API (/) and, under Actions, select Create Method.  Select a GET method and the Lambda region where you created your Lambda function (probably us-east-1 if you didn't specify anything different before).  Enter the Lambda function name you created above (getWUForecast in the example) and save it.

Once the saving is complete, click on the GET method and then the TEST button to test it out.  If all is well you'll get a 200 status code and a JSON response that is formatted for Alexa.
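If you want to double-check that JSON outside the console, here's a small Python sketch that validates the Flash Briefing fields the Lambda code above emits.  The helper and field list are my own illustration of the feed format:

```python
import datetime

# Fields each Flash Briefing feed item carries in the Lambda code above
REQUIRED_FIELDS = ("uid", "updateDate", "titleText", "mainText", "redirectionUrl")

def validate_feed(items):
    """Return (ok, problems) for a list of flash-briefing items."""
    for item in items:
        missing = [f for f in REQUIRED_FIELDS if f not in item]
        if missing:
            return False, missing
        try:
            # updateDate should be an ISO-8601 UTC timestamp, like the
            # new Date().toISOString() value the Lambda emits
            datetime.datetime.strptime(item["updateDate"], "%Y-%m-%dT%H:%M:%S.%fZ")
        except ValueError:
            return False, ["updateDate"]
    return True, []
```

Paste the GET response body into json.loads and run it through validate_feed before moving on to the Alexa skill.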

Now you'll need to deploy the API.  Click on Actions, then Deploy API.  Give it a new stage name of Production and then deploy it.  Make note of the Invoke URL, which will be something like this:

Invoke URL:

You'll need that URL to link to the Alexa Skill.

Setup the Alexa Skill

Now you'll need an Amazon Developer account to create the Alexa skill in.  Go to the developer Alexa skills page and Add A New Skill.  You'll want to select a Skill Type of Flash Briefing Skill API and then give it a name, such as Weather Underground Skill.  Click Next through until you get to the Configuration tab, and enter a custom error message, something like "The Weather Underground Skill is not currently available".

Now click on Add A New Feed and enter a preamble to describe the feed, such as "Here is your weather briefing for Starksboro, VT".  Name the feed WeatherUnderground with a content type of Text.  Select a genre of Weather and then enter the API Invoke URL from above in the URL field.  Click Save and it will validate the link and continue on.

Next, on the Test tab, flip the switch to ON so that you can integrate it with your Alexa.  Continue through and Save your skill.

Don't worry about publishing the skill, as this is just setting it up for your own personal use.  If you publish it, you run the risk of using up your Weather Underground API calls pretty quickly, as everyone will be using your API key then.

Turn on the skill

Finally, you need to turn the skill on in Alexa.  Go to the Alexa app and navigate to Skills and then Your Skills.  The new skill will show up in the list; just click on it and enable it.  After it's enabled, you can go to Manage in Flash Briefing to turn it on and set the order in which it shows up.  When it's ready, you can just go to Alexa and say "Give me my Flash Briefing" and it should all work.

That's it.  Hope this helps you in setting up a simple Alexa skill and doing some integration to Weather Underground!

Tuesday, November 22, 2016

AMD Radeon™ Software AMDGPU-PRO for Ubuntu 16.10

Just downloaded the latest version of AMDGPU-PRO driver and ran into some issues installing this on Ubuntu 16.10.  When I tried to install, I got the following error message:

root@miner01:~/downloads/amdgpu-pro-16.40-348864# ./amdgpu-pro-install
tee: /etc/aptsources.list.d/amdgpu-pro.list: No such file or directory
Turns out, there seems to be a problem with the source_list function.  Editing amdgpu-pro-install and changing the function from:
function source_list() {
        local dir etc sourceparts
        eval $(apt-config shell dir Dir)
        eval $(apt-config shell etc Dir::Etc)
        eval $(apt-config shell sourceparts Dir::Etc::sourceparts)
        echo ${dir}${etc}${sourceparts}/amdgpu-pro.list
}

and editing the echo line to add the missing slashes, like this:

        echo ${dir}/${etc}/${sourceparts}/amdgpu-pro.list

seems to fix the problem.  Hopefully this will help someone else out in the future.

Monday, March 21, 2016

GDS Technologies - Energy from Water?

I've been watching GDS Technologies for a while now, and I am intrigued with the idea of generating power using their system and water alone.  To me, it sounds too good to be true, although the underlying technology has been investigated for a while now.

Just recently, their website had the banner:

The first 2500 units of the GDS5000 will be released and ready for delivery by July 5, 2016

But as of today (March 21, 2016) they removed it from their website.  Does that mean the technology does not work, or is it just a delay in the manufacturing process?

The other thing that concerns me is their disclaimer:

Our generators are for emergency backup use only. For warranty purpose, maximum run time is 4 continuous hours per day.

It makes me wonder if it's just a glorified battery that runs for 4 hours and then is recharged?

I'm optimistic about energy generation and alternatives, but this company does raise a lot of questions...

Wednesday, October 29, 2014

Netflix Asgard 1.5 Deployments

With the upgrade to v1.5 of Netflix Asgard, the API for deployments has changed and not all of the old endpoints exist (specifically /cluster/deployment, for one).  Because of this, we have had to upgrade our deployment plan to use the new APIs.  However, there does not seem to be a lot of documentation out there for the new APIs, so I thought I'd put together some information in hopes it might help others in the future.

The primary API endpoint for deployments in 1.5 is:

http://<host>/<region>/deployment

where <host> is the host, and <region> is the EC2 region, such as us-west-2.  So a full deployment endpoint might look something like this:

http://localhost:8080/us-west-2/deployment
Steps for a deployment are:
  1. Prepare for a deployment
  2. Start a deployment

Prepare for a deployment

Endpoint: http://<host>/<region>/deployment/prepare?id=<asg>

This gets the ASG JSON information, which can be used in the deployment process.

Start a deployment

Endpoint: http://<host>/<region>/deployment/start

The deployment consists of "steps".  We've implemented the following:

- Create the new ASG (always starts out with 0 instances)
- Resize it to the appropriate # of instances
- Disable the old ASG
- Delete the old ASG

Each step is self-checking, so if it fails, none of the succeeding steps will execute.

Below is a sample python script to implement this:


import sys
import urllib2
import json
import requests

version = '1.0'

print 'AMI Asgard Deployment Script V' + version

asgardhost = 'localhost:8080'
ec2region = 'us-west-2'
baseurl = 'http://' + asgardhost + '/' + ec2region + '/deployment'
notify = ''

if len(sys.argv) != 3:
    print 'Syntax: ' + sys.argv[0] + ' <ASG Id> <AMI Id>'
    sys.exit(1)

asgid = sys.argv[1]
amiid = sys.argv[2]

print 'Asgard Host: ' + asgardhost
print 'EC2 Region: ' + ec2region
print 'ASG: ' + asgid
print 'AMI to Launch: ' + amiid

# Prepare the deployment: get the current ASG/launch config as JSON
query = baseurl + '/prepare?id=' + asgid
f = urllib2.urlopen(query)
deflcjson = f.read()

deflc = json.loads(deflcjson)

# Launch the new ASG with the new AMI
deflc['lcOptions']['imageId'] = amiid

deflc['deploymentOptions'] = {
    "clusterName": asgid,
    "notificationDestination": notify,
    "steps": [
      { "type": "CreateAsg" },
      { "type": "Resize", "targetAsg": "Next", "capacity": deflc['asgOptions']['minSize'], "startUpTimeoutMinutes": 41 },
      { "type": "DisableAsg", "targetAsg": "Previous" },
      { "type": "DeleteAsg", "targetAsg": "Previous" }
    ]
}

# Start the deployment
posturl = baseurl + '/start'
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
response = requests.post(posturl, data=json.dumps(deflc), headers=headers)

print response
print response.text

Also, be sure you've implemented Eureka and healthchecks for all services.  Asgard waits for both a Eureka UP and a positive healthcheck.

Monday, November 11, 2013

Information Analysis #1

Category: Cell phones

Information Analysis is the study of information and what it represents.  You might also know it under the names data mining, information gathering, intelligence gathering, or intelligence assessment, to name a few.

This blog entry focuses on a method for de-anonymizing cellular usage based on an actor's tendency to use multiple devices for legitimate vs. illegitimate activities.

Scenario: The primary actor (PA) runs a legitimate business and uses cell phone 1 (CP1) for daily business communication.  This phone is registered to the PA.  This takes place in multiple locations throughout the city where the PA lives.  The PA also runs an illegal money laundering operation and uses his legitimate business to cover for it.  In order to keep the two separate, he uses a prepaid cell phone (CP2) for all money laundering business.  The assumption is that a prepaid cell phone cannot be linked back to him.

Task: Given that we suspect the PA is conducting illegal activities, we want to be able to tie those activities back to the PA.

Solution: The solution lies in the use of cellular towers and the logging of information related to phone calls.  Assuming that cell phone activity for a given tower or set of towers can be obtained, a cross reference algorithm is devised which will link the activity of the PA between cell towers and phones.

For example, during any particular day, the PA makes calls on CP1 which is covered by cell sites 449, 2132, and 474.  Since we know that CP1 is registered to the PA, we can track these activities:

Date/Time            Cell Site  Originating Number  Destination Number  Duration (minutes)
2013-11-10 08:02:33  449        802-310-1234        817-467-3311        5
2013-11-10 08:32:18  2132       802-310-1234        802-846-2111        5
2013-11-10 10:12:12  2132       802-310-1234        202-233-3232        5
2013-11-10 13:11:22  474        802-310-1234        802-355-2314        5
2013-11-10 17:30:01  449        802-310-1234        603-453-1234        5

Now, to identify the CP2 usage, we focus on the cell sites that have been used by the PA during this day.  If we plot out the cell-site usage, time, and location, we can create a probability map for the PA: the probability that they are within range of a certain cell site at any given time.  This is based on a simple linear interpolation of probability between any two sites over time, assuming straight-line travel between the cell sites.

Once we have the probability map defined, we can then look at all of the other cell phone calls from those cell sites during the time period and assign a probability that each of those phones is CP2.

The algorithm works as follows: based on the probability that the PA is within range of a given cell site at a given time, each call made from that site at that time is assigned that probability.  If we then sum the probabilities per cell number across all the sites, we can establish which other cell phones have a high probability of being CP2.
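Here's a minimal Python sketch of that summed-probability scoring.  All names and data structures are my own invention, purely for illustration:

```python
from collections import defaultdict

def score_candidates(pa_site_prob, other_calls):
    """
    pa_site_prob: {site_id: probability the PA is in range of that site}
    other_calls:  list of (phone_number, site_id) for all other calls seen
                  on those sites during the window.
    Sums, per phone, the PA-presence probability of each site it called
    from; the highest-scoring phones are the best CP2 candidates.
    """
    scores = defaultdict(float)
    for phone, site in other_calls:
        scores[phone] += pa_site_prob.get(site, 0.0)
    # Rank candidates from most to least likely
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A real implementation would make pa_site_prob time-dependent, but the ranking idea is the same.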

Once CP2 has been identified, a wiretap order can be executed to obtain the information required for prosecution.

Conclusion: This algorithm works best under the following scenarios:
  • The PA travels around enough to use multiple cell sites for both CP1 and CP2
  • The PA uses both cell phones multiple times during the day
  • The PA does not have both cell phones on all the time during the day
Of course, if both cell phones are on all the time during the day, the alternative is to just cross-correlate registrations of the cell phones with the local towers.  Any two that register at or near the same time have a high probability of being with the PA.

While not a 100% solution, this algorithm provides a high probability of locating multiple linked cell phones for specific scenarios.