Thus far, the vast majority of our blog posts have focused on the machine-to-machine opportunities the Internet of Things affords. Today I thought I would show a simple but powerful example of how easy it is to extend that connectedness to one of the tools we use every day--the web browser. And in so doing, I hope to illustrate some of the power of the wot.io data service exchange™.
As you probably already know, virtually all of the modern web browsers offer the ability to create plugins that extend the functionality of the browser. Today we will create a simple extension for Google Chrome that will send some metadata about the current web page off to the wot.io data service exchange™.
Here's a quick video demonstration:
A Peek at the Code
The heart of the extension is just a simple JavaScript file. In fact, the rest of the extension is really just the HTML and CSS for the popup and options pages.
Because one of the protocol adapters that wot.io supports is HTTP, we can implement our "send to wot.io" functionality with a simple AJAX request:
var send = function() {
  chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
    if (tabs.length) {
      var tab = tabs[0],
          xhr = new XMLHttpRequest(),
          msg = JSON.stringify(["link", tab.url, tab.title, tab.favIconUrl])
      xhr.onreadystatechange = function() {
        if (xhr.readyState === 4) {
          document.getElementById('status').textContent = 'Sent successfully'
          setTimeout(function() {
            window.close()
          }, 750)
        }
      }
      xhr.open('PUT', options.url)
      xhr.setRequestHeader('Authorization', 'Bearer ' + options.token)
      xhr.send(msg)
    }
  })
}

var buildUrl = function(options) {
  return 'http://' + options.cluster + '.wot.io/' + options.cluster +
         '/' + options.exchange + '/' + options.meta
}
Connecting People to All the Things...
Cool, right? And apparently useful, too, seeing as there are whole companies whose products do essentially what our simple extension does—create a web-accessible RSS list of bookmarks.
But how does that relate to IoT?
So glad you asked.
Remember back in the last section when I said how convenient it was that wot.io offers adapters for protocols like HTTP? What I didn't point out at the time was that any data resource on the wot.io data service exchange can be referenced via any of those protocols (subject to the authorization permissions of the user's access token, of course).
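To make that concrete, the same PUT our extension performs can be issued from any HTTP client. Here's a minimal sketch in Python using the requests library; the URL and token are placeholders following the buildUrl scheme above:

import json
import requests

# placeholders following the http://<cluster>.wot.io/<cluster>/<exchange>/<meta> scheme
url = 'http://demos.wot.io/demos/exchange/links'
token = '<your-access-token>'   # a bearer token with write permission

resp = requests.put(
    url,
    data=json.dumps(["link", "http://wot.io", "wot.io", None]),
    headers={'Authorization': 'Bearer ' + token})
print(resp.status_code)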
This means that if my data topology contains a resource whose messages are sent through a device management platform like ARM mbed Device Server, ThingWorx, or InterDigital's oneMPOWER Platform, sending data from the web browser to one of their connected devices is as simple as changing a single value in the settings dialog. Same thing with devices or applications connected to a connectivity platform like PubNub.
And of course, any of the other 70+ data services on the wot.io data service exchange™ also get modeled as simple named data resources, making it as easy to send data interactively from a web browser to NGData's Lily Enterprise Hadoop platform as it is to send it to business logic in scriptr or to a device connected to one of the aforementioned device management platforms.
Connecting All the Things to People...
But that's not even all! Because wot.io adapters are bi-directional, we could have just as easily selected the WebSocket protocol instead of HTTP for our Chrome extension's implementation. In that case, we could have still configured it to send data to the wot.io exchange just as before, but we could have also configured it to receive data from the exchange.
Whether that was data directly from devices or data that had been transformed by one or more data services, the possibilities are limited only by the logical data topology and your imagination.
Powerful Abstractions
The point of this post is hardly to claim that a toy browser extension rivals a polished product like Pocket. And of course, it could have just as easily been a web application as a browser extension. Nor was this post even intended to claim that sending IoT data to and from a web browser is novel.
The point of this post is to show how little effort was required to connect a hypothetical, real-world application to and from literally any of the connected data streams that we model on our wot.io data service exchange™ because they are all accessible through unique URLs via any of the numerous supported protocols.
Let that sink in, because it's really powerful.
In the past two parts, we constructed a Docker container suitable for deploying within the context of the Data Service Exchange, and then published the MNIST training set over the wot.io Data Bus. In this third part, we will retrofit our models to accept that training data from the Data Bus. The basic architecture we would like for our data flow looks like this:
We will load the MNIST training data via the training.py we created in the last part, and send it to a mnist bus resource. Then using bindings we will copy that data to both model1 and model2 resources, from which our two models will fetch their training data. The programs model1.py and model2.py will consume the training data, and print out estimates of their accuracy based on the aggregated training set.
This architecture allows us to add additional models or swap out our training data set, without having to fiddle with resource management. As the models can be deployed in their own containers, we can evaluate the effectiveness of each model in parallel, and discard any that we are unhappy with. When dealing with large amounts of training data, this distribution methodology can be especially handy when we are uncertain of the degree to which the choice of training data and test suite influences the apparent fitness of the model.
The code in training.py creates the mnist resource. The branch connecting mnist to model1 is created programmatically through the model1 code:
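In sketch form, it amounts to a small program of state-machine events; the exact wot-python call names and signatures here are assumptions on my part, following the event style described elsewhere in this series:

start([
    (create_resource, 'model1'),          # assert our input resource exists
    (create_binding, 'mnist', 'model1'),  # copy mnist messages onto model1
    (consume_resource, 'model1', train),  # invoke the train callback per message
])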
The act of creating the binding, and then consuming from the bound resource, sets up the remainder of the top branch in the diagram. A similar bit of code, differing only in the resource names, occurs in our second model.
As the wot.io Data Bus uses software-defined routing, this code will ensure that the topology exists when the programs start up. By asserting the existence of the resources and the bindings, the under-the-hood configuration can abstract away the scaling of the underlying system.
In the consume_resource event, we invoke a train callback which runs the model against the training data. For each of the models, the training code is largely the same; the two differ only in the network definition being trained.
The behavior of each is as follows (sketched in code after this list):

- receive an image and its label from the bus
- convert the image into a flattened array of 28*28 32-bit floating point values
- scale the 8-bit pixel values to the range [0,1]
- convert the label to a one-hot vector
- save the image data and label vector for future accuracy testing
- run a single training step on the image
- every 100 iterations, test the accuracy of the model and print it to stdout
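For model1, a minimal sketch of those behaviors might look like the following, using the softmax regression from the beginner TensorFlow tutorial; the accuracy bookkeeping is elided, and the wiring that delivers image_bytes and label from the bus is assumed rather than shown:

import numpy as np
import tensorflow as tf

# softmax regression, as in TensorFlow's beginner MNIST tutorial
x = tf.placeholder(tf.float32, [None, 784])    # flattened 28*28 images
y_ = tf.placeholder(tf.float32, [None, 10])    # one-hot labels
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.initialize_all_variables())        # pre-1.0 initializer

def train(image_bytes, label):
    # flatten to a 1x784 array of 32-bit floats, scaled from 8-bit to [0,1]
    image = np.frombuffer(image_bytes, dtype=np.uint8)
    image = image.astype(np.float32).reshape(1, 784) / 255.0
    # convert the label to a one-hot vector
    one_hot = np.zeros((1, 10), dtype=np.float32)
    one_hot[0, int(label)] = 1.0
    # run a single training step on this one image
    sess.run(train_step, feed_dict={x: image, y_: one_hot})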
From outside the Docker container, we can inspect the output of the model by invoking the docker logs command on the model's container to see the output of the program. As long as the CMD instruction in the Dockerfile was of the form

CMD python model1.py

all of the output would be directed to stdout. As the model programs work as wot.io Data Bus consumers and never exit, these commands are properly daemonized from a Docker perspective and do not need to be run in a background process.
We could further modify these models to publish their results to another bus resource or two by adding a write_resource call to the training callback, making the accuracy data available for further storage and analysis. The code for doing so would mirror the code found in training.py for publishing the mnist data to the bus in the first place. This accuracy data could then be stored in a search engine, database, or other analytics platform for future study and review. This capability makes it easy to run many different models against each other and build up a catalog of results to help guide the further development of our machine learning algorithms.
All of the source code for these models is available on GitHub and corresponds to the TensorFlow tutorials.
This post is part 3 of our series on connecting gaming devices with the wot.io data service exchange™.
Storing Gameplay Event Data with wot.io
In the last post we saw that after retrofitting an open source chess application to connect via PubNub, we could route gameplay messages from connected gaming devices to the MongoDB No-SQL datastore. Even when the messages were generated or transformed by other data services like scriptr, the logic required to store messages in MongoDB remained loosely coupled to the implementation of any single data service.
Now that we have taken our IoT event data and stored it in a suitable datastore, let's look into how we might start analyzing and visualizing it with another wot.io data service.
Custom Reporting with Pentaho
Pentaho is one of a number of wot.io data services specializing in extracting value from data at rest like the event data now captured in our MongoDB datastore.
Pentaho is an excellent choice for modeling, visualizing, and exploring the types of data typically found in IoT use cases. And its ability to blend operational data with data from IT systems of record to deliver intelligent analytics really shines in the context of the wot.io data service exchange™, where multiple datastores and adapters to different enterprise software systems are not uncommon.
Just as you might imagine in a connected gaming use case, we wanted to create reports showing gameplay statistics for individual users, as well as aggregate reports across the entire gaming environment. Rather than write about it, have a look at this video:
One long-overdue lesson that the Internet of Things is teaching younger engineers is that there are a whole host of useful protocols that aren't named HTTP. (And don't panic, but in related news, we have also received reports that there are even OSI layers beneath 7!)
Since we have posted about wot.io's extensive protocol support before (for example, here and here), today I thought I'd share a quick video demonstrating that protocol interoperability using MQTT. Enjoy.
Photo Credit: "Networking" by Norlando Pobre is licensed under CC BY 2.0
Data Service Providers
In part 1 of this series, we went over the various ARM devices that were combined with open data from the London Datastore to represent the connected device side of our demo. In part 2, we described how to employ one or more device management platforms like Stream Technologies' IoT-X Platform or ARM mbed Device Server to manage devices and send their sensor readings onto the wot.io data service exchange™.
Now, let's see how we can route all that valuable device data to some data service providers like scriptr, ThingWorx, and Elasticsearch to extract or add business value to the raw IoT data streams.
Dataflow Review
Recall that back in part 1 we started with the MultiTech model car, modified to include a MultiConnect® mDot LoRaWAN module with an accelerometer sensor. The sensor sent the accelerometer data to a MultiTech MultiConnect® Conduit™ gateway using the Semtech LoRa™ low-power, long-range wireless RF interface. The Conduit was registered with Stream's IoT-X platform.
Since wot.io is fully integrated with IoT-X, making the device data available on the wot.io exchange where it could be sent to any of the data services was as easy as setting up the data routes.
scriptr for Business Logic
Part of the reason for measuring the accelerometer readings in the smart vehicle was to detect if it has been involved in an accident. Some of the numerous and obvious opportunities for such intelligence include insurance, emergency response dispatch, traffic routing, long term traffic safety patterns, etc.
However, in order to translate the raw sensor readings into that business intelligence, we have to determine whether there was a sufficiently rapid deceleration to indicate an accident.
Of the many wot.io data services that could be employed, scriptr is an excellent choice for embodying this type of business logic.
scriptr is a cloud-based, hosted JavaScript engine with a web-based Integrated Development Environment (IDE). Since we can route wot.io data streams to specific scriptr scripts, we can use it to write our simple deceleration filter:
Notice that the script receives our messages containing the raw X,Y,Z-plane acceleration readings. After parsing these parameters, we do a simple check to determine whether any of them exceed a given threshold. If so, we return a cheeky message back onto the wot.io data service exchange.
Notice that the message we returned is a simple JSON object (although it could have been anything--XML, plain text, or even binary data). Furthermore, it does not contain any information about the destination. It simply returns a value.
That is, our script does not need to know where its response will be routed. Indeed, it may be routed to multiple data services! Loosely coupling data services together in this fashion makes for a much more flexible and resilient architecture.
bip.io for Web API Automation
Next, we chose to route any warning messages returned from our scriptr script to a bip.io workflow (known as a "bip") that we named tweet, so that we could notify the appropriate parties of the "accident". Although we called it tweet, bip.io bips can easily perform complex workflows involving any of its 60+ data service integrations (known as "pods").
For the demo, we kept our bip simple, consisting of only twitter and email pods. But you can readily imagine how, given the conditional logic and branching capabilities of bip.io, much more complex and interesting workflows could be created with ease. For example, device events could be stored and visualized in keen.io, sensor data could be appended to a Google spreadsheet involving complex functions and charts, or text messages could be composed from a template and sent via SMS through Twilio.
Since we authenticated the Twitter pod through our wot.io developer account @wotiodevs, whenever the data from the accelerometer is determined by scriptr to have exceeded the safety thresholds, we can see our tweets!
ThingWorx as an Application Enablement Platform
ThingWorx is a full-featured IoT platform that enables users to collect data from a variety of sources and services and build out applications to visualize and operate on that data.
In our case, we took the real-time location data originating from the mobile devices being managed by Stream's IoT-X and ARM's mbed Device Server platforms and routed it through the wot.io data service exchange to our visualization (or "mashup") application in ThingWorx.
We also routed traffic camera and traffic sign data from the London Datastore through wot.io and into the same ThingWorx mashup.
To make the data useful, our mashup includes a Google Map widget on which we plot, in real time, each mobile device, camera, and sign with a different icon based on their current locations.
Users can interact with any of these data points: clicking on a camera icon, for example, will display an image of what is actually being captured by that traffic camera at the given intersection. Below, I selected a camera that is located on the River Thames and has the Tower of London with Big Ben in its view!
While it's fun to sightsee around London, in a Smart City we can also imagine ways to use these cameras and digital signs to help us efficiently move assets (usually vehicles!) through a congested downtown area. For example, if we zoom into a traffic-heavy portion of London, we can view the camera feeds and digital road signs in that area. Here, we can see that this sign's text currently displays a message that this route is closed for resurfacing.
And the camera in the area even shows how traffic cones are being set up to move traffic away from the roadwork!
And since we already know that with wot.io, messages can be routed to multiple data services as easily as to a single service, displaying the messages on a map is hardly the end of the story. Just as one trivial example, imagine correlating the timing and text and locations of digital signs with the resulting traffic disruptions to optimize how best to announce construction work.
Elasticsearch & Kibana
Finally, we also routed the telemetry messages from all those cellular, satellite, and LPWA-based mobile devices embedded in vehicles traveling around the city of London through wot.io and into an instance of Elasticsearch and Kibana to create a real-time heatmap visualization of the number of managed devices by geographic region.
Elasticsearch is a powerful, distributed, real-time search and analytics engine. Traditionally applied to numeric or textual data (as we have discussed previously), Elasticsearch shines in geospatial indexing scenarios as well. In this case, our heatmap is colored based on the number of devices currently reporting in each geographic subregion.
Conclusion
As London and other major cities begin to connect, open, and share all the data their IoT devices collect, wot.io allows for the creation and extraction of real, actionable business value from that data.
Whether through its many options for device management platforms from the likes of ARM and Stream Technologies, or its support for any of the hardware devices through numerous protocol adapters, or its support for web-based data feeds like the London Datastore, or its ability to flexibly route data to multiple data services like scriptr, bip.io, ThingWorx, or Elasticsearch, wot.io is clearly the data service exchange™ for connected device platforms.
London Knightsbridge Street Photo Credit: By Nikos Koutoulas [CC BY 2.0], via Flickr
Training Data
In the last part, we created an environment in which we could deploy a Tensorflow application within the wot.io Data Service Exchange (DSE). Building on that work, this post will cover distributing training data for our Tensorflow models using the wot.io Data Bus.
The training data set we're going to use is the MNIST Database maintained by Yann LeCun. This will allow us to build on TensorFlow's tutorials using the MNIST data:

- MNIST For ML Beginners
- Deep MNIST for Experts
- TensorFlow Mechanics 101

And as this is not going to be a TensorFlow tutorial, I highly recommend you read all three at some point. Let's look at how we're going to use this data.
Architecture of a Solution
The system that we're going to build consists of a number of components:
- a Training Data Generator
- two Production Data Sources
- four Machine Learning Models
- three Consumer Applications
Between the components, we will use the wot.io Data Bus to distribute data from each of the training data set and the production data sources to the different models, and then selectively route the model output to the consumers in real time. Due to the nature of the wot.io DSE, we can either build these applications inside of the DSE security context, or host the applications externally going through one of the authenticated protocol adapters. For purposes of this article, we will leave this design decision as an exercise for the reader.
For my sample code, I'm going to use the AMQP protocol adapter for all of the components with the wot-python SDK. This will make it easy to integrate with the Tensorflow framework, and will make it possible to reuse code explained elsewhere.
Training Data Generator
The first component we need to build is a Training Data Generator. This application will read a set of data files and then send individual messages to the wot.io Data Bus for each piece of training data. The wot.io Data Bus will then distribute them to each of our machine learning models.
As our ML models will be built in Docker containers in the wot.io DSE, we can treat each instance of a model as a disposable resource. We will be able to dynamically spin them up and down with wild abandon, and just throw away our failed experiments. The wot.io DSE will manage our resources for us, and clean up after our mess. The Training Data Generator will allow us to share the same training data with as many models as we want to deploy, and we don't have to worry about making sure each model gets the same or similar data.
We can do our development of the application inside an instance of the wotio/tensorflow container we made in the last tutorial.
docker run -i -t wotio/tensorflow
This will drop us into a bash prompt, which we can then use to develop our training data generator. Next we'll set up an isolated Python environment using virtualenv so that we don't pollute the system Python while we're developing our solution. It will also make it easier to capture all of the dependencies we've added when creating a new Dockerfile.
virtualenv training
We can select this environment by sourcing the training/bin/activate file:
. training/bin/activate
We'll build the rest of our application within the training directory, which will keep our code contained as well. You can check out the code from GitHub using:
git clone https://github.com/wotio/wot-tensorflow-example.git
The MNIST data is contained in a couple of gzipped archives:
- train-images.idx3-ubyte.gz
- train-labels.idx1-ubyte.gz
You can think of these files as a pair of parallel arrays, one containing image data, the other an identifier for each image. The images contain pictures of the numbers 0 through 9, and the labels take on those same values. Each training file has a header of some sort:
Image data file: a 32-bit big-endian magic number, a 32-bit image count, and 32-bit row and column dimensions, followed by one unsigned byte per pixel.
Label data file: a 32-bit big-endian magic number and a 32-bit label count, followed by one unsigned byte per label.
The goal will be to load both files, and then generate a sequence of messages from the images selected at random, each sent with its label as a meta-data attribute of the image data. The models will interpret messages with meta-data as training data, and will invoke their training routine on each such message. If a message doesn't have a meta-data label, it will instead be run through the model, and the model will forward the result to the consumer with the most likely label attached in the meta-data field. In this way, we can simulate a system in which production data is augmented by machine learning, and then passed on to another layer of applications for further processing.
To read the image file header we'll use a function like:
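Something like the following sketch does the job, unpacking the four big-endian 32-bit fields of the header with Python's struct module:

import struct

def read_image_header(stream):
    # magic number, image count, rows, columns -- all big-endian uint32
    magic, count, rows, cols = struct.unpack('>IIII', stream.read(16))
    return (count, rows, cols)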
And to read the label file header we'll use:
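The same idea, with only two header fields this time (continuing the sketch above):

def read_label_header(stream):
    # magic number and label count -- both big-endian uint32
    magic, count = struct.unpack('>II', stream.read(8))
    return (count,)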
Both of these functions take a stream, and return a tuple with the values contained in the header (minus the magic). We can then use the associated streams to read the data into numpy arrays:
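In sketch form, assuming the headers have already been consumed from each stream:

import numpy as np

def read_images(stream, count, rows, cols):
    raw = np.frombuffer(stream.read(count * rows * cols), dtype=np.uint8)
    return raw.reshape(count, rows * cols)

def read_labels(stream, count):
    return np.frombuffer(stream.read(count), dtype=np.uint8)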
By passing in the respective streams (as returned from the prior functions), we can read the data into two parallel arrays. We'll randomize our output data by taking the number of elements in both arrays and shuffling the indexes like a pack of cards:
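A minimal version of that shuffle:

import random

def shuffled_indexes(count):
    indexes = list(range(count))
    random.shuffle(indexes)    # shuffle like a pack of cards
    return iter(indexes)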
With this iterator, we are guaranteed not to repeat any image, and will exhaust the entire training set. We'll then use it to drive our generator in a helper function:
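A sketch of such a helper, pairing each image with its label in the shuffled order:

def training_pairs(images, labels, indexes):
    # yields (image, label) tuples without ever repeating an index
    for i in indexes:
        yield (images[i], labels[i])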
Now we come to the tricky bit. The wot-python SDK is built on top of Pika, which has a main program loop. Under the hood, we have a large number of asynchronous calls that are driven by the underlying messaging. Rather than modeling this in a continuation passing style (CPS), the wot-python SDK adopts a simple indirect threading model for its state machine:
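The machinery boils down to a few lines around a deque. Here is a sketch of the idea, not the SDK's verbatim source, with eval spelled eval_ to avoid shadowing Python's builtin:

from collections import deque

fsm = deque()    # the hidden finite state machine queue

def eval_(program):
    # prepend the passed array of (function, args...) tuples so that
    # a sub-program runs like a subroutine call
    fsm.extendleft(reversed(program))
    _next()

def _next():
    # remove the head of the fsm deque and apply it, if anything remains;
    # an empty deque means processing terminates
    if fsm:
        head = fsm.popleft()
        head[0](*head[1:])

def start(program):
    eval_(program)    # inject the initial state of the machine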
Using this interpreter, we'll store our program as a sequence of function calls modeled as tuples stored in an array. start will inject the initial state of our finite state machine into a hidden variable by calling eval. eval prepends the passed array to the beginning of the hidden fsm deque, which we can exploit to mimic subroutine calls. The eval function then passes control to the _next function, which removes the head from the fsm deque and calls apply on the contents of the tuple, if any.
The user-supplied function is then invoked, and one of three scenarios can happen:

- the function calls eval to run a subroutine
- the function calls _next to move on to the next instruction
- the function registers an asynchronous callback which will in turn call eval or _next

Should the hidden fsm deque become empty, processing terminates, as no further states exist in our finite state model.
This technique for programming via a series of events is particularly powerful when we have lots of nested callbacks. For example, take the definition of the function step in the training program:
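Its shape is roughly this (a sketch: indexes, images, and labels come from the earlier snippets, and write_resource is described just below):

import sys

def step():
    try:
        i = next(indexes)
    except StopIteration:
        eval_([(sys.exit, 0)])    # out of indexes: schedule an exit
        return
    eval_([
        (write_resource, 'mnist', images[i], labels[i]),  # schedule the write
        (step,),                                          # then recurse
    ])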
It grabs the next index from our randomized list of indexes, and if there is one, it schedules a write to a wot.io Data Bus resource followed by a call to recurse. Should we run out of indexes, it schedules an exit from the program with status 0.
The write_resource method is itself defined as a series of high-level events:
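In sketch form (the sub-event names here are assumptions):

def write_resource(resource, *message):
    eval_([
        (create_resource, resource),     # ensure the resource exists
        (send_data, resource, message),  # then send the data to it
    ])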
wherein it first ensures the existence of the desired resource, and then sends the data to that resource. The definitions of the others are likewise high-level events evaluated by the state machine, with the lowest levels being asynchronous calls whose callbacks invoke _next to resume evaluation of our hidden fsm.
As such, our top-level application is just an array of events passed to the start method:
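Something like this, with illustrative event names standing in for the real ones:

start([
    (read_headers,),     # parse both IDX headers
    (read_arrays,),      # load the two parallel numpy arrays
    (shuffle_indexes,),  # randomize the index iterator
    (step,),             # pump messages onto the bus until exhausted
])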
By linearizing the states in this fashion, we don't need to pass lots of different callbacks, and our intended flow is described in data as program. It doesn't hurt that the resulting Python looks a lot like LISP, a favorite of ML researchers of ages past, either.
A Simple Consumer
To test the code, we need a simple consumer that will simply echo out what we got from the wot.io Data Bus:
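A sketch of the consumer, with the exact stream_resource signature assumed:

def echo(message, label):
    # print the payload and its meta-data label to stdout
    print(message, label)

start([
    (stream_resource, 'mnist', echo),
])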
You can see the same pattern as with the generator above, wherein we pass a finite state machine model to the start method. In this case, the stream_resource method takes a resource name and a function as arguments, and invokes the function on each message it receives from the given resource. The callback simply echoes the message and its label to stdout.
With this consumer and generator we can shovel image and label data over the wot.io Data Bus, and see it come out the other end. In the next part of this series, we will modify the consumer application to process the training data and build four different machine learning models with Tensorflow.
One of the early inspirations for the wot.io Data Service Exchange was the need to deploy and evaluate multiple machine learning models against real time data sets. As machine learning techniques transition from an academic realm to enterprise deployments, the realities of operational costs tend to inform design decisions more than anything else, with the key forcing function becoming percentage accuracy per dollar. With this constraint in place, the choice of model often becomes a search for one that is "good enough", or which model provides adequate accuracy for minimal operational cost.
To make this concept more concrete, we can build a simple distributed OCR system using the wot.io data bus to transmit both training and production data. The wot.io Data Service Exchange currently provides access to services like facial and logo detection services through Datascription, and visual object recognition and search through Nervve. But for this demonstration, we will connect a demo application written using Google's TensorFlow machine learning library. This will allow us to demonstrate how to build and deploy a machine learning application into the wot.io Data Service Exchange. As TensorFlow is released under the Apache 2 license, we will also be able to share the code for the different models we will be testing.
Getting Started With Python
The wot.io Data Service Exchange supports a wide range of languages and protocol bindings. Currently, we have library support for JavaScript, Erlang, Python, Java, C/C++, and Perl. Since TensorFlow is written in Python, our demo application will use the wot-python bindings. These bindings interface with the AMQP protocol adapter for the wot.io data bus, and expose the data bus's resource model on top of the AMQP interface. To install the bindings, we'll first create a virtualenv environment in which we'll install our dependencies:
Linux
- virtualenv wotocrdemo
- source wotocrdemo/bin/activate
- pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
- git clone https://github.com/wotio/wot-python
- cd wot-python
- python setup.py install
Mac OS X
- virtualenv wotocrdemo
- source wotocrdemo/bin/activate
- pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
- git clone https://github.com/wotio/wot-python
- cd wot-python
- python setup.py install
This will create a virtualenv environment containing TensorFlow and the wot-python bindings for local development. While this can be useful for testing, in the production deployment we will use a Docker container. The wot.io Data Service Exchange can deploy Docker containers and manage their configuration across data centers and cloud environments. As the wot.io Data Service Exchange has been deployed in Rackspace, IBM SoftLayer, and Microsoft Azure, it is useful to be able to produce a production software artifact that works across platforms.
Creating a Dockerfile
We will use the Linux version as the basis for creating a Docker container for our production application. To start, we'll base our Dockerfile upon the sample code we make available for system integrators: https://github.com/wotio/docker-example. To build this locally, it is often useful to use VirtualBox, Docker, and Docker Machine to create a Docker development environment. If you are using Boot2Docker on Mac OS X, you will need to tell Docker Machine to grow the memory allotted to the VM itself:

docker-machine create wotio -d virtualbox --virtualbox-memory "12288"

As the default 1GB isn't large enough to compile some of TensorFlow with LLVM, I had success with 12GB; YMMV. Once you have each of these installed for your platform, you can download our sample build environment:
- git clone https://github.com/wotio/docker-example
- cd docker-example
- docker-machine create wotio -d virtualbox --virtualbox-memory "12288"
- eval $(docker-machine env wotio)
- make tensorflow
This will build a sequence of Docker container images, and it is on top of wotio/python that we will install TensorFlow. At this point you'll have a working container suitable for deploying into the wot.io Data Service Exchange. In the next blog post we'll build a sample model based on the MNIST data set and train multiple instances using the wot.io Data Bus.
Recall that in a previous post, we discussed a collection of ARM-based devices and open data sources that comprised the basis for wot.io's demonstration at Mobile World Congress 2015 last Spring. Today we will continue our deeper look into the technology behind that demonstration by examining the device management platforms that were employed.
Managing Devices with Stream Technologies IoT-Xtend™
In a large organization or municipality, the issue of just simply managing all of the connected devices is usually the first challenge one encounters before all of the resulting data can be collected and analyzed.
This is where Stream Technologies comes in. Their IoT-Xtend™ Platform is an award-winning connected device management platform designed to monitor, manage, and monetize device endpoints, manage subscriptions, and provide robust billing and advanced data routing. Xtend provides multi-network technology capability in one comprehensive platform, and serves and supports complex multi-tenant and multi-tiered sales channels. Its web-based user interface can be used to view which devices are actively transferring data and allows for the management, routing, and metering of data.
Previously we described how we created embedded applications to run on each of the demonstration devices so they could connect to our device management platform.
In fact, what we were doing was leveraging the extensive device integration capabilities of Stream and their IoT-Xtend™ Platform. Specifically, Stream has integrated numerous cellular, satellite, LPWA, and Wi-Fi devices into their platform. In cases like our demonstration, where an integration did not already exist, creating a new integration was simply a matter of sharing the schema to which our messages would conform and the communication protocol they would use.
So the notification messages being sent from the devices to IoT-Xtend looked something like this for devices containing GPS sensors (like the u-blox C027):
{
  "sim_no": SIM_ID,
  "quality_idx": QUALITY,
  "gps_time": TIME,
  "dec_latitude": LAT,
  "dec_longitude": LON,
  "number_sat": SATELLITES,
  "hdop": HDOP,
  "height_above_sea": HEIGHT
}
When the device is powered on, it connects to Stream using its LISA-C200 CDMA cellular modem and begins to send its location data from the GPS receiver. Because its SIM card has been provisioned and managed in IoT-X, the device telemetry data is received by and made visible in the IoT-X web-based user interface.
Connecting Stream IoT-Xtend™ to the wot.io Data Service Exchange
wot.io and Stream have fully integrated the Stream IoT-Xtend™ device management platform with the wot.io data service exchange™. This means that notification and telemetry data from the managed devices can be routed to and from any wot.io data services for analysis, visualization, metering and monitoring, business rules, etc.
In part 3 of this series, we will explore a few of the many data services demonstrated at Mobile World Congress.
ARM mbed Device Server
Of course, wot.io is all about choice and selecting the best-of-breed services fit for specific needs. As such, one might be interested in exploring one of the other device management platforms on the data service exchange.
One such option is ARM mbed Device Server. We have already written extensively about our close integration of ARM mbed Device Server to the wot.io data service exchange.
Whether you need to bridge the CoAP protocol gap, combine ARM mbed Device Server with other connectivity or device management platforms, manage device identities and authorizations, send complex command-and-control messages to devices, simply subscribe to device notifications, or host production-scale deployments and handle your data service routing needs, wot.io has you covered.
Connecting Stream IoT-Xtend™ with ARM mbed Device Server
In addition to a direct integration between IoT-Xtend™ and wot.io, the devices managed in Stream Technologies IoT-Xtend™ platform can also be integrated with other device management platforms. In particular, at Mobile World Congress we demonstrated a configuration in which devices were registered in both the Stream IoT-Xtend™ and ARM mbed Device Server platforms.
Real-world IoT solutions will often involve multiple device management platforms. In such situations, it is often desirable to consolidate the number of interfaces used by operations staff, or to leverage the unique strengths of different platforms. For example, an organization or municipality deploying a smart city initiative may elect to use IoT-Xtend™ for its SIM-management and billing-related capabilities while standardizing on ARM mbed Device Server for device provisioning and security. Or as another example, perhaps they would like to standardize on ARM mbed Device Server, but a vendor or partner uses another device management platform like IoT-Xtend™.
wot.io provides enterprise customers with the powerful ability to interoperate between and across device management platforms, sharing both device data and command and control messages with data services in either direction.
Next Time...
In our final post in this series, we will discuss some of the data services that were used to analyze, visualize, and manipulate the Smart City data coming from the devices managed by IoT-Xtend™ and ARM mbed Device Server, and the London Datastore project.
wot.io's Container as a Service (CaaS) not only enables the deployment of applications such as data services in public or private cloud infrastructure, but also enables deploying a data service directly onto a laptop. This is achieved using the power of Docker and the wot.io Thingworx container.
Docker has built Docker Toolbox to simplify the install on your laptop. Using Docker Toolbox, we can pull and run the wot.io ThingWorx container, which we created for provisioning ThingWorx instances for the LiveWorx hackathon in 2015. This enables you to deploy a full-featured ThingWorx instance on your laptop.
The video here demonstrates installing Docker Toolbox, logging into Docker Hub, downloading and running the wot.io Thingworx container. The video has been sped up, but the overall process was approximately 5 minutes.
You saw in the video a great example of deploying a containerized version of ThingWorx on a laptop. As Docker containers are portable, the same container can just as easily be deployed in the cloud as it was on my laptop. To do so, simply spin up a cloud VM running Docker, log in to Docker Hub, and pull and run the wot.io ThingWorx container.
Last Spring, wot.io teamed up with a number of partners including Stream Technologies and ARM at Mobile World Congress 2015 to demonstrate an IoT Smart City solution combining data from live vehicles moving about the London area with data from the London Datastore.
We already posted the following video, which provides a good overview of what we presented at the event, but we wanted to take this opportunity to do a deeper dive to describe some of the technology behind the demo.
So this will be part 1 of a series of posts where we explore the assembly of an interoperable Smart City solution powered by wot.io and its data service exchange™.
Toward Smarter Cities in the UK
The population of the city of London, UK, is exploding and is expected to reach 9 million people before New York City's does. In light of that prediction, the governments of London and the United Kingdom have begun to lay out plans to utilize digital technologies to become a Smart City, in an effort to help stem and even solve many of the challenges that arise from such a massive and rapid population increase.
In support of that vision, the Greater London Authority established the London Datastore initiative. According to its website the London Datastore was created
as a first step towards freeing London’s data. We want everyone to be able to access the data that the GLA and other public sector organisations hold, and to use that data however they see fit—for free.
London's passenger and road transport system is among the most advanced in the world, and was one of the first smart services that London opened to developers as part of the London Datastore initiative. The result was an unprecedented volume of open data with which to develop smart city solutions.
As part of our smart city application, we were able to find a whole section in the datastore devoted to traffic and transportation. We built adapters to read from this feed into wot.io and route near-real-time data from traffic cameras and road signs to multiple data services.
Smarter Traffic Through Instrumentation
Just one of the many facets to a smarter city initiative is to learn, understand, and make decisions based around traffic patterns. Naturally, analysis and decision-logic require data. Following are a few examples of how wot.io partners are filling these needs with a wide array of ARM mbed-based hardware and software products.
Connecting Devices with MultiTech
In order to demonstrate how detailed information about traffic accidents could be used to assist emergency services or even to otherwise manipulate traffic patterns in response, MultiTech placed its MultiConnect® mDots (inexpensive radios using the new Semtech LoRa™ low-power, wide-area RF modulation) inside a remote-controlled model car and drove it around the ARM booth. The car sent sensor info (including x-y-z plane accelerometer readings) to a MultiConnect® Conduit™ gateway using the Semtech LoRa™ low-power, long-range wireless access (LPWA) RF technology in the European 868MHz ISM band spectrum. The Conduit packaged and then sent the sensor data to Stream's award-winning IoT-X platform.
Connecting Devices with u-blox
Another common requirement for smarter traffic in a connected city is detailed knowledge about the geo-location of devices embedded in vehicles.
The u-blox C027 is a complete IoT starter kit that includes a MAX-M8Q GPS/GNSS receiver and a LISA-C200 CDMA cellular module with SIM card.
As you can see from the photograph, we added an extended GPS antenna to help with satellite reception given that we were going to be using it from inside our urban office building location.
It was easy enough to use the web-based IDE on the ARM mbed Developer Site to build a lightweight embedded C application. The application simply reads the GPS data from the GPS/GNSS receiver on the device, and sends it to a TCP endpoint exposed by the Stream Technologies IoT-Xtend™ API using the cellular modem to connect to a local cellular network. Using the cell network for connectivity makes the system completely mobile, which is perfect for vehicles driving around a city.
Ultimately, the embedded application sends JSON messages to the Stream API looking something like the following:
{
  "sim_no": SIM_ID,
  "quality_idx": QUALITY,
  "gps_time": TIME,
  "dec_latitude": LAT,
  "dec_longitude": LON,
  "number_sat": SATELLITES,
  "hdop": HDOP,
  "height_above_sea": HEIGHT
}
Connecting Devices with NXP
Another ARM mbed device, the NXP LPC1768, was used to demonstrate two-way communications with the wot.io data service exchange™. Ambient temperature was monitored through its on-board temperature sensor, analyzed by business logic, and sent back to the device in the form of specific commands to manipulate the device speaker and LED intensity.
Live Mobile Devices
Last, but certainly not least, the demonstration also included a number of cellular-, satellite-, and LPWA-based mobile devices embedded in vehicles traveling about the city of London in real time. The devices were managed by the Stream IoT-X platform, and telemetry and geo-location messages were communicated to the wot.io operating environment through our WebSocket protocol-based streaming data adapter.
Next Time
Today we took a brief look at the diverse device and open data feed-based sources of smart city data that comprised the wot.io demonstration that we presented earlier this year in Barcelona.
Tune in next time for a closer look at how we managed those devices with device management platforms from ARM and Stream Technologies, and how their data was integrated onto the wot.io data service exchange™.
In a previous post we demonstrated how the wot.io operating environment can be used to translate between a number of different message protocols.
Today, we will build on that with a concrete example demonstrating how the protocol bridging capabilities and device management integrations of wot.io can actually be used to extend the capabilities of device management platforms to encompass those other protocols!
In particular, we'll show how to take devices that only speak MQTT, manage them with ARM mbed Device Server, which does not speak MQTT, all while maintaining the ability to route notification data to and from the usual full complement of data services on the wot.io data service exchange™. Just for grins, we'll even subscribe to the notification data over a WebSocket connection to show that the protocol conversions can be performed on the "inlet" or the "outlet" side of the data flows.
IoT/M2M Interoperability Middleware for the Enterprise
While the problem statement above may not sound all that interesting at first blush, consider that:
- the devices are speaking MQTT
- potentially, to a separate message broker
- MQTT is a stateful, connection-based protocol that runs over TCP/IP
- ARM mbed Device Server expects devices to speak CoAP*
- CoAP is a stateful, connectionless protocol that runs over UDP by default
- WebSocket is a stateful, connection-based protocol that runs over TCP/IP
- that gets upgraded from the stateless HTTP protocol that runs over TCP/IP
and it should start to become apparent pretty quickly how impressive a feat this actually is. (* the newest version of ARM mbed Device Server appears to also support HTTP bindings in addition to CoAP, but it's still a protocol impedance mismatch either way)
Why is this significant? Because the real world is made up of multiple device types from multiple vendors. Making sense of that in an interoperable way is the problem facing enterprises and the Industrial IoT. It is also what separates real IoT/M2M solutions from mere toys.
In fact, it would be extremely uncommon for enterprises not to have devices tied to multiple device management platforms. Managing them all separately in their individual silos and performing separate integrations from each of those device management platforms to data services representing analytics or business logic would be an absolute nightmare.
Let's see how wot.io's composable operating environment makes it possible to solve complex, enterprise-grade interoperability problems.
Building Automation and Instrumentation with B+B SmartWorx Devices
Recently, one of our partners was showing off how they had outfitted their office with a bunch of IoT devices. And of course (as always happens any time you get more than one engineer together), it wasn't long before we had their devices connected up to the wot.io operating environment so they could analyze their IoT data and make it actionable.
The actual devices they were using were a collection of Wzzard Wireless Sensor nodes connected to Spectre Industrial Routers acting as gateway devices. These devices from wot.io partners B+B SmartWorx not only comprise a rock-solid, industrial-grade IoT platform, but they also speak MQTT—a common IoT/M2M device level communication protocol. As such, they were perfect candidates for our protocol interoperability demonstration.
Overview
The following diagram represents a high-level overview of our interoperability demonstration:
- device [1] is located outside our partner's engineering area, and has a thermocouple, voltmeter, and x,y,z-axis motion sensor
- device [2] is located in our partner's server room, and has a thermocouple, voltmeter, and two analog inputs corresponding to ambient temperature and relative humidity
- device [3] is located in our partner's front conference room and has a thermocouple, voltmeter, and two digital inputs corresponding to motion sensors
For demonstration purposes, we have only modeled a few of the many devices and sensors that are installed on our partners' premises. All three of these physical devices communicate with a local MQTT broker [4] to which our MQTT adapter [5] is subscribed. An example message looks something like:

TOPIC: BB/0013430F251A/data
PAYLOAD: {"s":6,"t":"2015-11-30T15:18:33Z","q":192,"c":7,"do2":false,"tempint":48.2}
In addition, we have simulated a fourth device [6], just to demonstrate how the wot.io operating environment can also act as an MQTT broker [7] in order to support scenarios where a separate broker does not already exist.
Irrespective of the originating device, sensor data from these devices is routed through a series of adapters that ultimately:
- model the data as logical resources on the wot.io message bus,
- register "virtual devices" in ARM mbed Device Server [14] to represent the original devices,
- close the loop by subscribing to notifications for the new "virtual devices", and
- route data to wot.io data services like bip.io
Modeling Device Data
One of the more powerful capabilities of the wot.io operating environment is its ability to model device data streams as one or more loosely-coupled logical layers of connected data resources. These data resources form a directed graph of nodes and edges representing the sources/destinations of connected device data streams, and the processes (or data services) that operate upon them. One data resource is transformed by a data service into another resource, which can, in turn, serve as the input for one or more other data services.
A result of this architecture is that one can target any data resource as the source of any data service.
For our present demonstration, this means, for example, that we can represent the physical devices [1-4] as "virtual devices" in a device management platform of our choosing—ARM mbed Device Server in this case—whether or not it even supports devices of that make and manufacturer.
In fact, at a high level we will perform the following mappings between different representations of the device topology:
- physical device data represented as MQTT topics of the form BB/<deviceId>/data are mapped to
- logical wot.io data resources of the form /bb/<deviceId>, which are further mapped to
- logical wot.io data resources of the form /bb-data/<deviceId>.<sensorType>.<virtualDeviceId>, which are registered and managed as
- ARM mbed Device Server endpoints and resources of the form <deviceId>, <sensorType>, which are then subscribed to and made available again to wot.io data services as
- wot.io data resources of the form /bb-stream/<deviceId>.<sensorType>
Now that's some serious interoperability. Let's zoom in even a little closer, still.
Managing Virtual Devices with ARM mbed Device Server
Recall that earlier we described significant impedance mismatch between the protocols involved in this interoperability exercise. Let's examine the other adapters involved and see how they can be cleverly combined to resolve our impedance issues and allow us to manage virtual devices in ARM mbed Device Server.
Picking up where we left off earlier in reference to our architecture diagram,
- adapters [9], [10], and [11] compose and send messages requesting the creation of new logical wot.io data resources. The adapter [9] maintains or references a mapping of virtual CoAP endpoints (more on these later) provisioned for each device. Specifically, an example message sent to the HTTP request adapter [10] might look like this:
[
  "request",
  "POST",
  "http://demos.wot.io/demos/bb-data/tempint.virtual7/tempint.virtual7.#",
  "",
  { "Authorization": "Bearer <token>" }
]
- the same messages emitted from [8] that are used to create new resource bindings are also routed to a simple controller [9] that composes a message for the CoAP adapter [13] to send to ARM mbed Device Server [14]. This CoAP message registers a virtual device and supplies a custom UDP context endpoint [15] as the location of said virtual device. (Notice that our virtual CoAP device is actually spread across several different adapters in the wot.io operating environment!) An example CoAP pseudo-message (since CoAP is a binary protocol, I'll save you the raw tcpdump output) is basically:
POST: /rd?ep=0013430F251A&et=BBSmartWorx%20Sensor&con=coap://172.17.42.1:40010&d=domain
BODY: </tempint>;obs
In order to maintain the registration, [13] will also send CoAP registration update messages as required by ARM mbed Device Server once the initial registration has occurred.
With just these few adapters, we have successfully used CoAP to register virtual devices representing our real MQTT-based devices in ARM mbed Device Server. You can see they now appear in the endpoint directory of mbed Device Server's administration interface:
Subscribing to Virtual Device Notifications
Now that our devices have been virtualized in our device management platform of choice, we can treat it as any other fully integrated wot.io data service. That is, we could choose to subscribe to notifications for one or more of the device resources and route them to one or more data services.
- first, we would need to subscribe to mbed Device Server notifications by sending a message to the mbed Device Server adapter [17]. For our example, we just used a curl call [16] to the HTTP adapter [11] for simplicity.
- the mbed Device Server adapter [17] will subscribe to the indicated endpoint and resource
- in response, ARM mbed Device Server [14] will send a CoAP GET message to its registered endpoint (which you will recall is one of the CoAP adapters [15] that were provisioned by [9] and registered by [12]). These CoAP messages between mbed Device Server [14] and the CoAP adapter [15] look something like this (again resorting to pseudo-code to convey the binary message details):
GET /tempint
OBSERVE=0, TOKEN: <token>
NB: observe=0 means observable is true! Also, notice that the device identifier is missing and only the resource name is specified. This is because back in [9], we mapped the stateful, UDP-based endpoint for a specific physical device to a specific virtual CoAP adapter [15]--the one that is receiving this GET request.
The response sent back to ARM mbed Device Server [14] from the CoAP adapter [15] would look something like this:
CODE: 2.05
TOKEN: <token>, OBSERVE=<observationSequence>
BODY: 42.1
- next, ARM mbed Device Server sends these notifications to its registered callback: namely, the HTTP adapter [11]
- after we route the messages through one more simple transformation [18] to return the deviceId and sensorId metadata to the logical wot.io resource path,
- we can consume the device notifications through the WebSocket adapter [19] and/or route them on to other data services like bip.io [22] for further transformation or analysis.
Routing to wot.io Data Services
Now that our notification data is once again represented as a wot.io data resource, we can route it to any of the services in the wot.io data service exchange™.
For example, if we create a "bip" in the web API workflow automation tool, bip.io we can pick off specific attributes and send them to any of a number of other 3rd party service integrations. For example, see the steps below to append rows to a Google Sheet spreadsheet where still more analysis and data interaction can occur. For example, column D in our spreadsheet contains a custom base64 decoding function written in JavaScript.
Conclusion
Today, we have demonstrated an extremely powerful capability afforded by the wot.io operating environment, whereby we can combine several protocol and data service adapters to extend device management services from IoT platforms like ARM mbed Device Server to devices that speak unsupported protocols like MQTT.
In a future post, we will build on this concept and show how we can turn any wot.io data resource into a "virtual device" managed by a device management platform—even ones that don't originate from devices at all!
It is through powerful capabilities like these that wot.io can truly claim to offer real-world IoT/M2M interoperability middleware for the enterprise!
This post is part 2 of our series on connecting gaming devices with the wot.io data service exchange™.
Routing Gameplay Event Data with wot.io
In the last post we saw that after retrofitting an open source chess application to connect via PubNub, connecting data services was easy since wot.io already has a PubNub adapter. No further changes were required to the application source code to connect to wot.io and send game play events to its data services or connected devices.
In fact, the only thing left to do was decide which wot.io data services to use in conjunction with our gaming system.
Storing Game Statistics with MongoDB
The data format we created for representing game play events uses a JSON variant of Portable Game Notation, or PGN. An example move message might look something like this:
{
  "gameId": "123",
  "game": "continue",
  "user": "wotiosteve",
  "move": "e2-e4",
  "moveSeq": 1
}
and a game ended message might look something like this:
{
  "gameId": "123",
  "black": "wotiosteve",
  "white": "wotiojim",
  "moves": [ [ "f3", "e6" ], [ "g4", "Qh4#" ] ],
  "winner": "wotiosteve",
  "gameResult": "0-1",
  "game": "end",
  "date": "2015.09.21"
}
Since we wanted to store every move made by every player in every game played across our system, we looked at the datastore options available in the wot.io data service exchange. For simplicity, we opted for the No-SQL flexibility of MongoDB and its native support for JSON documents like our game messages.
We chose the PGN format for our moves, but there are other formats that represent chess moves as well. Since MongoDB doesn't require you to define a fixed schema, we could easily add new formats in the future without requiring changes. Whether storing chess moves or IoT data from many different devices or device management platforms, this flexibility makes MongoDB a nice choice for storing data whose format can change over time.
Quick Review on Adapters
As a review, wot.io adapters can be configured to listen for messages on one bus resource and send any messages they may generate to some other bus resource. Typically, these resource paths are set declaratively and stored in the wot.io configuration service, where they are represented as environment variables that get passed into instances of the Docker containers which comprise running data services or adapters.
What that means is that while we can, of course, connect directly to our MongoDB datastore using any of the available drivers or clients, we can also interact with it simply by sending messages on the wot.io bus to and from a MongoDB adapter.
Inserting data into MongoDB
The wot.io adapter for MongoDB makes it possible to insert and query a MongoDB datastore by sending messages on the wot.io bus.
Since each MongoDB adapter can be configured to connect to a specific database and collection in a particular instance of the MongoDB data service, to insert a document, one need only route a JSON message that looks like this
[ "insert", {some-JSON-object} ]
to the adapter, et voila, the document will be asynchronously inserted into the configured MongoDB collection.
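To make that concrete, inserting the example move message from earlier would amount to routing the following message at the adapter:

[ "insert", {
  "gameId": "123",
  "game": "continue",
  "user": "wotiosteve",
  "move": "e2-e4",
  "moveSeq": 1
} ]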
Reading data from MongoDB
Similarly, we can fetch data by routing a JSON message that looks like this
[ "find", {some-BSON-query} ]
to the same adapter, and the query result will be sent to the destination resource to which the adapter has been bound. (There are additional options for controlling the number of documents in the response, etc., but we'll keep it simple for this example.)
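For instance, given the game-end documents shown earlier, a simple query for every finished game in which wotiosteve played white might look like this (our own illustrative query, not one from the demo):

[ "find", { "game": "end", "white": "wotiosteve" } ]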
We can also send messages requesting aggregate queries. For example, the message our chess example sends to retrieve statistics for a given user looks like this:
[
"aggregate",
[
{
"$match": {
"gameResult": { "$exists": true },
"$or": [
{ "white": "wotiosteve" },
{ "black": "wotiosteve" }
]
}
},
{
"$project": {
"win": {
"$cond": [
{ "$eq": [ "$winner", "wotiosteve" ] },
1,
0
]
},
"lose": {
"$cond": [
{
"$and": [
{ "$ne": [ "$winner", null ] },
{ "$ne": [ "$winner", "wotiosteve" ] }
]
},
1,
0
]
},
"draw": {
"$cond": [
{ "$eq": [ "$winner", null ] },
1,
0
]
},
"user": "$user"
}
},
{
"$group": {
"_id": null,
"total": { "$sum": 1 },
"win": { "$sum": "$win" },
"lose": { "$sum": "$lose" },
"draw": { "$sum": "$draw" }
}
},
{
"$project": {
"user": { "$concat": [ "wotiosteve" ] },
"total": 1,
"win": 1,
"lose": 1,
"draw": 1,
"action": { "$concat": [ "statistics" ] }
}
}
]
]
Clearly, MongoDB query documents can get rather complex—but they are also very expressive. And wot.io makes it easy for other data services to interact with our MongoDB datastore.
Composable Data Services
What do we mean by making it easy for other data services to interact with our MongoDB datastore? Well, for example, we might employ a data service like scriptr which allows us to route messages to custom JavaScript logic endpoints.
Let's say that we have coded an algorithm to calculate the minimum number of moves that could have been used to beat an opponent. We can route our game-end messages (like the one shown above) to this data service, as sketched below.
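We won't reproduce the actual scriptr code here, but stripped of the scriptr-specific request and response plumbing, the heart of such a script might look something like this sketch (estimateMinimumWinningMoves is a hypothetical helper standing in for our algorithm):

// Transform a game-end message into an "insert" message for the MongoDB adapter
function toInsertMessage(gameEnd) {
  return ["insert", {
    gameId: gameEnd.gameId,
    winner: gameEnd.winner,
    minimumWinningMoves: estimateMinimumWinningMoves(gameEnd.moves)  // hypothetical helper
  }]
}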
Note that scriptr does not have to possess any MongoDB integration. Nor does the script even have to know that its response might be routed to MongoDB. The "insert" message could just as easily be processed by a different data service—say, Riak. Or both, for that matter!
This is what we mean when we say that wot.io's data routing architecture allows for loosely coupled, composable solutions.
Next Time...
Next time we'll take a look at how we can connect another wot.io data service, Pentaho, with our new MongoDB datastore to produce some custom reports.
As with so many industries, the Internet of Things is changing the face of digital signage. With so much real-time, contextual data available, and with the ability to apply custom business logic and analysis in the cloud and send signals back to connected devices for a closed feedback loop, is it any surprise that the retail and advertising markets are taking notice? According to the International Data Corp.:
Digital signage use in retail outlets will grow from $6.0 billion in 2013 to $27.5 billion in 2018
Companies are already anticipating this trend. For example, our partners at B+B Smartworx have designed an entire vertical digital signage solution centered around their Wzzard Intelligent Edge Node and Spectre Cellular/Internet Gateway devices.
In this post, we use the wot.io data service exchange™ to combine the device management capabilities of the AT&T M2X platform with publicly available NYC subway data and an instance of Elasticsearch to quickly build out an example end-to-end digital signage solution.
About the MTA
According to its website, the MTA, or
Metropolitan Transportation Authority is North America's largest transportation network, serving a population of 15.2 million people in the 5,000-square-mile area fanning out from New York City through Long Island, southeastern New York State, and Connecticut.
And since
MTA subways, buses, and railroads provide 2.73 billion trips each year to New Yorkers – the equivalent of about one in every three users of mass transit in the United States and two-thirds of the nation's rail riders. MTA bridges and tunnels carry more than 285 million vehicles a year – more than any bridge and tunnel authority in the nation.
all those eyeballs seemed like a logical opportunity for our imaginary IoT digital signage company.
MTA integration
In order for our digital signage company to maximize the revenue opportunity that these subway travelers represent, we realized that it would need to know when a given sign is approaching a given train station. That way, the ads displayed (and the advertising rates we charged!) could be contextualized for attractions at or near each stop in real time. In other words, we wanted to create an advertising-driven business model where we could sell ad-spots on our digital signs just as if they were commercials on television, displaying commercials for companies near specific train stops as the trains approached those stops.
Rather than spec out a costly greenfield hardware solution with additional sensors like GPS to enable the tracking of signs on the trains (especially considering the satellite reception on the subway trains would be far from ideal much of the time), we decided to support a hypothetical existing installed signage base and infer the train position from the data available through the MTA Real-Time Data Feeds service, a data service which was also, conveniently enough, already available as a feed on the wot.io data service exchange.
Interested readers may want to peruse the GTFS-realtime Reference for the New York City Subway for all the gory details, but we're sure that since you're reading this blog you have already realized that composing a solution out of existing data service integrations is much better than writing each of them yourself from scratch!
(Of course, combining web-based data feeds like the MTA's with IoT device data is nothing new to wot.io. For another example in the Smart City space, see how we combined traffic disruption, traffic sign, and traffic signal data from London Datastore with ThingWorx, ARM mbed Device Server, Elasticsearch, and several ARM-based devices from u-blox, NXP, and Multitech at Mobile World Congress 2015.)
About AT&T M2X
Now that we decided that we would be modeling our digital signs as a virtual combination of web-based data and actual device connectivity, we needed to select a device management platform to provision and manage the digital signs themselves.
AT&T's M2X platform provides time-series data storage, device management, message brokering, event triggering, alarming, and data visualization for industrial Internet of Things (IoT) products and services. As such, it seemed like a good fit to function as either a device management platform (DMP) or data service provider (DSP), to use wot.io terminology. Throughout the balance of this post, we will show how we used it as both.
About the wot.io AT&T M2X adapter
wot.io's ability to integrate with multiple device management platforms is certainly no surprise to readers of this blog. For example, we have recently seen posts using ARM mbed Device Server, oneMPOWER, ThingWorx, DeviceHive, PubNub, Firebase, and even bip.io, just to name a few!
In a similar vein, we created an adapter to integrate M2X with wot.io. In order to streamline the process of getting connected to M2X, we also made it available through the M2X developer website.
This means that we can connect and manage devices using AT&T's M2X platform and can allow them to communicate with other data services using the wot.io data service exchange--exactly the combination that we were after to allow us to model our digital signs as hybrid, or virtual, devices that combine physical device data with other data feeds like the one from the MTA.
Technical Overview
With the major building blocks for our digital signage solution thus identified, we were able to sketch out an overview of the system architecture:
We decided to configure the GTFS adapter to fetch MTA data every 30 seconds using a scheduler. We would also provision a logical resource for each sign device to represent the current advertising lineup for that sign.
We were pleased to note that by modeling the data as virtual resources in the wot.io operating environment, we would be able to easily route data for specific devices or groups of devices to specific data services. For example, calculating advertising rates for different signs could be as simple as routing them to different "versions" of a data service implementing the pricing logic. In other words, the complex problem of dynamically changing the advertising lineups based on MTA updates occurring at different frequencies had been reduced to a series of simple routes and transformations of data streams within the wot.io operating environment.
While we were thinking along these lines, it occurred to us that we'd probably also want to be able to track and report on the number of times each ad spot was displayed, on which sign, on which train, at which location, and so on. It was easy to simply route all the same device notifications to be indexed in an instance of Elasticsearch. We also needed a datastore to house our advertising lineups for each sign as they were sold, so we opted for the [No-SQL](https://en.wikipedia.org/wiki/NoSQL) key-value store Riak from Basho. And we did all this without breaking or even affecting the rest of the solution.
Such is the power of the wot.io data modeling and routing facilities: it enables solution providers to decouple the logical model of the data, and the services operating on various subsets of that data, from the implementation of the data services themselves. Nice.
Digital sign provisioning application
For the purpose of our simple example, a pair of web applications have been created to illustrate how custom applications can be created around device provisioning and signage data visualization. In a real scenario, we'd probably leverage another wot.io data service like ThingWorx or JBOSS to build out our user interfaces. Or for that matter, we might want to use something like the iOS or Android libraries to build a mobile app.
The example train provisioning application uses a Web Socket connection to the wot.io operating environment to listen for and display the trains currently in service based on the MTA data feed.
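The details of the application are beside the point here, but the shape of that subscription is roughly as follows (the endpoint URL is a placeholder, and renderTrainList is a hypothetical UI helper):

// Subscribe to train-status messages over the wot.io WebSocket adapter
var ws = new WebSocket('ws://demos.wot.io/signage/trains')  // placeholder URL
ws.onmessage = function(event) {
  var trains = JSON.parse(event.data)  // trains currently in service per the MTA feed
  renderTrainList(trains)              // hypothetical function to update the UI
}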
Using an interface like this one, an operator from our digital signage company could provision which of our signs (if any) reside on a given train
resulting in the following display:
indicating in this case that a sign with a device ID of 1464a49ea6f1862bf6558fdad3ca73ce is located on train $4 1337+ UTI/WDL. In order to accomplish this feat, the train provisioning application sent a message back to the wot.io operating environment over a Web Socket connection to request that a train sign device be provisioned to the given train. A quick review of our architecture diagram above will show that in turn, that message was used to register the sign as a device in M2X.
Additionally, now that the device has been provisioned in M2X, any advertising lineups for this train and its next stop can be looked up in Riak. For example, in the following screenshot you can see that there are lineups provisioned for two signs on the 01 1450 SFY/242 train, and for one sign on the 01 1454 242/SFY train.
M2X as a device management platform
Recall that we said that our train provisioning application sent a message requesting that we register a sign as a device in M2X. You can see from the following screenshot that a sign device was, indeed, registered successfully (the device ID we used in M2X is a composition of the train id and sign id)
Now, as new messages with updated advertising lineups are determined by the Request Signage Data adapter and sent from the wot.io operating environment, we can see that they are, indeed, being received in the M2X interface for this sign:
Displaying the dynamic ads on our digital signs
In lieu of actual digital signs, we created a simple simulator application to demonstrate how the advertising lineup from a sign on a given train changes as it approaches each station. Once again, the application leverages wot.io's Web Socket protocol adapter for near real-time notifications to keep the application's lineup synchronized just like a digital sign could.
For example, we can see that the lineup of ads that have been sold for the first sign on train #01 1454 242/SFY is one list while stopped at station #106S but changes once the train leaves the station and is approaching station #107S.
M2X as a data service provider
Besides the obvious utility for managing and provisioning devices, as described above, the M2X platform can also be used as a powerful data service.
For example, we could create dashboards for a simple overview of the advertising statistics for a given digital sign device
or we could display the number of ad impressions delivered, or number of distinct advertisers represented in a time-series graph
Of course, we have only scratched the surface in our simple example. There are many more features of the M2X platform. And they become even more powerful when combined with the wot.io data service exchange™.
Riak and Elasticsearch
We have discussed Riak and Elasticsearch in previous posts, so you can read more there to see details and other applications of those data services.
Flexibility and Future-proofing
This prototype of a dynamic signage system demonstrates how a complex solution for a real, Industrial IoT vertical can be composed from a relatively small number of existing data services and declarative routing logic in the wot.io operating environment. This approach shows the flexibility and choice of working with an exchange containing multiple data services. You add components as you need them based on their fit and functionality.
And new components can be added in the future with only the work needed to add the new service. Get a new source for ad inventory? Add the API or data service as a provider with an adapter. Want to manage how much inventory comes from different providers? Add a data service that allows you to control the flow of ads based on the rules you set.
The wot.io data service exchange gives you choice during your initial design and provides flexibility going forward allowing you to continue to get the most from your initial investment and still add new products down the road.
The majority of sample IoT applications available online, including many of our own, use JSON, XML or some other simple, human-readable, text-based format. It makes sense because it's easy to show the data at various parts of the application, it's easy to create new demo data, and it's easy to work with JSON data in server-side data services because it's a well-supported format for web-based services.
But in the real world of connected devices, not all IoT data will look like JSON or XML. System designers are already concerned about the bandwidth use of the oncoming wave of devices and are advocating for leaner, more compact formats. In his keynote at ARM TechCon 2015 earlier this month, Google's Colt McAnlis encouraged developers to look at lighter-weight binary formats like FlatBuffers or protocol buffers.
And of course, an increasing number of IoT solutions will incorporate audio and video streams—either as primary or as additional sources of IoT data. For example, security monitoring systems or advanced sensors attached to machinery watching product output as part of an industrial quality control system both involve the collection, transmission, and analysis of audio streams.
The following video demonstrates how binary data can be transmitted across and manipulated within the wot.io operating environment.
Properly working with binary data isn't trivial, and it's an important aspect of an enterprise-caliber data routing system. wot.io adapters and the operating environment work independently of the payloads their messages carry, and can therefore accept and route messages with payloads of arbitrary format. Allowing adapters to flexibly treat message payloads as either opaque blobs or as transparent, structured data proves immensely valuable in real-world industrial IoT scenarios.
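As a simple illustration of that payload-agnostic behavior, a client could push raw bytes at a Data Bus resource over the WebSocket adapter and they would be routed untouched (the endpoint URL below is a placeholder):

// Send an opaque binary payload (e.g. an audio chunk) over WebSocket
var ws = new WebSocket('ws://demos.wot.io/account/audio-stream')  // placeholder URL
ws.binaryType = 'arraybuffer'
ws.onopen = function() {
  var chunk = new Uint8Array([0x52, 0x49, 0x46, 0x46])  // arbitrary bytes
  ws.send(chunk.buffer)  // routed as-is, as an opaque blob
}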
In fact, the wot.io data service exchange™ has a number of data services like Nervve and Datascription that provide search, transcription, and metadata extraction from binary audio and video streams. If you'd like to learn more about these and other data services, contact wot.io today!
In this webinar, wot.io hosts two of our partners, Sempercon, an IoT systems integrator, and Flowthings, a data service provider in the wot.io data service exchange™. Sempercon describes how they transformed Go2Power’s battery backup unit into a connected IoT system. Featured with wot.io's data service exchange™ and operating environment are Sempercon’s CirrusCon device management and real-time stream processing platform and a pair of wot.io data services: bip.io for flexible web API automation, and Circonus for powerful monitoring and analytics.
wot.io just returned from ARM TechCon 2015, where we presented our data service exchange for connected device platforms. One of the major differentiators for wot.io is our IoT platform interoperability. There are hundreds of IoT platforms, with new ones announced every week; Oracle announced yet another IoT platform at TechCon. In one talk there was a prediction that a home owner will have over 20 different IoT connected platforms in the home and car, which equates to 20 different apps for the home owner to deal with. wot.io offers the ability to aggregate and unify the IoT platforms, whether they be industrial, enterprise, or home.
As an interoperability example we showcased ARM mbed Device Server and PubNub together with several data services to augment the aggregated connected device data. We recorded a video of how attendees can connect an ARM-based device to wot.io data services using PubNub. At the bottom of PubNub's IoT page and on their developer page you can see the breadth of libraries available. If you can get your ARM-based device connected using one of those, you can access the bundle of data services we have made available.
When you go through the registration process, wot.io will provision accounts and some sample resources in bip.io and scriptr.io. After logging in, you can add your account details to activate PubNub and Circonus. Here's just one example of what we can do!
Today, we wanted to give you a peek under the hood and walk through an example of how wot.io and our partners make wot.io data services available in the wot.io data service exchange. In particular, we will show how to create a Riak data store cluster. Naturally, we will be doing this using Docker since Docker containers make up the building blocks of the wot.io operating environment™.
Riak is a key-value, No-SQL data store from Basho. And it's one of the data services available in wot.io's data service exchange™. Let's see how it is done.
Initializing a Riak cluster with Docker
Oftentimes, the first step in getting a service to run as a wot.io data service is to create a Docker container for it. In this case, much of our work has already been done for us, as a suitable recipe is already available on the public Docker Hub.
For the purposes of this example, we will be instantiating two Docker containers, each with its own Riak instance. Once we confirm that they are running successfully as independents, we will join them together into a cluster.
To get started, we pull the latest Riak docker image from the devdb/riak repository on the public Docker registry.
$ docker pull devdb/riak:latest
It helps to pre-download the Riak docker image into the local Docker cache.
Starting up riak1 instance
Once the image has been stored in the Docker cache, we are now ready to kick off the first Riak instance.
$ mkdir ~/riak_storage; cd ~/riak_storage
$ docker -H unix:///var/run/docker.sock run --dns 8.8.8.8 --name riak1 -i -d -p 18098:8098 -p 18087:8087 -v `pwd`/data:/var/lib/riak -t devdb/riak:latest
The Riak initialization will take a few minutes to complete. Once it's finished, we will be able to check that the instance is empty using the handy HTTP REST interface that Riak exposes:
# querying data store on riak1
$ curl -s localhost:18098/buckets?buckets=true | jq .
{
"buckets": [] # 0 items
}
This shows that there are no buckets in the data store currently. That's ok. We'll populate the data store in a minute.
Starting up riak2 instance
Let's go ahead and instantiate the second Riak container as well
$ cd ~/riak_storage
$ docker -H unix:///var/run/docker.sock run --dns 8.8.8.8 --name riak2 -i -d -p 28098:8098 -p 28087:8087 -v `pwd`/data2:/var/lib/riak -t devdb/riak:latest
and confirm that it, too, is empty
# querying data store on riak2
$ curl -s localhost:28098/buckets?buckets=true|jq .
{
"buckets": [] # 0 items
}
Injecting data into riak1 data store
Now that both Riak instances are up and running, we are ready to populate one of the instances with some test data. Once again, we can use the curl tool to place data on riak1 using the HTTP REST interface.
# populating with for loop
for i in $(seq 1 5); do
curl -XPOST -d"content for testkey-${i}" \
localhost:18098/buckets/testbucket/keys/testkey-${i}
done
Checking contents on riak1
Now that it has some data, querying riak1 should confirm for us that our POSTs had been successful
# querying data store on riak1
$ curl -s localhost:18098/buckets?buckets=true | jq .
{
"buckets": [
"testbucket" # 1 item
]
}
We found the Riak bucket named 'testbucket' that we created earlier. Showing what's inside 'testbucket' we can see:
$ curl -s localhost:18098/buckets/testbucket/keys?keys=true | jq .
{
"keys": [
"testkey-1",
"testkey-5",
"testkey-4",
"testkey-2",
"testkey-3"
]
} # 5 keys
Querying one particular key, we also have:
$ curl -s localhost:18098/buckets/testbucket/keys/testkey-5
content for testkey-5
Meanwhile, riak2 remains empty...
We can check that the data store on riak2 hasn't been touched.
# querying data store on riak2 again
$ curl -s localhost:28098/buckets?buckets=true|jq .
{
"buckets": []
}
So far, riak2 instance remains empty. In other words, so far we have two independent Riak data stores. But we wanted a Riak cluster...
Joining the two Riak instances into a cluster
We are now ready to join the two Riak instances, but before we do, we'll have to collect some information about them. We need to find the IP addresses of each of the containers.
To confirm the status of the Riak instances, we can check the member-status of the independent instances. This command happens to also tell us the container IP addresses. We can run member-status using the docker exec command for riak1:
# checking member-status on riak1
$ docker exec riak1 riak-admin member-status
============================ Membership =============================
Status Ring Pending Node
---------------------------------------------------------------------
valid 100.0% -- 'riak@172.17.5.247' # 1 result
---------------------------------------------------------------------
Valid:1 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
and again for riak2:
# checking member-status on riak2
$ docker exec riak2 riak-admin member-status
============================ Membership =============================
Status Ring Pending Node
---------------------------------------------------------------------
valid 100.0% -- 'riak@172.17.5.248' # 1 result
---------------------------------------------------------------------
Valid:1 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
Noting the IP addresses (for riak1: 172.17.5.247 and for riak2: 172.17.5.248), we can proceed to join the riak2 instance onto the riak1 instance. To do so, we will run 3 Riak commands: riak-join, riak-plan, and riak-commit.
The riak-join command will basically register the connection on the two machines.
$ docker exec riak2 riak-admin cluster join riak@172.17.5.247
Success: staged join request for 'riak@172.17.5.248' to 'riak@172.17.5.247'
The riak-plan command will report the connection info.
$ docker exec riak2 riak-admin cluster plan
========================== Staged Changes ===========================
Action Details(s)
---------------------------------------------------------------------
join 'riak@172.17.5.248'
---------------------------------------------------------------------
NOTE: Applying these changes will result in 1 cluster transition
###################################################################
After cluster transition 1/1
###################################################################
============================ Membership =============================
Status Ring Pending Node
---------------------------------------------------------------------
valid 100.0% 50.0% 'riak@172.17.5.247'
valid 0.0% 50.0% 'riak@172.17.5.248'
---------------------------------------------------------------------
Valid:2 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
WARNING: Not all replicas will be on distinct nodes
Transfers resulting from cluster changes: 32
32 transfers from 'riak@172.17.5.247' to 'riak@172.17.5.248'
And finally, the riak-commit command will save the changes.
$ docker exec riak2 riak-admin cluster commit
Cluster changes committed
Once you see this message, the two data stores will begin the cluster building process. The information on the two data stores will start to be synced.
Confirming the data stores are clustered correctly
Now we can check the cluster status. If we run member-status immediately after the riak-commit, we will see the membership ring in this state:
$ docker exec riak2 riak-admin member-status
=========================== Membership ============================
Status Ring Pending Node
---------------------------------------------------------------------
valid 100.0% 50.0% 'riak@172.17.5.247'
valid 0.0% 50.0% 'riak@172.17.5.248'
---------------------------------------------------------------------
After distribution time
Since riak1 was populated with only our test entries, it won't take long to distribute. Once the distribution is finished, the clustering will be complete. You will see:
$ docker exec riak2 riak-admin member-status
============================ Membership =============================
Status Ring Pending Node
---------------------------------------------------------------------
valid 50.0% -- 'riak@172.17.5.247'
valid 50.0% -- 'riak@172.17.5.248'
---------------------------------------------------------------------
Valid:2 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
Checking contents on riak2
Now that the distribution has completed, listing the buckets from riak2 will show the cloned dataset from the data store riak1.
# querying the buckets (now) on riak2
$ curl -s localhost:28098/buckets?buckets=true|jq .
{
"buckets": [
"testbucket"
]
}
And querying the testbucket shows our keys, as expected:
# querying the keys (now) on riak2
$ curl -s localhost:28098/buckets/testbucket/keys?keys=true|jq .
{
"keys": [
"testkey-3",
"testkey-4",
"testkey-2",
"testkey-5",
"testkey-1"
]
}
And of course, querying one of these keys, we get:
# querying the key (now) on riak2
$ curl -s localhost:28098/buckets/testbucket/keys/testkey-5
content for testkey-5
Note that the results from riak2 are the same as that from riak1. This is a basic example of how Riak clustering works and how a basic cluster can be used to distribute/clone the data store.
Encapsulating the Riak cluster as a wot.io data service
Now that we have the ability to instantiate two separate Riak instances as Docker containers and join them together into a single logical cluster, we have all the ingredients for a wot.io data service.
We would simply need to modify the Dockerfile recipe so that the riak-join, riak-plan, and riak-commit commands are run when each container starts up. While this naïve mechanism works, it suffers from a couple of drawbacks:
- Each cluster node would require its own Docker image because the startup commands are different (i.e., one node's commands have riak1 as "source" and riak2 as "target", while the other node's commands are reversed).
- The IP addresses of the Riak nodes are hard coded, dramatically reducing the portability and deployability of our data service.
There are other details in making a data service production ready. For example, a production data service would probably want to expose decisions like cluster size as configuration parameters. wot.io addresses these concerns with our declarative Configuration Service for orchestrating containers as data services across our operating environment. To complete the system, we would also add adapters to allow any other wot.io service to communicate with the Riak service, sending or querying data. But these are other topics for another day.
Conclusion
Today we've taken a brief peek under the hood of creating a wot.io data service. Thankfully, most customers would never encounter any of the complexities described in this post, because wot.io or one of its partners has already done all the heavy lifting.
If you are interested in making your data service available on the wot.io data service exchange, check out our partner program and we'll help get you connected to the Internet of Things through an interoperable collection of device management platforms and data services.
This is the first in what will be a series of posts describing how to connect gaming devices with related data services in the wot.io data service exchange™.
A marriage made in heaven
It should come as no surprise that gaming devices, whether of the console, mobile, or even maker variety, more than qualify as things in the Internet of Things. Nor should it be surprising that engineers love to play games. (I mean, how else would the world have come up with something like this?)
So in a way, it was only a matter of time before we got around to a project that combined the two.
It all started with a game
In his essay "The Morals of Chess" (1750), Benjamin Franklin wrote:
The Game of Chess is not merely an idle amusement; several very valuable qualities of the mind, useful in the course of human life, are to be acquired and strengthened by it, so as to become habits ready on all occasions; for life is a kind of Chess, in which we have often points to gain, and competitors or adversaries to contend with, and in which there is a vast variety of good and ill events, that are, in some degree, the effect of prudence, or the want of it.
As much as we'd like to claim that the genesis of our gaming application series was rooted in such lofty ideals, the truth of the matter is that we chose chess primarily because projects like Deep Blue have popularized the understanding that chess presents a worthy challenge to demonstrate the true power of computing. (And you know chess is cool if Dr. Who had to win a game to save the universe).
Having thus selected the game, we set about to take up that challenge by combining chess with Android devices and the power of the wot.io data service exchange™.
A quick search on GitHub uncovered an open-source implementation of a chess game designed for Android devices. "Yes, this should do quite nicely," we thought to ourselves.
Adding real-time connectivity with PubNub
Unfortunately, however, while the original version of the chess game does support networked game play between two different devices, all of the communications are sent to a central FICS server.
Since our plans include integration with various wot.io data services and potentially even other IoT devices like local LED lights or clock timers, we realized that we would need to modify the game to somehow send game play events separately to each of these devices and services. And then if we wanted to add new devices or services later, we would need to update all of the game applications on all of the Android devices. Ugh.
"That doesn't seem to fit in very well with our thinking about these Android devices as IoT things" we thought. What our connected gaming devices really needed was a pub/sub approach to connectivity.
In other words, it sounded like a perfect fit for PubNub, one of the wot.io's data services.
We already knew that PubNub had a great story for developing multiplayer online games. So we wondered how difficult it would be to make it work with our Android application. Of course, we should have known they had that covered with their list of over 70 different SDKs.
It was a relatively straightforward exercise to replace the networking logic with the PubNub SDK. In fact, PubNub Presence even enabled us to add player presence information—a feature that the original game did not have. You can find the PubNub enabled version of the source code for the chess game on GitHub.
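To give a flavor of the change, publishing a move event with PubNub's JavaScript API looks about like this (the Android app itself uses the Java SDK; the channel name and keys here are placeholders):

// Publish a game play event to a PubNub channel
var pubnub = PUBNUB.init({
  publish_key: 'pub-key-placeholder',
  subscribe_key: 'sub-key-placeholder'
})
pubnub.publish({
  channel: 'chess-game-123',  // placeholder channel name
  message: { gameId: '123', game: 'continue', user: 'wotiosteve', move: 'e2-e4', moveSeq: 1 }
})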
So in very short order we were able to take an existing game and connect it to PubNub's real-time data platform with presence information—and in the process, effectively replace an entire purpose-built FICS service in the cloud. Not bad.
Routing gameplay event data with wot.io
Of course, after connecting the devices via PubNub, the rest was easy since wot.io already has a PubNub adapter.
Wait...you mean there are no more changes required to the application source code to connect to wot.io, its data services, or connected devices? I thought the only thing we did was to build a connected device application that knows how to connect to PubNub?
Exactly. All we needed was the name of the PubNub channel we had the game using, and the pub/sub API keys for our account, et voila! Game play events were available on the wot.io data service exchange, ready to be easily routed to or from any of the myriad data services that wot.io provides.
Yes, that's right. Because the wot.io adapter for PubNub is bi-directional, not only can data services subscribe to the game play events being transmitted by the devices over PubNub, but they can also publish messages that can be sent back to the devices via PubNub as well.
Any ideas for interesting data services that could take advantage of such power suggesting themselves to you yet? Well they did to us, too.
Next Time...
But before we get ahead of ourselves, as a first step, next time we'll see how easy it was to store all of the game play events in a data store using the wot.io data service exchange™.
Chess Photo Credit: By David Lapetina (Own work) [GFDL or CC BY-SA 3.0], via Wikimedia Commons
wot.io™ is now part of the ThingWorx IoT Marketplace with an extension that integrates the ThingWorx® Platform with the wot.io data service exchange™. ThingWorx is a widely adopted IoT platform, allowing rapid development of IoT solutions, and wot.io provides new data services with interoperability across IoT/M2M platforms. Together, they are a perfect complement.
Here are some things that make wot.io's ThingWorx extension pretty cool:
wot.io can deploy, on demand, any number of ThingWorx IoT application platforms, fully networked, as a containerized data service to a public cloud, a private cloud, or even a personal computer.
When ThingWorx is deployed as a data service in the wot.io data service exchange, wot.io's ThingWorx extension works as a "thing", allowing click-and-drag interoperability between ThingWorx and any other IoT platform or application in the wot.io data service exchange.
wot.io + ThingWorx = some pretty awesome mashups & functionality
wot.io with ThingWorx enables app creators, hardware developers, system integrators and data service providers to deploy IoT projects seamlessly in the wot.io’s cloud-based operating environment. The new extension made it easy for our developers to create a mashup as one of our demos for ARM's TechCon 2015 conference. The screenshot below shows the mashup with various ThingWorx widgets, each populated with a data feed connected graphically with the wot.io extension. You can see a live version in the video on our ARM TechCon blog post.
You can find the wot.io ThingWorx extension listed in the ThingWorx Marketplace. More information about the extension, including a video demo, is available in the wot.io labs blog post on creating and using the extension. And if you are a ThingWorx user, contact us to get set up with access to a wot.io operating environment.
wot.io spent last week in Santa Clara, CA, attending ARM's TechCon 2015 conference and we had a bunch of new things to show. As an ARM mbed partner, we had a kiosk location in ARM's mbed zone, which this year was a huge booth located right in the middle of the expo floor. As a reflection of the growth of the mbed ecosystem, the booth had 4 kiosk areas with 4 partners each for a total of 16 mbed partners represented in the booth!
ARM mbed Device Server was a key part of our demo and it was exciting to see strong interest in our delivery of the ARM mbed Device Service in the wot.io data service exchange.
Our kiosk had 3 ARM partners who were also wot.io partners, so that worked out well. Two hardware partners, Atmel and u-blox, were represented with hardware in our demo. One of our key data service provider demos was an integration with ForgeRock, and they were in the booth right next to us as well.
The demo had a sample of our new integration with Informatica, as well as examples of bip.io, scriptr.io, ThingWorx, and Circonus. The premise of the demonstration was to show interoperability between IoT platforms and creating actionable results from the resulting connected device data for the enterprise.
ARM has posted a short video of our demo from the show:
You can also see videos of other ARM partners from the mbed Zone this year.
If you're interested in more detail on the data services, here is another video that describes them more fully:
In addition to our presence on the expo floor, we gave a talk as part of the Software Developer's Workshop. Our talk was titled Empower Your IoT Solution with Data Services and as the title suggests, we demonstrated some of the wot.io data services that are particularly useful for engineers and developers as they design IoT solutions.
It was a great show and we'll have more details on the technology behind the demos soon. As always, if you have questions about wot.io or any of our data services, don't hesitate to contact us!
One of the key benefits wot.io provides is interoperability between different connected device platforms. One really interesting advantage of the wot.io architecture is that it allows developers to take both IoT data from connected devices as well as data from web-based feeds and combine them in interesting ways using dynamic data services like Firebase.
For this application, we're going to use Firebase's ability to store and sync data in realtime and subscribe to changes in their FAA (Federal Aviation Administration) dataset which shows the latest airport delay and status updates in realtime.
Separately, we'll create our own custom Firebase dataset, based off the FAA data, where we store points of interests around delayed airports and expand the scope of those interests based on the severity of the delay.
Making Lemonade from the Lemons of Others
It's no surprise that hotels want to predict demand to help them fill rooms to maximize their profits. When flights are delayed or canceled, an immediate opportunity is created for hotels near the affected airports to reach a collection of new hotel customers. When flights are delayed, travelers may miss their connections at hubs, or worse, have their flight canceled until the following day. The inconvenienced flyers will need to decide if they will wait it out at the airport or book a room in a nearby hotel.
If only the hotels knew about the delays and the dilemma facing the travelers, maybe they could sway them toward opting for a night's stay with some type of discount...
We'll show how we can combine data services in the wot.io operating environment to help predict when a person may be delayed so that hotels nearby with available rooms can connect.
But of course, it doesn't end with hotels. After all, wot.io is a data service exchange. Other parties like taxis or Uber may be interested in the same information to predict a potential surge in ridership, and route vehicles to the area as necessary. Or discount travel services like Expedia or Kayak may wish to use the data to rate and sort hotels - enticing the same delayed flyers to use their services instead. Naturally, in addition to the delay information, the sorting algorithm for presenting these options to an end customer would typically employ custom business logic based on different revenue sharing rates, etc.—business logic that can, once again, be readily represented as composable data services like scriptr, for example.
Overview
Above is a block diagram giving an overview of the solution we'll be describing in the following sections.
Using Firebase as a Data Source
You can view the FAA data by pointing your browser to the FAA Firebase feed. You can see that at the time of this post, MSP (Minneapolis International) was experiencing a weather-based delay.
We can subscribe to notifications whenever the delay information changes and route the data through the wot.io data service exchange into a service like scriptr where we can run custom business logic.
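Subscribing with the Firebase JavaScript client of the day looks roughly like this (the feed URL is a placeholder, and routeToWotio stands in for the hand-off into the wot.io exchange):

// Watch the FAA delay feed for changes (2015-era Firebase API)
var ref = new Firebase('https://faa-feed-placeholder.firebaseio.com/delays')  // placeholder URL
ref.on('child_changed', function(snapshot) {
  var airport = snapshot.key()  // e.g. "MSP"
  var delay = snapshot.val()    // the updated delay details
  routeToWotio(airport, delay)  // hypothetical hand-off into wot.io
})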
Augment the Data with Scriptr
In this case, we wrote a quick script to take the incoming airport notifications and fetch the geolocation data for that airport using the Google Maps API.
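The script itself is scriptr-specific, but here is a browser-style sketch of the same idea (GOOGLE_API_KEY and the notification's airport field are our own assumptions):

// Augment an airport delay notification with lat/lng via the Google Maps Geocoding API
var geolocate = function(notification, callback) {
  var url = 'https://maps.googleapis.com/maps/api/geocode/json' +
            '?address=' + encodeURIComponent(notification.airport + ' airport') +
            '&key=' + GOOGLE_API_KEY
  var xhr = new XMLHttpRequest()
  xhr.onreadystatechange = function() {
    if (xhr.readyState === 4) {
      var results = JSON.parse(xhr.responseText).results
      if (results.length) notification.location = results[0].geometry.location  // { lat, lng }
      callback(notification)
    }
  }
  xhr.open('GET', url)
  xhr.send()
}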
In addition to being useful on its own, we can further augment the now geo-located airport information with a list of nearby attractions using Google's Places API. In fact, in this case we have modeled the type of business logic that a travel service like Expedia might use by targeting different types of places and offers based on the delay severity. So a short delay might target a discounted drink offer at a nearby bar, while a longer delay might result in an offer for an overnight stay at a nearby hotel.
A key point that should not be missed is that the location lookup script didn't have to know about the hotel lookup script. In other words, wot.io's data routing architecture allows us to decouple the flow of data from the execution of the data services. If tomorrow we decided that we also wanted to send the geotagged airport delays to another data service like Hortonworks Enterprise Hadoop for archival and analytics on how to optimize our travel company offerings for seasonal trends in travel delays, we wouldn't have to modify the logic in the airport script at all.
Using Firebase as a Data Store
Like many of the data services in the wot.io data service exchange, Firebase supports both read and write interactions. With our newly augmented data set that mashes up the FAA data with Google's location and point-of-interest data, we can use the wot.io Firebase adapter to route the data back into a brand new collection of our own in Firebase.
Custom Applications and wot.io
Many times, end-to-end IoT solutions will include custom user-facing applications in addition to the sorts of behind-the-scenes data services we have discussed thus far. Not a problem—to wot.io, they're just another data service!
As an example, we created a fictitious consumer-facing web application for the hotel and discount travel brands, transportation services, and restaurants and bars. This application can send and receive information to and from wot.io using Web Sockets for near realtime connections to our new dataset in Firebase as well as any other data services on the exchange. In this case, it's just a simple web application written using node.js. When delays are encountered and the business logic dictates that hotels should be offered, each hotel gets its own dynamically generated page with buttons for the customer to act on.
And of course, wot.io connectivity is hardly limited to web applications. Here is a native Android mobile application for taxi fleet managers to use for monitoring and deploying their vehicles based on airport delay information. Not only does this application illustrate how mobile applications can connect to the wot.io data service exchange as easily as web applications, but it also illustrates how multiple parties may use similar data for different purposes.
Happy Travels!
Although we've "flown through this post" rather quickly (sorry, I couldn't resist), hopefully we've demonstrated how wot.io's composable data service architecture provides significant power and flexibility by connecting to multiple systems and providing data services that can manipulate IoT and web-based data in near realtime fashion. This simple demo also shows the value of pulling data from multiple sources, transforming it, and creating a new stream of data that becomes useful in brand new ways of its own.
So good luck on your next trip, and we hope you don't experience any delays!
I’m writing this from the plane while traveling home from the ARM TechCon 2015 conference. ARM has a vibrant community of IoT innovators, and ARM TechCon was a great event where wot.io were finalists for the Best IoT Product award and hoped for a repeat win to back up our 2014 Best of Show award. Congratulations to our partners at Atmel, though, for beating us out in the end. Of course, we like to think we helped nudge them ahead by featuring Atmel hardware as a part of our demonstration. We’ll get them next year, though ;-)
On the final day of the Expo, I happened to be chatting with some folks from Freescale—another wot.io partner and fellow ARM mbed Zone exhibitor, not to mention Best in Show and Reader’s Choice award winner! The folks from Freescale had seen how we were already routing sensor data from their devices to a complex array of data services, and wanted to know how difficult it would be for Freescale developers to harness a tiny sliver of that power—say to connect their devices to ARM mbed Device Server and another data service like bip.io—and most importantly, to get it working quickly and easily.
Unfortunately, the Expo hall was closing and we were packing up our respective booths; but I told him that what he asked should be easy since the work was really already done. I promised to pull together some sample code and instructions on the plane the next morning.
So that's just what I'll try to do in this post.
ARM TechCon Demo
The following is a block diagram showing the array of data services that we demonstrated at the event.
The full demo really deserves a post of its own. For now, I just want to outline the simplest possible way to hook up a Freescale device to ARM mbed Device Server and bip.io.
Connecting the FRDM-k64f to ARM mbed Device Server
In a previous post, we've already shown how to get a Freescale FRDM-k64f board running ARM mbed OS connected to an instance of ARM mbed Device Server. So just to keep things fresh, this time we'll start from an existing ARM example project.
The mbed OS Application
As expected, it was quite straightforward to update the existing example code to work with wot.io. (In fact, the code changes were quicker than the updates to the README!) You can find the source code on GitHub.
In source/main.cpp, we only needed to change the location of our mbed Device Server
const String &MBED_SERVER_ADDRESS = "coap://techcon.mds.demos.wot.io:5683";
and then, since our ARM TechCon demonstration server was not using TLS, we needed to remove the certificate-related attributes and instead set the M2MSecurity::SecurityMode to M2MSecurity::NoSecurity when we create the register server object:
M2MSecurity* create_register_object() {
// Creates register server object with mbed device server address and other parameters
// required for client to connect to mbed device server.
M2MSecurity *security = M2MInterfaceFactory::create_security(M2MSecurity::M2MServer);
if(security) {
security->set_resource_value(M2MSecurity::M2MServerUri, MBED_SERVER_ADDRESS);
/*
security->set_resource_value(M2MSecurity::SecurityMode, M2MSecurity::Certificate);
security->set_resource_value(M2MSecurity::ServerPublicKey,SERVER_CERT,sizeof(SERVER_CERT));
security->set_resource_value(M2MSecurity::PublicKey,CERT,sizeof(CERT));
security->set_resource_value(M2MSecurity::Secretkey,KEY,sizeof(KEY));
*/
security->set_resource_value(M2MSecurity::SecurityMode, M2MSecurity::NoSecurity);
}
return security;
}
We should now be able to build our mbed OS application using yotta (see the README for instructions). In my case, I think I'd better wait until I get off the plane before I start programming blinking devices with dangling wires to test this out, though.
Connecting to wot.io data service exchange
To view or otherwise transform or integrate our device data using http://bipio.cloud.wot.io, a popular web API automation tool available on the wot.io data service exchange, follow these simple steps:
- Sign up for a free account on bip.io if you do not already have one.
- Create a new workflow (called a "bip") and name it freescale.
- Select "Incoming Webhook" as the trigger for this bip, as we will be instructing mbed Device Server to send it notifications via HTTP.
- For now, add a simple "View Raw Data" node and set its value to the "Incoming Webhook Object". This will allow us to see the messages being received from the device. Later on, of course, you can do much more interesting things with your workflow.
Setting a notification callback in mbed Device Server
Since we want to have mbed Device Server send device notifications to bip.io, we need to register a notification callback via the REST API. The general form of the API call is
curl -X PUT 'https://mds.example.com/notification/endpoint' -d '{"url":"http://callback.example.com/path"}'
to have notifications sent to "http://callback.example.com/path". But in our case, we will also need to supply some security credentials for the API call and some custom headers for the callback in order to make everything work for bip.io. In addition, once we have registered our callback, we need to subscribe to notifications for a particular resource. Recall that our dynamic button-press resource was identified as /Test/0/D in main.cpp. The final API calls have been captured in the script bipio_subscribe for convenience:
#!/bin/bash
# Simple script to simulate a device sending
# sensor readings to a bip.io workflow automation
MDS_USER=freescale
MDS_PASS=techcon2015
MDS_HOST=techcon.mds.demos.wot.io
MDS_PORT=8080
BIPIO_USER=techcon
BIPIO_TOKEN="dGVjaGNvbjp3b3RpbnRoZXdvcmxk"
BIPIO_BIPNAME=test
BIPIO_ENDPOINT="https://$BIPIO_USER.bipio.demos.wot.io/bip/http/$BIPIO_BIPNAME"
BIPIO_HEADER_HOST="$BIPIO_USER.bipio.demos.wot.io"
BIPIO_HEADER_CONTENT="application/json"
BIPIO_HEADER_AUTH="Basic $BIPIO_TOKEN"
BIPIO_HEADERS="{\"Host\":\"$BIPIO_HEADER_HOST\", \"Content-Type\":\"$BIPIO_HEADER_CONTENT\", \"Authorization\":\"$BIPIO_HEADER_AUTH\"}"
echo "Sending subscription request to ARM embed Device Server..."
curl -X PUT \
-H "Content-Type: application/json" \
-d "{\"url\": \"$BIPIO_ENDPOINT\", \"headers\": $BIPIO_HEADERS }" \
"http://$MDS_USER:$MDS_PASS@$MDS_HOST:$MDS_PORT/notification/callback"
curl -X PUT \
-H "Content-Type: application/json" \
"http://$MDS_USER:$MDS_PASS@$MDS_HOST:$MDS_PORT/subscriptions/wotio-freescale-endpoint/Test/0/D"
echo -e "\nDone."
Now, with the FRDM-k64f board connected, we can run the bipio_subscribe script to have notifications sent to our new bip. We can also view the "Logs" tab for our test bip to verify that notifications are being received, or the "View" tab of the "View Raw Data" node to see the messages themselves.
That's it! That's all there is to it! Of course, we can modify our bip to do something interesting like manipulate the messages, send them as part of SMS or email messages, save them to Google documents or send them to any of the other 60+ integrations that bip.io offers.
Next Time
Next time, we'll tweak the demo a bit further to hook up to the newly announced ARM mbed Device Connector cloud service—a convenient tool to use during prototyping—and use our button-push events to interact with other services on the wot.io data service exchange.
I love composable data services.
One of the great tools available to developers who use the wot.io Data Service Exchange is the protocol adapter framework. The wot.io protocol adapters make it possible to take existing applications which speak a given protocol, and use them to generate new data products. The wot.io Data Bus allows for exchanging these data products across multiple protocols in real time, making it easy to share data across services and customers.
Currently, the wot.io Data Service Exchange protocol adapters have production support for HTTP, WebSockets, MQTT, CoAP, AMQP, TCP, and UDP. Labs also has experimental support for JMS, ZeroMQ, STOMP, XMPP, DDSI-RTPS, and JDBC currently in development. In addition to the open standard protocols, the DSE also has application-specific adapters for various databases, search engines, logging systems, and the like.
A sample application
To see how these protocol adapters work in conjunction with applications, we'll create a simple node-red flow that will publish a stream of 5 messages per second across MQTT to the wot.io Data Bus. This feed will then be replicated and split into two separate output streams to clients using CoAP and WebSockets at the same time.
The node-red flow for our example application consists of five nodes:
- an inject node to start the flow of messages
- a function node which generates the message payload (a sketch of a possible body follows this list)
- a delay node in a feedback configuration to generate 5 messages per second
- a debug node to display the output in the node-red console
- a MQTT node to send the messages to the Data Bus
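For completeness, the body of the function node needn't be anything fancy; something like the following (our assumption, not necessarily the demo's exact code) would do:

// Generate a simple timestamped payload for each pass through the flow
msg.payload = { ts: Date.now(), value: Math.random() }
return msg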
The data can be read off the wot.io Data Bus using off-the-shelf applications such as coap-cli and wscat. While I happened to use command line tools for this application, it is easy enough to use standard libraries like the CoAP libraries provided by ARM in mbed OS, or even your favorite web browser.
To see this application in action, please watch the following video:
In the above video, I show the creation of three wot.io Data Bus resources:
- wot:/wot/narf
- wot:/wot/coap
- wot:/wot/ws
The names of these resources allow you to identify them across protocols. There is nothing intrinsically magical about any of these names, and any protocol can read or write to any of the resources. It is the URLs of each of the application connections:
- mqtt://token@host:port/account/resource
- coap://token@host:port/account/resource
- ws://token@host:port/account/resource
that determine how the data is translated between encapsulation layers. The creation of resource bindings, as demonstrated in the video, also provides a means to duplicate and route data between resources. The wot.io Data Bus routing supports filtering payloads based on regular expression matches, as well as hierarchical pattern matches. These advanced features are accessible through the protocol adapters as well as through the wot.io command line tools.
Since each Data Bus resource is addressable through all of the protocol adapters and the application adapters, it is possible to mix and match data, making it available both to consumers of a data product and for internal storage, processing, and analysis through applications made available as part of the Data Service Exchange. Each Data Bus resource has individualized access controls, with each type of resource operation (reading, writing, creating, deleting, binding, etc.) being controllable through policy. These access controls allow developers using the wot.io Data Service Exchange to make available some or all of their data to interested parties. With the protocol adapter framework in place, the wot.io Data Bus makes it easy to provide your customers with their data through whatever protocol they choose for their application.
As we mentioned in a previous post about NGDATA and scriptr.io, we have a partnership with Critical Mention giving us access to their enriched real-time media stream containing transcribed content of broadcasts across radio and television in the US, Canada, Mexico, UK, and other countries. As we showed previously, this rich feed of data can be processed in many ways and the data stream itself transformed with data services into new useful feeds.
As another example of wot.io data services that can operate on the Critical Mention media feed, we routed the feed into a MongoDB instance. Working with DataArt, one of our systems integration partners, we then set up transformations on the data in an instance of Pentaho. In addition to the transcribed text of the broadcasts, the messages in the feed carry additional data, including the country where the broadcast originated, the network, and so on. We created Pentaho transformations based on this data and were able to quickly create graphs showing the frequency of countries in the feeds.
This is a great example of how wot.io can route high-volume data to multiple deployed data services for processing. It also provides a glimpse at the types of things that are possible with the Critical Mention feed. We captured some of the details in a video. Enjoy!
IoT implementations produce all different types and frequencies of data. Some generate low volume, high value data, and others generate very high volume data streams that become valuable when they are processed and analyzed. For these types of applications, the data processing solutions need to be able to store and analyze this large volume of data quickly and efficiently. One solution for this use case in the wot.io data service exchange is NGDATA and their Lily big data toolset.
At wot.io, we deliberately define IoT in the broadest sense when we think about data processing solutions. For us, any device in the field generating data is suited to an IoT data solution. These devices can be traditional IoT, like sensors on a tractor, a machine in a factory, or a shipping container, but they can also be set-top boxes delivering media and processing user preferences to create a better user experience. One such stream of data is that compiled by our data exchange partner Critical Mention, where they process media streams in real-time and provide a rich data feed of activity across radio and television broadcasts. Although some may not consider this a typical sensor IoT scenario, this is exactly the type of high-volume data feed wot.io partner solutions are built to handle.
In one implementation, we worked with our data service partner NGDATA to offer a Hadoop- and Solr-based big data service and then routed a sample Critical Mention data stream to it. We were then able to query the live data for a variety of information that users might find interesting, like trending topics, brand mentions, and the times and frequencies select issues are discussed. Other partner services, like those provided by Apstrata, now named scriptr.io, could also be applied to search and process the data from Lily. This video gives an overview of how we did it.
NGDATA's Lily toolset also has a set of user interfaces provided as part of the solution. You can get a feel for those tools below.
The examples in the video are designed and configured for banking, media, and telecom verticals, but you can imagine trending and alerting applied to the Critical Mention data product, or even industrial use cases where trending is monitored for tracked devices, machines, or vehicles out in the field.
This application of existing data services like NGDATA to IoT data streams, with the broadest definition of IoT, is what excites us at wot.io. The broad set of data services in our exchange brings both industry-standard and innovative solutions to IoT projects of all types.
wot.io is an authorized partner with Critical Mention to add value to the Critical Mention broadcast data stream. If you're interested in access to the Critical Mention data stream, please contact us at info@wot.io.
In this post I connect Texas Instruments Sensortags to oneMPOWER™, InterDigital's M2M/IoT device management platform, an implementation of the oneM2M specification developed by the oneM2M standards organization.
wot.io IoT middleware for the connected Enterprise
Here I build on previous work (part 1, part 2) done with TI Sensortags and the Beaglebone Black from beagleboard.org, to demonstrate how easy it is to combine data services in an IoT solution using wot.io.
As you will recall from those previous posts, I used the wot.io data service exchange™ to employ DeviceHive as a device management platform data service from which I routed device notifications through some transformation logic in scriptr; and on to a Nest thermostat integration in bip.io and monitoring & metering stripchart in Circonus.
While DeviceHive is an excellent, open-source option for device management, wot.io data service exchange is about choice and IoT platform interoperability.
Today we're going to demonstrate how to use an alternative device management platform, InterDigital's oneMPOWER™, as a wot.io data service on the wot.io data service exchange middleware. The loose coupling of wot.io's routing architecture and data service adapters keeps everything else working seamlessly, resulting in a powerful, composable IoT/M2M solution. While not demonstrated in this post, both DeviceHive and oneMPOWER could be deployed to work together in the wot.io data service exchange.
oneM2M & oneMPOWER
oneM2M represents an extensive set of entities and protocol bindings, designed to tackle complex device management and connectivity challenges in the M2M and IoT sector. Naturally, a full treatment of how the oneM2M system works is beyond the scope of this article, and I refer you to the oneM2M specifications if you want to learn more. For this demo, you'll want to refer to these in particular:
Additionally, you will soon find public code samples on GitHub: [currently private for review]
One of the tools that InterDigital makes available to oneM2M developers is a client application designed to view the resource hierarchy in a given oneMPOWER system. We'll use it here to visualize state changes as we interact with the oneM2M HTTP bindings. At the top is a reference diagram of oneM2M entities, helpful to have at your fingertips. You can see events as they happen in the console window on top, and at the bottom is the resource viewer. Keep an eye there for resources to appear as they are created.
Note, the tool header lists MN-CSE, for a Middle Node, but we're working with an IN-CSE, an Infrastructure Node. These oneM2M designations are actually very similar—differentiated by their configuration to correspond to their roles in a large-scale oneM2M deployment. Don't worry about it for now, just watch the resource tree!
Application Entity Setup
For this demonstration, we will first create an Application Entity (AE) by hand, in the Common Services Entity (CSE) instantiated by the oneMPOWER server. In a full system, the devices or gateways would not typically be responsible for defining the full resource tree, so here we use curl
commands against the oneMPOWER HTTP REST interface endpoints. The message is sent as XML in the body of an HTTP POST, but per the specs you can use other encodings like JSON, too.
Note that all parts of the calls are important, with critical data represented in the headers, path, and body!
curl -i -X POST -H "X-M2M-RI: xyz1" -H "X-M2M-Origin: http://abc:0000/def" -H "Content-Type: application/vnd.onem2m-res+xml; ty=2" -H "X-M2M-NM: curlCommandApp_00" -d @payloadAe.xml "http://$IPADDRESS:$PORT/$CSE"
HTTP/1.1 201 Created
Content-Type: application/vnd.onem2m-res+xml
X-M2M-RI: xyz1
X-M2M-RSC: 2001
Content-Location: /CSEBase_01/def
Content-Length: 367
<?xml version="1.0"?>
<m2m:ae xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-ae-v1_0_0.xsd" rn="curlCommandApp_00"><ty>2</ty><ri>def</ri><pi>CSEBase_01</pi><ct>20151030T221330</ct><lt>20151030T221330</lt><et>20151103T093330</et><aei>def</aei></m2m:ae>
The body of the POST contains this XML data, including the application ID for the AE:
<?xml version="1.0"?>
<m2m:ae xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-ae-v1_0_0.xsd" rn="ae">
<api>myAppId</api>
<rr>false</rr>
</m2m:ae>
Verifying the AE
Next we'll perform a simple check to make sure that the Application Entity was properly configured in the CSE. We expect to get a reply showing what we configured for the AE, and no errors.
curl -i -X GET -H "X-M2M-RI: xyz1" -H "X-M2M-Origin: http://abc:0000/def" "http://$IPADDRESS:$PORT/$CSE/curlCommandApp_00"
HTTP/1.1 200 Content
Content-Type: application/vnd.onem2m-res+xml
X-M2M-RI: xyz1
X-M2M-RSC: 2000
Content-Length: 399
<?xml version="1.0"?>
<m2m:ae xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-ae-v1_0_0.xsd" rn="curlCommandApp_00"><ty>2</ty><ri>def</ri><pi>CSEBase_01</pi><ct>20151030T221330</ct><lt>20151030T221330</lt><et>20151103T093330</et><api>myAppId</api><aei>def</aei><rr>false</rr></m2m:ae>
And you can see above, the ID myAppId
is there! It worked! We can also see it appear in the resource tree viewer, here shown as the green box labeled "def"
(a "foo"
name drawn from the create call above):
Create a Container
In order to store content in a CSE, you must first create a Container entity. This is just a named bucket into which your content instances will go. Here's the call to set up a container named curlCommandContainer_00
. The XML payload is more or less empty, as the filename payloadContainerEmpty.xml implies, since we are not setting any extended attributes here.
curl -i -X POST -H "X-M2M-RI: xyz2" -H "X-M2M-Origin: http://abc:0000/$CSE/def" -H "Content-Type: application/vnd.onem2m-res+xml; ty=3" -H "X-M2M-NM: curlCommandContainer_00" -d @payloadContainerEmpty.xml "http://$IPADDRESS:$PORT/$CSE/curlCommandApp_00"
HTTP/1.1 201 Created
Content-Type: application/vnd.onem2m-res+xml
X-M2M-RI: xyz2
X-M2M-RSC: 2001
Content-Location: /CSEBase_01/def/cnt_20151030T221435_0
Content-Length: 407
<?xml version="1.0"?>
<m2m:cnt xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cnt-v1_0_0.xsd" rn="curlCommandContainer_00"><ty>3</ty><ri>cnt_20151030T221435_0</ri><pi>def</pi><ct>20151030T221435</ct><lt>20151030T221435</lt><et>20151103T093435</et><st>0</st><cni>0</cni><cbs>0</cbs></m2m:cnt>
And again, the viewer shows our container created successfully, in red. It's labeled by the resource identifier (also returned in the XML response we see above), and not by the resource name that we provided. (If you hover over the block you can verify the extra info is correct.)
Create a Content Instance
Now we're ready to get to the fun stuff, sending actual data from our devices! Before we go over to the device script, we'll run one more test to make sure we can create a Content Instance by hand.
Of note here is that each Content Instance needs a unique identifier. Here you can see its name specified by the request header X-M2M-NM: curlCommandContentInstance_00
. If you run the same command with the same name, it will fail, as the content instance already exists. This makes sure you can't accidentally erase important data.
curl -i -X POST -H "X-M2M-RI: xyz4" -H "X-M2M-Origin: http://abc:0000/$CSE/def/cnt_20151030T221435_0" -H "Content-Type: application/vnd.onem2m-res+xml; ty=4" -H "X-M2M-NM: curlCommandContentInstance_00" -d @payloadContentInstance.xml "http://$IPADDRESS:$PORT/$CSE/curlCommandApp_00/curlCommandContainer_00"
HTTP/1.1 201 Created
Content-Type: application/vnd.onem2m-res+xml
X-M2M-RI: xyz4
X-M2M-RSC: 2001
Content-Location: /CSEBase_01/def/cnt_20151030T221435_0/cin_20151030T221557_1
Content-Length: 417
<?xml version="1.0"?>
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="curlCommandContentInstance_00"><ty>4</ty><ri>cin_20151030T221557_1</ri><pi>cnt_20151030T221435_0</pi><ct>20151030T221557</ct><lt>20151030T221557</lt><et>20151103T093557</et><st>1</st><cs>2</cs></m2m:cin>
This is the content we sent in the body of the request, again as XML. You can see the data field in the con
element, which is the integer 22.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="cin">
<cnf>text/plain:0</cnf>
<con>22</con>
</m2m:cin>
And our content instance appears in the viewer as well, in the orange block:
And you can see the details in a pop-up. Notice the parentID
, and that it matches the container's ID from above. You can also see the data we sent at the bottom, the value 22:
Send Device Data
Running on the BeagleBoard device, we have a small Python script that communicates with the oneM2M HTTP REST interface to send periodic telemetry data to the oneMPOWER instance, and ultimately on to the wot.io bus via the wot.io oneMPOWER adapter. First, the header, where we import some libs and set our configuration: the CSE name, app name, and container name must match what's been configured in the oneMPOWER instance.
#!/usr/bin/env python
import time
import httplib
import os
command_temp = "python ./sensortag.py 78:A5:04:8C:15:71 -Z -n 1"
hostname = "23.253.204.195"
port = 7000
csename = "CSE01"
appname = "curlCommandApp_00"
container = "curlCommandContainer_00"
Next, we set up some simple helper functions to
- read the sensor data from the TI SensorTags connecting to our device via Bluetooth (see previous post for details),
- compose a Content Instance XML message, and
- send it to the HTTP endpoint.
Finally, we loop and sleep to generate time-series data. Simple!
def readsensor(cmd):
    # shell out to the bluepy-based sensortag script and parse the reading
    return float(os.popen(cmd).read())

def onem2m_cin_body(value):
    # wrap the reading in a oneM2M Content Instance XML document
    message = """<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="cin">
<cnf>text/plain:0</cnf>
<con>%s</con>
</m2m:cin>""" % value
    return message

def send(value):
    # POST the Content Instance to the oneMPOWER HTTP endpoint, using
    # the current time to generate a unique content instance name
    body = onem2m_cin_body(value)
    headers = {}
    headers['Content-Length'] = "%d" % len(body)
    headers['Content-Type'] = "application/vnd.onem2m-res+xml; ty=4"
    headers['X-M2M-NM'] = "my_ci_id_%s" % time.time()
    headers['X-M2M-RI'] = "xyz1"
    headers['X-M2M-Origin'] = "http://abc:0000/def"
    path = "/%s/%s/%s" % (csename, appname, container)
    con = httplib.HTTPConnection(hostname, port)
    con.request("POST", path, body, headers)
    res = con.getresponse()
    print res.status, res.reason, res.read()
    con.close()

while True:
    print "Reading sensor\n"
    value = readsensor(command_temp)
    print "Got %f - sending\n" % value
    send(value)
    print "Sleeping...\n"
    time.sleep(30)
And now a quick example of the output as we run the above script. We see it read the SensorTag data as we have done in the past, assemble a content instance message, and send it via HTTP POST. Created content instances appear in the specified container, just as we saw above, and from there the telemetry flows back to the wot.io bus and on to other data services.
root@beaglebone:~# ./main.py
Reading sensor
Got 44.643921 - sending
201 Created <?xml version="1.0"?>
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="my_ci_id_1446511119.68"><ty>4</ty><ri>cin_20151103T003839_9</ri><pi>cnt_20151030T221435_0</pi><ct>20151103T003839</ct><lt>20151103T003839</lt><et>20151106T115839</et><st>9</st><cs>13</cs></m2m:cin>
Sleeping...
oneMPOWER Protocol Analyzer
It's worth noting that there's also a protocol analyzer available in the resource tree viewer, which is handy for debugging communication sequences. You'll see some of our requests represented below:
Ship your IoT Solution with wot.io Data Services
As you will recall from my previous post, we have now done everything necessary to
- use temperature readings originating from TI Sensortags that were
- sent over Bluetooth LE to a BeagleBone Black acting as a
- gateway device being managed by oneMPOWER in order to,
- after first being transformed by business logic housed in scriptr,
- control our Nest thermostat
- from a custom bip.io workflow, and finally to
- set up monitoring and alerting for the whole thing using Circonus
Whew! That's a mouthful! What a relief that wot.io's loosely-coupled architecture supports the DRY principle so that we only had to modify the third bullet. The rest of that complex data flow just continued to work just like before!
From data in motion to data at rest, and with an ever-growing selection of data service partners, wot.io has you covered, including enterprise-ready solutions like oneMPOWER. Ready for more? Head over to wot.io and dig in!
We're pleased to announce that bip.io's new premium plans will boost productivity with new pods and actions, finer control over scheduling, generous bandwidth, and priority support for your integrations.
bip.io is a free hosted platform for the open source bip.io server. If you're running your own bip.io server, you can continue to mount your servers into the hosted platform on the Free plan as you've always done.
As a special thanks to bip.io's supporters for contributing to our success, customers already using premium features have been automatically upgraded to the bip.io Basic plan, free of charge. The next time you log into bip.io, all features for this plan will be automatically unlocked.
In addition to the original Free and Enterprise licensing plans, three upgrade levels have been added for those who want to go the extra step, from power users up to bip.io pros!
Premium Pods
You'll notice that some pods in bip.io have been marked as premium, requiring an upgrade.
Upgrading to any premium plan will automatically unlock all premium Pods, which will be instantly available. Premium users will automatically acquire new Pods as they become available.
Here's the full list as it stands today:
Community Pods
Community Pods are the staple bip.io integrations you might already know and love.
Scheduling
Event triggers on the free plan have always run every 15 minutes, except when they are manually initiated. On a premium plan, you can now schedule triggers to run at any time or timezone, as frequently as every 5 minutes.
Bandwidth
Wow, your bips are busy!! We've taken the average real bandwidth that bips use in each plan monthly and doubled it. If you start regularly exceeding the monthly bandwidth for your plan - Great Job! We'll reach out to you with assistance upgrading.
Thank you!
Like any migration from a historically free platform to one that starts to charge for usage, there's bound to be a lot of concerns and questions. Reach us at hello@bip.io with whatever is on your mind.
Within the wot.io data service exchange (DSE), we often need to interface different web APIs to each other. It is not uncommon for 3rd party software to have limited support for secure connections. While we provide SSL termination at our secure proxy layer, there are many legacy applications which can only communicate via HTTP. You can envision the problem space as follows:
In order to address this connectivity space, we developed a HTTP/HTTPS relay that allows us to receive both types of requests and translate their security context. A request can be received via HTTP or HTTPS and be sent out via HTTP or HTTPS independent of the original request. By adding a simple rewrite rule to our proxy layer, we can specify the desired translation. This makes it easy for our developers to integrate software that lacks support for a security scheme to integrate with one that does.
Another interesting side effect of the relay is that we can use it to transparently proxy both request and response data to the wot.io data service exchange. Both the request data and the response data are made available on the exchange as resources. We can then integrate those data streams into any of the integrated data services.
With the wot relay, we could log all of the requests to a 3rd party web API made through the relay to a search engine like Elasticsearch, so that we can later study changes in usage by geography. At the same time, we could store the raw requests into a database like MongoDB for audit purposes, and use a data explorer like Metabase to create custom dashboards. We could also log the same data to a monitoring and analytics provider like Circonus and monitor the reliability of the 3rd party servers.
From the viewpoint of the HTTP client, the backend service, be it HTTP or HTTPS, is available through the corresponding wot relay address. The developer of that application need not worry about changes in requirements for data retention, metering, monitoring, or analytics. The only change is that instead of calling the desired service directly, they would call it through a relay.wot.io or secure.relay.wot.io address. The only difference between the two addresses is that secure.relay.wot.io only works with HTTPS upstream servers, and will not relay to an HTTP server.
From a security standpoint, the client may connect to both the secure and insecure relay addresses through HTTP or HTTPS. In the case of TLS connections, we guarantee that we will validate the upstream certificates, and will refuse to relay the request should we not be able to verify the x509 certs of the upstream server. In this fashion, your application need only be able to verify the signature of the wot relay service itself, and not worry about validating the signatures of all of the potential upstream services you might wish to interrogate. In this way we preserve the chain of trust, and do not engage in any man-in-the-middle TLS tricks.
Having this layer of indirection, most importantly, opens up the full data set to inspection. Since each request and each response is forwarded to the wot data bus, any number of data services can be used to process both the requests and responses. This allows the application to be extended in new ways without having to rewrite the integration. The relay makes it easy to generate new valuable data streams and insights from your application behavior at the cost of a few milliseconds in additional request latency. As the wot data bus allows for filtering data both based on meta-data and content, it also allows you to expose the data in highly selective fashion to your customers, internal clients, and legal authorities.
Using the wot protocol adapters, we can also expose this data in real time to a wide variety of protocols: HTTP, WebSockets, TCP, UDP, MQTT, CoAP, XMPP, and AMQP. These protocol adapters provide a protocol specific interpretation of the wot data bus resources, complete with authentication and complex authorization rules. For example, you could subscribe to a wot data bus resource as an MQTT topic, or observe it as a CoAP resource. These protocol adapters also support bi-directional communication, allowing a developer to inject their own data back into the wot data bus as a sequence of data grams or as a stream of data. The flexibility of this system allows for easy cross communication between many different types of applications in different contexts.
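As a concrete sketch, subscribing to one of these Data Bus resources with the stock mosquitto MQTT client might look like the following; the host value and the use of the token as the MQTT password are assumptions on my part, with the topic following the account/resource pattern described above:

# hypothetical host and credential scheme
mosquitto_sub -h host -p 1883 -u account -P token -t "account/resource"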
One of the more interesting developments in the massive IoT ecosystem is the quickly growing base install of LoRaWAN devices. As wot.io partners Stream Technologies and MultiTech bring these new technologies to market, we are happy to be able to provide the data services infrastructure to support these new IoT connectivity options. Let's look at some of the options now available.
Networking Options
When people speak of the burgeoning proliferation of connected devices in the IoT ecosystem, one thing that is sometimes overlooked is the implications of which network type the actual devices run on. For many touted use cases, the more common networks are ill-suited for the task:
- WiFi takes up too much power and the range is too limited
- Cellular (3G/LTE) is too expensive per connected unit
- Cellular (CAT-0) is still a few years out.
- Bluetooth LE range is too limited.
LoRaWAN, however, along with other low-power wide-area networks like SigFox, is making strides to fill that void by being all of 1) low power, 2) low cost, and 3) long range. Perfect for any IoT application!
Demo Setup
Let's get started setting up our LoRa network. For this we're using:
- MultiTech Conduit Gateway, with a
- MultiTech MTAC LoRa Gateway Accessory Card installed
- MultiTech mDot modules, mounted on a
- MultiTech Developer Kit
The LoRa mDot uses an ARM processor and is ARM mbed enabled, so developing LoRa solutions using these ARM-powered devices is much faster and more pleasant.
If you're using a USB-Serial cord as I am to connect to the DB9 port, you can use the following to see that the line is connected:
> ls /dev/tty.*
...
/dev/tty.PL2303-00001014
The tty.PL2303-*
listing above confirms that our Serial line is connected to our USB port.
You can also confirm that you are properly connected when the D7
LED is lit up on the MultiTech Developer Kit (UDK).
I'm using CoolTerm to send in the AT commands to the MultiTech LoRa mDot module which we have mounted on a MultiTech UDK.
AT
AT&F
AT+RXO=1
AT&W
AT+FSB=7
AT+NI=1,wotioloranetwork
AT+NK=1,<ENTER_PASSPHRASE>
AT&W
ATZ
AT+JOIN
AT+ACK=1
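Roughly, these commands reset the module to factory defaults (AT&F), set the receive output format (AT+RXO=1), select frequency sub-band 7 (AT+FSB=7), set the network name and passphrase from which the network ID and key are derived (AT+NI, AT+NK), save the settings (AT&W), reset the module (ATZ), join the LoRa network (AT+JOIN), and require acknowledgements (AT+ACK=1). Consult the mDot AT command reference for the full details.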
After that's confirmed, we simply drag&drop our compiled binary from the mbed online editor onto the device, and it flashes, connects, and starts sending data automatically!
We can now hop over to our MultiTech Conduit and use the Node-RED interface to see that data is flowing from our LoRa mDot into the Conduit. So let's take that data and pipe it into the wot.io Operating Environment.
From there, that LoRaWAN data can be combined with other data sources and is easily fed into a wide range of data services to truly unlock its value.
You can check out the full module source code over at the ARM mbed website. And check out other posts in the wot.io labs blog and our main website for information on data services for your IoT application.
The Internet of Things (IoT) and the amount of data from connected devices are in the early stages of tremendous growth over the next few years. A recent report from McKinsey estimates its potential economic impact could be up to $11.1 trillion by 2025. The impact of this projected growth is already making its way into the operations of many enterprises. While this number is staggering in its implication, enterprises have a lot of work ahead to create value from IoT systems and the resulting wave of IoT system data. How many different connected devices or IoT systems are in your home now? Now think about a mature enterprise. The McKinsey report states “interoperability between IoT systems is critical. Of the total potential economic value the IoT enables, interoperability is required for 40 percent on average and for nearly 60 percent in some settings.” While interoperability is what unlocks the maximum value from IoT applications, are enterprises really ready for IoT data from one or more IoT systems?
Some evidence would suggest not. In one use case brought to light by McKinsey, up to 99 percent of connected device data is not currently being used beyond simple operational awareness such as anomaly detection. Part of this problem can be attributed to closed IoT systems that don’t allow for interoperability between the local and cloud based IoT systems and the data service providers that can create actionable results for the Enterprise. Another part of the problem is caused by not having a solid solution for Big Data aggregation combined with a good Enterprise Application Integration strategy.
Here are a couple of questions enterprises need to take into consideration in order to succeed when deploying IoT platforms:
- How flexible is the enterprise in terms of working with multiple IoT systems providers and data services in an interoperable environment?
- Does the enterprise have access to Enterprise Application Integration (EAI) and Integration Platform-as-a-Service (iPaaS) solutions?
It’s fairly straightforward to connect device data from one IoT system to one data service provider for analysis and reporting, but the challenge comes in aggregating data from multiple IoT systems to be processed by multiple best-in-class data service providers to get the most out of your data. This is where the need for interoperability becomes very important. It’s difficult to scale your solution to its maximum potential when limited by closed systems or locked data.
Beyond the technical prowess required to make IoT solutions work together, enterprises that once tried to consolidate their systems with one all-encompassing vendor are now embracing the interoperability of many specialty vendors to provide the best operational efficiency and accelerated deliverables. Before IoT systems, many successful enterprises were already utilizing a mix of on-premise EAI platforms and cloud-based iPaaS solutions. Major vendors offering EAI and cloud-based iPaaS solutions have started to think about the integration of connected device data from multiple IoT and Machine-to-Machine (M2M) systems, but have yet to complete the solution. If your enterprise wants to become a part of the IoT landscape, you need to have good answers for how you’re going to integrate multiple IoT platforms and create actionable results from IoT data.
To learn more, visit wot.io.
This past week, I managed to get some free time to dig into ARM mbed OS, and get a simple application talking to wot.io. The most obvious way to do this was to use ARM's own mbed-client interface to communicate with an instance of the mbed Device Server. As mbed Device Server (MDS) is already integrated into wot.io's Data Service Exchange, as soon as I had the mbed-client code integrated into an application, I should be able to get my data into an existing workflow.
Looking at the supported board options, and what I had laying on my desk already, I decided to use the Freescale FRDM-k64f, since its Ethernet interface has a supported lwip driver. The board comes prepopulated with an FXOS8700CQ accelerometer and magnetometer, which has a very similar interface to the Freescale MMA8452Q that I've used in a few of my projects. While there is no obvious direct mbed OS support for the accelerometer in the FRDM-k64f hardware abstraction layer, we should be able to use the HAL-supported I2C interface to talk to it. ARM already provided an FXOS8700Q sensor driver under the Apache 2 license that can be easily ported to mbed OS.
Reading over the mbed OS and yotta documentation, I managed to set up both a local development environment on my Mac and a home lab development environment using a Docker container. The Docker container makes it easy to move my development environment from machine to machine, such as from home lab to office lab and back again.
Creating a new project using yotta is straightforward using the text-based wizard:
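# a sketch of the typical invocation; the wizard then prompts for
# the module name, version, and other metadata
mkdir accel && cd accel
yotta init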
Here I've created a project called accel, and yotta has generated a skeleton module.json file, which describes to yotta how to manage the dependencies for our application. To install our own dependency, in this case ARM's mbed-client library, we can use yotta install:
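yotta install mbed-client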
At the time of this writing, this will modify the module.json file to include our new dependency.
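Here is a sketch of the result; the exact fields yotta generates may differ:

{
  "name": "accel",
  "version": "0.0.0",
  "license": "Apache-2.0",
  "bin": "./source",
  "dependencies": {
    "mbed-client": "^1.1.15"
  }
}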
If you were to do this at a later date, the version string saying ^1.1.15 will probably be different. ARM mbed OS is undergoing rapid development, and the file I generated just last week was ^1.1.11, almost a patch a day! This rapid development can be seen in all aspects of the system. On any given day, the yotta build command which actually compiles the binary application, will return different deprecation warnings, and occasionally entire libraries are in an unusable state. Generally the optimized builds will compile cleanly, but I have had problems with yotta build --debug-build failing to compile due to register coloring issues. That said, as ARM mbed OS leaves beta, I expect these issues will be resolved.
To set up the development environment for the Freescale FRDM-k64f, it is also necessary to select the appropriate target:
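yotta target frdm-k64f-gcc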
This configures our build environment to build for the FRDM-k64f using arm-none-eabi-gcc. A complete list of available target architectures can be obtained using the yotta search target command:
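# optionally append a search term to filter the list
yotta search target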
Switching which C++ compiler you use requires switching the full target. Switching targets will fetch the associated target dependencies as well, and as such it is important to build after you've selected your target. With the target set to frdm-k64f-gcc, we can build the source code for this project.
The behavior is as follows:
- initialize i2c
- initialize FXOS8700QAccelerometer
- initialize Ethernet adapter
- acquire a DHCP lease
- initialize an ipv4 tcp/ip stack
- pick a random port and
- create a M2MInterface object to connect to MDS
- create a M2MSecurity context for our MDS connection
- create a M2MDevice object to create the device's OMNA LWM2M resources
- create a M2MObject that will actually represent our device sensor tree
- create two M2MResources for our x and y axis (not following the OMNA LWM2M conventions)
- add our M2MObject and M2MDevice to a list to be used to update registrations
- setup a timer to update the registration every 20 seconds
- enable the accelerometer
- setup a timer to sample the accelerometer every 5 seconds
- setup a callback to setup the device and resource registrations
- start the main scheduler loop
To build the application, we can simply use the yotta build command to generate an accel.bin file in build/frdm-k64f-gcc/source/. This is the build artifact that is our application binary. The boot loader on the FRDM-k64f board knows how to load this file on reset. We can install this application by copying the accel.bin file to the USB storage device (on my Mac /Volumes/MBED).
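# copy the build artifact to the mounted mbed USB device
cp build/frdm-k64f-gcc/source/accel.bin /Volumes/MBED/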
Once the application binary is installed, the device registers itself with mbed Device Server (MDS) running on our demo cluster. The data is then available to all of the services which request device notifications. The routing layer inside of the wot.io data service exchange ensures that only those users who have the rights to see the data from a given device can see it.
As the wot.io data service exchange supports multiple protocols, we can use an off the shelf command line client to read the data from the data service exchange. Just to quickly test getting this into a script, we can use wscat, a node module which speaks the RFC6455 WebSocket protocol:
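# placeholder token, host, port, account, and resource values; incoming
# readings are printed to the console as they arrive
wscat -c "ws://token@host:port/account/accel"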
The Internet of Things (IoT) is a hot industry, poised for substantial growth. In fact, International Data Corporation (IDC) projects the industry will grow from $655.8 billion in 2014 to $1.7 trillion in 2020. That’s over $1 trillion in growth in just six years! Because IoT is such a hot topic in the tech industry, Tech in Motion in NYC hosted a meetup event on Tuesday, October 13th 2015, sponsored by Grind + Verizon, to discuss all things IoT. wot.io Founder and CEO Tom Gilley was invited to speak along with key speakers from other IoT companies in New York City.
The speakers talked about the IoT problems their respective businesses are solving, as well as their perspective of the direction IoT is moving in. From sensors and wireless connectivity becoming common in household products, to triggering automatic ordering of household products when they get low, to the numerous types of wearable devices companies are working to create, it’s easy to see why IoT is getting so much attention. Dash Founder & CEO Jamyn Edis says “IoT is clearly a macro trend that is engaging professionals in the tech space. The question is, when does a broad trend become a real, day-to-day, revenue-driving opportunity for companies and people working at them? We are not yet there.”
When asked about the most pressing issue in IoT right now, Ted Ullrich, Founder of Tomorrow Lab, said “On the commercial product side, there is an open pasture for creating new devices, brands, and services. Wireless protocols like WiFi, Bluetooth, and Cellular are sufficient for these products for now, but planning an infrastructure for the future of 30+ billion things on the internet is a different story.” Quite right he is.
Since there are no dominant IoT standards, many companies have a mix of internal, closed IoT platforms with plans to adopt new platforms like ThingWorx. ThingWorx is a well-known brand, and is good at what it does. The organizational result is a mix of IoT platforms that have interoperability issues. The McKinsey Institute recently stated that “Interoperability is critical to maximizing the value of the Internet of Things. On average, 40 percent of the total value that can be unlocked requires different IoT systems to work together.” This is the big-picture reason why wot.io exists: to create an interoperability environment where device management platforms can seamlessly and securely connect with enterprise applications for data analysis, reporting, automation and much more.
It’s no surprise to us this event was a success, with a packed room and over 500 people RSVP’d to attend. The audience was engaged, enthusiastic and asked plenty of questions, both during the event and after the talks were over. Where do you see the IoT industry heading? Leave your comments below.
*McKinsey Institute quote from the June 2015 report, “Unlocking the full potential of the Internet of Things”
Hardware Recap
In Part One of this demo, we took two Texas Instruments Sensortags, connected them using Bluetooth LE to a Beaglebone Black, ran a Node.js gateway to connect to DeviceHive¹, and saw it all work. This is the diagram of our hardware setup, as completed at that point:
Fantastic! Our hardware works. Now we are going to hook some data services up using the wot.io data service exchange™, and do some fun stuff with it.
Data Services
Now let's expand it to include everything we'll do with the data services. We are going to use scriptr, Circonus, bip.io, and a Nest thermostat. Here's the plan:
- Send the data from DeviceHive to scriptr for processing
- Using scriptr, massage our data, and make some logs
- Send the data from scriptr to Circonus for graphing
- Send the data from scriptr to bip.io for alerting and control of the Nest thermostat
Message Flow Graph
Below is a diagram of the message flow. All the green lines are implemented using the wot.io data service exchange™ (which I also call the bus), connecting data service sources to data service sinks.
You'll notice that some of the scripts, bips, and graphs are named temperature, and others are named color. I have a confession - to save time, I just stuck with the default setup that comes out of the box with wot.io's Ship IoT initiative which converts temperature units and maps them onto the color spectrum for use with some Philips Hue bulbs like we saw in an earlier post. I just figured that since wot.io has so many data services, and I have so little time, why not just re-use what was already done? So, let's just agree to ignore the fact that scripts named color might no longer have anything to do with color. Maybe we're just coloring our data. Ok? Onward!
Scriptr
Our data's first stop after leaving DeviceHive is scriptr, so we'll start there. The scriptr.io data service offers a very fast way to create custom back-end APIs to process your data in the cloud using JavaScript. This enables fast productivity and power for your Internet of Things (IoT) and other projects, ever more so when tied to other data services via wot.io. All the messages come into a script called transform
, as defined by the wot.io bus configuration.
scriptr: transform
The first task we perform on our message stream is a data normalization step. You'd expect to see something like this in most real-world applications—a layer of abstraction that transforms incoming messages to a unified format for subsequent services to consume. This script will massage the incoming messages into this simple JSON structure, and remove bits that may no longer be relevant now that we are outside of the local network that the originating devices were using:
[ device_id, { key:value, ... } ]
for keys:
key: "name" | "value" | "units"
for values:
name: "temperature" | "humidity"
value: a floating-point number
units: "C" | "F" | "%RH"
For example, from this input message,
{"action":"notification/insert","deviceGuid":"ca11ab1e-c0de-b007-ab1e-de71ce10ad01","notification":{"id":1558072464,"notification":"temperature","deviceGuid":"ca11ab1e-c0de-b007-ab1e-de71ce10ad01","timestamp":"2015-10-15T20:28:33.266","parameters":{"name":"temperature","value":23.7104174805,"units":"C"}},"subscriptionId":"00000000-6410-4e1a-b729-000000000000"}
...we get this output message:
["ca11ab1e-c0de-b007-ab1e-de71ce10ad00",{"name":"temperature","value":23.7104174805,"units":"C"}]
Now we are ready to sink these normalized messages back onto the bus for further processing by other data services.
As the message flow graph above illustrates, messages from transform
will use the bus to fan out and sink into convert
and color
in scriptr, and also into bip.io and Circonus.
Here's our full transform code:
// Convert DeviceHive Notification to well known format of [<devicehive deviceId>, <devicehive parameters>]
var log = require("log"),
    data = JSON.parse(request.rawBody).data,
    payload = data && data[0];

log.setLevel("DEBUG");
log.debug("testraw: " + JSON.stringify(data[0]));

if (payload && payload["deviceGuid"] && payload["notification"]["parameters"]) {
    response = JSON.stringify([payload["deviceGuid"], payload["notification"]["parameters"]]);
    log.debug("response: " + response);
    return response;
}

log.debug("Invalid Request: " + JSON.stringify(payload));
scriptr: convert
This is a utility set up to demonstrate data transformation and message decoration. We take messages from the incoming data source, parse out the type and units, and create a new data structure with additional information based on the incoming message. This data source will be sent in a message to whatever sink is configured.
A more complex implementation could take incoming data, perform lookups against a database, add semantic analysis, analyze for part-of-speech tagging, or do any number of other things. Complex message graphs composed of small, well-defined services let us build up behaviours from simple parts—much like the Unix philosophy when it comes to small command-line tools.
In this case, we convert Celsius to Fahrenheit, or Fahrenheit to Celsius, depending on what the incoming format is, and put both values into the resulting message. For humidity we simply pass along the value and label it as rh
for relative humidity.
switch (units) {
    case "c":
        // The incoming reading is in Celsius. Convert to Fahrenheit
        response.tf = temp && (temp * 9 / 5 + 32).toFixed(1) || "N/A";
        response.tc = temp && temp.toFixed(1) || "N/A";
        break;
    case "f":
        // The incoming reading is in Fahrenheit. Convert to Celsius
        response.tf = temp && temp.toFixed(1) || "N/A";
        response.tc = temp && ((temp - 32) * 5 / 9).toFixed(1) || "N/A";
        break;
    default:
        response.error = "unknown units";
}
These demonstration messages currently sink into Scriptr's logs, and can be used in future systems. Here's the result of a temperature message, and we can see the incoming ºC data was converted to ºF and logged:
Scriptr: color
Once again, this script was originally meant to control a Philips Hue lamp, but we've co-opted it to send data along to bip.io and control our furnace. (I've left in the color calculations if you're curious). It would be trivial to expand the message graph in bip.io to do the lamp control, I just didn't have the time to set it up. Aren't I quite the model of efficiency today?
// Unpack the parameters passed in
var log = require("log"),
    timestamp = request.parameters["apsws.time"],
    data = JSON.parse(request.rawBody).data,
    reading = data && data[0],
    deviceId = reading && reading[0];

if (reading && (reading[1] instanceof Object) && reading[1].name == "humidity") {
    // we just drop humidity messages here, as this is intended to control
    // the thermostat settings later on and nothing else at this time.
    return null;
}

var celsius = reading && (reading[1] instanceof Object) && reading[1].value;

// Convert temperature in range of 0C to 30C to visible light in nm
// 440-485 blue, 485-500 cyan, 500-565 green, 565-590 yellow, 590-625 orange, 625-740 red
// 300nm range, 30C range
var temperature = celsius < 0 ? 0 : (celsius > 30 ? 30 : celsius),
    color = 440 + (300 * (temperature / 30));

// Populate response values or default to non-value
var response = {
    time: timestamp,
    temperature: celsius.toFixed(2),
    color: parseFloat(color.toFixed(2)),
    device: deviceId || "N/A"
};

log.setLevel("DEBUG");
log.debug("response: " + JSON.stringify(response));

return JSON.stringify(response);
Circonus Graphs
Circonus is designed to collect your data into graphs, dashboards, analytics, and alerts. While it is often used for DevOps or IT Operations style monitoring, we're showcasing how well it serves as a key component of an IoT solution. Today, we'll simply use it to graph our timeseries messages and send ourselves an alert if the data stops flowing. This could indicate a problem with the battery in the Sensortag, or that we are out of range. Use your imagination, the sky is the limit, and Circonus has a powerful feature set.
Checks
You can see the four device IDs here, and the checks that were set up as part of this demonstration message flow.
Graphs
As the metrics are collected, Circonus tracks them and can create graphs and dashboards for you. There's only a bit of data shown in the graph here because I've only had it running for a few minutes.
There are some powerful analytics tools and alerts at your fingertips here. It's hard to show with the small amount of data, but you can use anomaly detection, trend prediction, and many other functions on your data. This is a simple sliding window moving average, which we could use to smooth out spurious temperature readings and prevent the furnace from turning on needlessly.
Alerts
Circonus makes it simple to notify you with an alert if the data stops flowing. This is essential for mission-critical systems.
bip.io Workflow
We've covered the details of creating bip.io workflows elsewhere, and many of the details like endpoints, auth tokens, etc. are already taken care of for us automatically by the wot.io integrations and tooling.
Here in the dashboard we can see the four bips that are referenced in the above message flow graph. Each has the device ID embedded into the name, and the endpoint.
We'll have a look at two of them, both for the sensor ID ending in 00 (which is from the device MAC ending in :70, way back up the chain!). First, the alert.
alert bip
Here we see the overall message flow inside the alert bip. Incoming messages from the wot.io bus are processed by a math expression, a truthyness check, and if it all passes the criteria, an email alert is sent.
Here's the expression. It's basic; we are simply checking if the temperature is too low, which could indicate some problem with the heating system:
The truthy check looks at the result of the previous Calculate expression, and will trigger the following node in the graph if it's true:
And finally, we send an email alert, going to an address we specify, with the data embedded in it via template replacements:
Simple!
Nest Temp Control bip
Now we have a simple bip set up to take the incoming temperature message, calculate an error factor, and generate an offset temperature setting for the Nest thermostat. Unfortunately, Nest doesn't have an API call that lets us send the sensor temperature in directly. Granted, that's an odd use case, but they probably haven't heard of this cool idea yet ;)
With the lack of the sensor API, we need to get creative. We'll take the value from the sensortag, and calculate an error offset:
(desired_temp - sensed_temp)
Then we'll combine the error offset with the desired temperature:
(desired_temp - sensed_temp) + desired_temp
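For example, if the desired temperature is 20ºC but the remote sensor reads 18ºC, the error offset is 2ºC, and we send the Nest a set point of (20 - 18) + 20 = 22ºC. The Nest then heats until its own sensor reads 22ºC, which, given the 2ºC offset between the rooms, is when the room with our remote sensor should reach the desired 20ºC.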
Here it is in the Math function in the bip, with a set point of 20ºC:
This will give us a new set point for the Nest, and we send it along in the bip as pictured above. This is a basic setup, and you would want to refine this for long-term use. I'd suggest adding hysteresis to prevent the furnace from turning on and off too rapidly when close to the set point, and calibrate yourself a PID control loop to smooth things out.
Wrap-Up
This concludes our writeup of what turned out to be a rather complex message flow graph. We started with a local network of devices, built a hardware and software gateway to get those devices out to a device management platform, connected that to the wot.io bus, and wired up some powerful tools whose depths we have only started to plumb.
Yet even with all the complexity and details that we covered, you can see how simple it is to compose behaviors using the wot.io data service exchange™. And that is the whole point: to get us quickly to a working system. And since it's based on a fully scalable architecture, your solution is ready to grow with you from prototype into production.
In other words, you can focus on Shipping your IoT!
See you next time!
Link to Part One of this series
1. DeviceHive, like wot.io, is a member of the AllSeen Alliance. ↩
At wot.io we're always working on interesting new things and as you can see here, we like to blog about them. With everything going on, people were asking for an easy way to be notified when new blog posts go up. The answer was to use one of our data services, bip.io, to watch the blog RSS feed and send email when a new post goes up. In this post, we'll explain how we did it and how you can use bip.io to set up a similar notification system for any blog that has an RSS feed.
What will you need?
1. Link to RSS feed of blog (usually blogurl.com/rss)
2. Free bip.io account
3. Free or premium Mailgun Account. (You can also use Mandrill)
Step 1: Sign up on bip.io (It's free!) or sign in if you already have an account.
Step 2: In this step, we'll create a new bip and add Syndication (or RSS) feed as an Event.
Click on Create a bip
Proceed to Select Event Source
Find Syndication in the list of available pods
Step 3: In this step we will configure Syndication Pod to 'listen' to the RSS feed.
Click on Subscribe To A Feed
In this example, we'll subscribe to a labs.wot.io feed, but the process is the same for most syndication feeds. Enter a Feed name and Feed URL
Click OK
Step 4: Add an Email Notification.
Click Add an Action
Select "Mailgun" from the available pods
You will be asked to authenticate the API Key from Mailgun if you are using it first time on bip.io. The API Key can be found on your Mailgun account by going to Mailgun Dashboard --> Click on Domain Name (Tutorial)
Choose the "Send an Email" action
Connect the incoming RSS feed with Mailgun Pod by dragging your mouse pointer from Syndication pod to Email pod. It will look like this
Step 5: Configuring Email
Double-click on Mailgun pod to open the configuration window
Enter details like From, Mailgun Domain Name and recipient address.
Next, configure the subject of email. bip.io offers various attributes to include in the text like Post Title, Summary, Author, Date etc.
Post Author is selected by default on bip.io.
Here's what my subject and email text look like:
The email body can hold HTML formatting and attributes.
Here, I have added attributes Title, Article Summary and Link. They can be separated by <br />
tags to add line breaks in the email.
Click OK.
We're all set! Go ahead and click Save to save your bip.
Now that it's running, here's how I see email notifications in my Gmail inbox
More Pods and Actions
This is a simple bip, but it handles the complexity of parsing the incoming feed, making it easy for you to format the outgoing email message. Plus it handles all of the details of communicating with the Mailgun API. And there are many more things you can do with bip.io like adding some functions to watch for certain things in incoming messages, modifying the content before you send your email, or sending email to different people depending on the content. You can also add many more notification types including sending text messages (Twilio), posting to Slack, or even creating your own curated RSS feed.
Winter is Coming
As I write this, it is mid-October. For those of us who live in the northern lands, our thoughts are turning towards turning up the thermostats to fight off the chill.
Trouble is, some of these old houses aren't very well insulated, and they are filled with warm spots and cold spots. Thermostats regulate temperature based on the sensor inside them, and they are pretty good at it. But I was thinking, what sense does it make to have the thermostat regulate its own temperature, making it nice and comfortable where the thermostat sensor is? I want it nice and comfortable where I am, not where the thermostat is! And I move around. And I move around into cold spots in these old houses. I may be in the kitchen, or basement, or spare room, or sleeping. This needs to be fixed.
Tech to the Rescue!
Texas Instruments offers Sensortags, development kits for their CC2650 ultra low-power Bluetooth MCU chips. Combined with a BeagleBone Black (powered by an ARM Cortex-A8 CPU at 1GHz, the Texas Instruments Sitara AM3358BZCZ100), they will be the basis for our solution here.
We're going to connect these using DeviceHive - an open source device management platform developed by DataArt Solutions, and available as a data service on the wot.io data service exchange™. DeviceHive helps you manage devices after they are deployed—handling registration, commands sent to and data received from the device, and a number of other useful tools. From DeviceHive, we'll use the wot.io data service exchange to integrate with additional data services from our partner exchange and finish our system.
Prerequisites
If you want to follow along with this demo, here's what you'll need:
- One BeagleBone Black
- A USB Bluetooth 4.0 Low Energy dongle that works with Linux
- TI CC2541 SensorTag Development Kit (now superseded by the newer CC2650, but very similar)
- A USB to Serial-TTL cable that runs at 3.3V (Not 5!), like one from Adafruit, or from Digi-Key
- Our gateway demo code: https://github.com/wotio/shipiot-ti-sensortag-beaglebone-devicehive
- The bluepy Python library by Ian Harvey
- The DeviceHive Javascript Library
Hardware Setup
There are a number of ways you can connect to your Beaglebone Black for development. I hooked up a 3.3V USB to Serial adapter (remember, not 5V!) to the console port, an Ethernet connection for outbound network access and SSHing in, and USB for power. I've got a little USB power board with a switch on it from Digispark, which is very handy when doing USB-powered hardware stuff, and will save your USB ports some wear and tear. The Bluetooth dongle is connected to the Beaglebone Black's USB port.
Note: You'll need to have the Bluetooth dongle plugged in before you boot or it may not work. There are known issues with hot-plugging Bluetooth dongles on this board.
Here's what my setup looked like:
System Architecture
Here is the block diagram for what I'm planning, and the message flow. To keep things simple for now, we'll be talking about the data services in Part Two of this blog entry.
On the left, we have the two TI Sensortag boards. On each board we have the SHT21 sensor and the TI CC2541 (or CC2650) controller. The SHT21 communicates via I2C. The CC2541 MCU speaks Bluetooth, and communicates with the bluepy utilities on the Beaglebone Black. The Node.js gateway application uses those bluepy utils to poll the Sensortags.
In turn, the gateway uses Web Sockets to talk to the DeviceHive service on the wot.io data service exchange. It first handles device registration, which will tell the DeviceHive service there is a new device if it has not seen it before. Once registered, it creates a persistent Web Socket connection, periodically sends sensor readings, and listens for commands from DeviceHive. (We aren't using any of those commands for this demo, but it's easy to do using the wot.io DeviceHive adapter. It's especially powerful when those commands are generated dynamically by custom logic in a wot.io data service, say, in response to sensor readings from some other device!)
The wot.io DeviceHive adapter can then subscribe to receive the device notification data that the gateway sends to DeviceHive. We've configured wot.io to route those device messages to a number of interesting data services. But details on that will have to wait for Part Two!
Bluetoothing
First we need to talk to the Bluetooth Sensortags. I used the bluepy Python library by Ian Harvey for this, as it includes a sample module that's already set up to speak Sensortag. Great time-saver!
If you are using the current CC2650 Sensortags, it should work out of the box. If instead you have the older CC2541 Sensortags, you should comment out these lines in bluepy/sensortag.py:
262 #if arg.keypress or arg.all:
263 # tag.keypress.enable()
264 # tag.setDelegate(KeypressDelegate())
Those lines support features that aren't present on the older Sensortag, and the script will error on initialization if they are left in.
Some tools you should know about for working with Bluetooth include hciconfig, hcitool, and gatttool.
You can bring your Bluetooth interface up and down just like you would with Ethernet, perform scans, and more:
root@beaglebone:~# hciconfig
hci0: Type: BR/EDR Bus: USB
BD Address: 5C:F3:70:68:C0:B8 ACL MTU: 1021:8 SCO MTU: 64:1
DOWN
RX bytes:1351 acl:0 sco:0 events:60 errors:0
TX bytes:1333 acl:0 sco:0 commands:60 errors:0
root@beaglebone:~# hciconfig hci0 up
root@beaglebone:~# hciconfig
hci0: Type: BR/EDR Bus: USB
BD Address: 5C:F3:70:68:C0:B8 ACL MTU: 1021:8 SCO MTU: 64:1
UP RUNNING PSCAN
RX bytes:2201 acl:0 sco:0 events:97 errors:0
TX bytes:2022 acl:0 sco:0 commands:97 errors:0
We need to get the Bluetooth MAC address of the Sensortags we are using. From the console on the Beaglebone Black, we will use the hcitool utility to get it. Here's the procedure; do it one at a time for each tag if you have multiple:
- Insert a fresh CR2032 battery into the Sensortag.
- Press the side button to initiate pairing; the LED will begin blinking rapidly (and dimly!)
- On the Beaglebone console, initiate a Bluetooth LE scan:
root@beaglebone:~# hcitool lescan
LE Scan ...
78:A5:04:8C:15:70 (unknown)
78:A5:04:8C:15:70 SensorTag
Once you get your tag's Bluetooth MAC address, you can hit control-C to cancel the scan.
Once you know the MAC address, you can use the tools included in bluepy to easily talk to the tag. Try it out like this (first making sure the LED is blinking on the tag by pressing the switch, in case it went to sleep):
root@beaglebone:~# ./sensortag.py 78:A5:04:8C:15:71 --all
Connecting to 78:A5:04:8C:15:71
('Temp: ', (24.28125, 20.36246974406589))
('Humidity: ', (24.686401367187493, 32.81072998046875))
('Barometer: ', (23.440818905830383, 979.6583064891607))
('Accelerometer: ', (-0.03125, 0.015625, 1.015625))
('Magnetometer: ', (-12.847900390625, 36.224365234375, 166.412353515625))
('Gyroscope: ', (-3.0059814453125, 3.082275390625, -0.98419189453125))
The sensor on the tag we'll be using for this demo is the SHT21 temperature and humidity sensor. We will use temperature to start, but we could easily expand our algorithms to take humidity into account, and adjust the heat accordingly. There are tons of other applications possible here, too!
Note also that I further modified the sensortag.py script to give us raw numerical output for the temperature and humidity, separately, using the -Y and -Z flags. This made subsequent code simpler.
DeviceHive Gateway
DeviceHive lets devices register themselves with a DeviceHive server instance, and then send along data. There are mechanisms in DeviceHive to send data and commands back to the devices as well, which could be used to update firmware, or take actions based on processing in one or more data service providers.
Our GitHub repo contains a device gateway coded in Node.js, using DeviceHive's JavaScript libraries and a WebSocket connection.
Demonstration
Here's a quick walk-through of the hardware setup, gateway code, and a demonstration of the Sensortags sending data through to the DeviceHive device management platform:
To Be Continued
That wraps up part one of this demo, which covers the hardware and device management setup. In the next installment, we'll look at the data services, and cook up the magic that will control our thermostat. Stay tuned!
UPDATE: Part Two is now published.
Monitoring is a vital tool when developing, optimizing and understanding the health of your application services and infrastructure. wot.io has several data monitoring services in our data service exchange™ and we deploy and use a few of these as part of our own monitoring system. In this blog we're outlining how we use these monitoring services with a tour of our virtual machines, message bus and third party data services in our data service exchange.
Our monitoring setup can be broken down into three basic parts:
- automated deployment
- historical metric collection
- host checks and alerting
Automated deployment
We use the power of docker and wot.io's configuration service for automated service deployment. Each newly deployed virtual machine (VM) automatically spins up with a default set of monitoring client containers.
- collectd: host metric collection
- dockerstats: container metric collection
- sensu-client: runs host, container and application checks
- rsyslog-forwarder: forwards logs to a remote server (not covered here)
Historical Metrics
We use a Graphite server fronted by Tessera dashboards to collect and view our historical metrics. By default, we collect metrics for each host (collectd) and all of its running containers (dockerstats). Both containers send metrics to a Graphite server, which Tessera queries to populate its dashboards.
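There is no magic to those metrics: Graphite's plaintext protocol is just one metric path, value, and Unix timestamp per line over TCP (port 2003 by default). A quick Node.js sketch, with a hypothetical host and metric name:
var net = require('net');

// One sample in Graphite's plaintext protocol: "<path> <value> <timestamp>\n"
var sock = net.connect(2003, 'graphite.example.wot.io', function () {
  var line = 'vm42.cpu.load.shortterm 0.17 ' + Math.floor(Date.now() / 1000) + '\n';
  sock.end(line);
});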
Let's take a look at the default dashboards that are generated when we provision a new VM. This is accomplished by posting a JSON dashboard definition to a Tessera host.
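For illustration, here is roughly what that provisioning step looks like as a Node.js sketch. The /api/dashboard endpoint and the body fields are assumptions based on Tessera's API documentation; check your Tessera version for the exact schema:
var http = require('http');

// A skeletal dashboard definition; real ones carry queries and presentations
var dashboard = JSON.stringify({
  title: 'vm42 host metrics',      // hypothetical dashboard title
  category: 'defaults',
  definition: { items: [] }
});

var req = http.request({
  host: 'tessera.example.wot.io',  // hypothetical Tessera host
  path: '/api/dashboard',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(dashboard)
  }
}, function (res) {
  console.log('dashboard created, status', res.statusCode);
});

req.end(dashboard);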
Default tessera dashboards
Tessera and collectd in action
Checks and alerts
The final piece of our monitoring system is Sensu. Sensu is written in Ruby, backed by RabbitMQ, and uses Nagios-style checks to alert us when bad things happen, or in some cases when bad things are about to happen. By default, sensu-client gives us a basic keepalive. We have added our own checks to notify us when other, more specific problems arise.
wot.io checks:
- container checks: verifies that all the containers that are configured to run on that host are indeed running
- host checks: lets us know if we are running over 90% usage on cpu, memory or disk
- application checks: sensu-client will run all checks placed in the /checks dir of any container running on that host
We use the four standard Nagios levels (a sample check follows the list):
- ok: exit code 0
- warning: exit code 1
- critical: exit code 2
- unknown: exit code 3
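Here is what such a check might look like, sketched in Node.js for illustration (our production checks differ; the df invocation and the thresholds are just examples of the exit-code convention):
#!/usr/bin/env node
// Nagios-style disk check: the exit code carries the status
var exec = require('child_process').exec;

exec("df / --output=pcent | tail -1", function (err, stdout) {
  if (err) { console.log('DISK UNKNOWN'); process.exit(3); }
  var used = parseInt(stdout, 10);   // e.g. " 42%" -> 42
  if (used > 90) { console.log('DISK CRITICAL: ' + used + '%'); process.exit(2); }
  if (used > 80) { console.log('DISK WARNING: ' + used + '%'); process.exit(1); }
  console.log('DISK OK: ' + used + '%');
  process.exit(0);
});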
Ok, warning, and unknown alerts are sent as emails and Slack posts. We reserve critical alerts for big things like containers not running and host has stopped sending keepalives. Critical alerts go straight to PagerDuty and our on-call team.
Example sensu container check
Example sensu application check
As described above, we use these tools to monitor and collect data on our systems, and we also make them available to customers if they have additional needs for these data services. The integration into our deployment system automatically launches the appropriate agents, which is essential when we deploy a large number of services at once, like we did for the LiveWorx Hackathon.
In my last blog post, I discussed a sample architecture for an IoT application:
wherein the data passes through a series of successive stages (sketched in code after the list):
- Acquisition - receiving data from the sensor farm
- Enhancement - augmenting data in motion with data at rest
- Analysis - applying machine learning and statistics to the data
- Filtering - removing non-actionable data and noise
- Transformation - converting it into an actionable format
- Distribution - delivering to the end user or application
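As a schematic sketch (plain JavaScript, not any particular wot.io API), the pipeline is just a chain of transforms over a message, where returning null drops the message:
// Stand-ins for real data services
function lookupSite(id) { return 'site-' + id; }          // enhancement source
function scoreAnomaly(m) { return m.value > 30 ? 1 : 0; } // analysis model

var stages = [
  function acquire(m)    { return m; },                        // from devices
  function enhance(m)    { m.site = lookupSite(m.deviceId); return m; },
  function analyze(m)    { m.score = scoreAnomaly(m); return m; },
  function filter(m)     { return m.score > 0.5 ? m : null; }, // drop noise
  function transform(m)  { return { alert: 'high reading', at: m.site }; },
  function distribute(m) { console.log('deliver:', m); return m; }
];

function run(msg) {
  return stages.reduce(function (m, stage) { return m && stage(m); }, msg);
}

run({ deviceId: 42, value: 31 }); // deliver: { alert: 'high reading', at: 'site-42' }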
This architecture is based on a number of real world deployments that have been in production for more than a couple of years. Each of these deployments shares a number of common problems relating to how the system architecture influences the tradeoffs between cost, throughput, and latency. These three factors are the most common real world constraints that must be taken into account when designing an IoT solution:
- Cost - the money, time, and mindshare sunk into the system
- Throughput - the volume of messages over time the system can handle
- Latency - the time it takes for data to translate to action
At wot.io, we have found it necessary to build new software test equipment to better model the behavior of our production systems. Most existing load testing and modeling tools do not deal well with highly heterogenous distributed networks of applications. Towards this end, we have produced tooling like wotio/ripple for modeling the behavior of data services:
In the above video, I simulated an application in which 1750 messages per minute were generated in a spiky fashion similar to a couple of real world systems we have encountered. Anyone who has seen a mains-powered sensor farm come on after a blackout will recognize this pattern.
This is a typical pattern which results when the device designers assume that the devices will come online at random times, or decide to lockstep the message sending to a GPS clock. This acquisition phase behavior can be very noisy depending on the environmental characteristics.
Next, we simulate some acquisition and enhancement phase activity, adding data to the data in motion by querying a database. To do this, we add a 10 second delay to each of the messages. The time-shifted signal looks like this:
The ripple software also allows for simulating a delay ramp, wherein the delay increases over time based on the number of messages through the system. This can be invaluable for simulating systems that suffer performance degradation as the volume of stored data grows. For this sample simulation, however, I've stuck with a fixed 10 second delay. Being able to simulate delays in processing is especially useful when multiple streams of data must be coordinated.
Another common constraint one encounters is cost versus throughput. For example, you may want to license a software application that is priced per CPU. The business may only be able to afford enough CPU licenses to handle the per-minute volume, but not the instantaneous peak volume.
For these sorts of applications, we can simulate a maximum rate limit on the application. The ripple.c exchange above demonstrates the stretching of the input signal due to queueing that data between exchanges B and C. Here, we're simulating a 40 messages per second throughput limit. Theoretically, this system could process 40 * 60 = 2400 messages per minute, which is sufficient to handle our 1750 messages per minute load, but at a cost of adding latency:
Here we can see the impact of this queueing on the per-message latency over time. The above graph shows about 4 minutes of messages, and the per-message latency of each. Briefly, messages are enqueued because they arrive faster than the rate-limited process can service them:
This sawtooth graph is the result of feeding data into the system faster than the rate-limited process can remove it. This behavior results in highly variable latency across the lifespan of the application:
In this histogram of the 4 minute sample, you can see a spike around 10s of latency. This spike accounts for roughly 1/8th of all of the messages. The other 7/8ths of the messages, however, range from 10s of latency to over 35s of latency. This variability in latency is a classic tradeoff that many IoT systems need to make in the real world. If you are expecting to act upon this data, it is important to understand how that latency impacts the timeliness of your decisions.
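You can reproduce this effect with a few lines of simulation. This toy Node.js sketch feeds a spiky 1750 messages-per-minute arrival pattern into a fixed 40 messages-per-second server; plotting the printed latencies yields the same sawtooth (the numbers are illustrative):
var RATE = 40;                        // service rate, messages per second
var arrivals = [];
for (var minute = 0; minute < 4; minute++) {
  for (var i = 0; i < 1750; i++) {
    // spiky pattern: each minute's traffic lands in its first few seconds
    arrivals.push(minute * 60 + Math.random() * 5);
  }
}
arrivals.sort(function (a, b) { return a - b; });

var free = 0;                         // time at which the server is next free
arrivals.forEach(function (arrival) {
  var start = Math.max(free, arrival);
  free = start + 1 / RATE;            // fixed service time of 1/40 s
  console.log((free - arrival).toFixed(2)); // per-message latency, seconds
});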
By combining both delays and rate limits, along with different generator patterns, we can better develop models of how our systems behave under load long before they go to production. With wotio/ripple, we were careful to keep our test generation, application simulation, and our analysis phases decoupled. The message generator and the latency report generators are separate servers capable of being run on different hardware. As the software is written in Erlang, it is easy to distribute across a number of Erlang VMs running on a cluster, and through Erlang's built in clustering, can be coordinated from a single shell session.
The test program used to generate the above graphs and topology demonstrates the following features:
- consume, Source, Filename - consumes messages from Source and logs their latency to Filename
- pipe, Source, Sink - consume messages from Source and forward to Sink as fast as possible
- limit, Source, Sink, Rate - consume messages from Source and forward to Sink at a maximum rate of Rate messages per second
- delay, Source, Sink, Base, Ramp - consume messages from Source and forward to Sink with a Base delay in ms with Ramp ms delay added for each message processed
- generate, Message, Pattern - send the sample test message (with additional timestamp header) at a rate of messages per second specified in the Pattern.
In the near future, we will be adding support for message templates, sample message pools, and message filtering to the publicly released version of the tools. But I hope this gives you some additional tools in your toolbox for developing your IoT applications.
Over the summer, wot.io visited Columbia University in New York City to participate in an evening of presentations that were part of an interesting new graduate level course they are offering on IoT. The event, organized by IoT Central, had a packed agenda full of IoT presentations and information, including some demos of Atmel devices sending data to wot.io data services.
At the event, we demoed some early versions of our Ship IoT initiative, showing how Atmel devices can be connected to multiple data services from the wot.io data service exchange. In this demonstration we used PubNub for connectivity, and routed it to wot.io data services bip.io, scriptr.io, and Circonus.
This event was particularly interesting because Steve Burr, Director of Engineering at wot.io, unboxed and connected an Atmel device live during the demo and started getting temperature readings from it. Live demos are always fun to watch! The IoT Central group recorded the event and you can watch the video below.
The entire video is full of interesting IoT information. If you're looking for specific parts, the Atmel portion starts at about 28 minutes, wot.io starts around 32 minutes, and the technical portion starts around 38:30.
In looking at many different IoT applications, a fairly common architecture emerges. Like Brutalist architecture, these applications are rugged, hard, and uncompromising, with little concern for a human scale aesthetic:
At its core it is a six stage pipeline, wherein the data is processed in a sequence. Variations on this architecture can be generated by branching off at any one of the six stages, and repeating some or all of the stages for some sub-path:
The stages correspond to different application types that are typically used in IoT systems:
- Acquisition - gathering data from device management or connectivity platforms, such as ARM mbed Device Server, PubNub, and Stream's IoT-Xtend™ Platform.
- Enhancement - augmenting data with data at rest usually from databases like Riak, PostgreSQL or MySQL.
- Analysis - applying machine learning and other forms of applied statistics to the enhanced data, such as ParStream, Simularity, or DataScription.
- Filtering - removing non-actionable data and noise from the stream, increasing the relevance of the data to the end user, via search from NGData or stream processing like SQLstream or ScaleDB.
- Transformation - converting the filtered stream into an actionable format, via workflow processing like bip.io, Medium One, or ThingWorx, or procedural scripting like scriptr.
- Distribution - delivering the salient information to the user, via monitoring systems like Circonus, reporting like JReport, or managed data feeds with Apache NiFi.
One of the great pleasures of working at wot.io is seeing the development of new systems architectures and their interplay with real world constraints. As Chief Scientist, I spend a lot of my time metering and monitoring the behavior of complex soft real-time systems. In addition to looking at real world systems, I also get to design new test equipment to simulate systems that may one day go into market.
One such tool is ripple, a messaging software analog to an arbitrary waveform generator. Rather than generating a signal by changing volts over time, it generates a message waveform measured in messages per second over time. Much of the behavior of distributed IoT systems is only understandable in terms of message rate and latency. In many ways, the design constraints of these systems are more like those in designing traces on a PCB than it is like designing software. A tool like ripple allows us to simulate different types of load upon various combinations of application infrastructure.
Not all applications behave the same way under load, and not all data flows are created equal. Variations in message content and size, choice of partitioning scheme, differences in network topology, and hardware utilization can all affect the amount of latency any processing stage introduces into the data flow. The variability in the different data pathways can result in synchronization, ordering, serialization, and consistency issues across the result set.
Consider a case where an application is spread across a few hundred data centers around the world. Due to variations in maintenance, physical failures, and the nature of the internet itself, it is not uncommon for an entire data center to go offline for some period of time. This sort of event can cause an immense backlog of messages from what is now the "distant past" (i.e. yesterday) to come flooding in, changing the results of the past day's analysis and reports. This problem is not limited to hardware failures; it is also common when using remote satellite-based communication schemes. In these cases, a compressed batch of past data may appear all at once, on a periodic basis, when the weather and satellite timing permit the transmission.
Ripple was designed with these issues in mind, to make it easier to simulate the sorts of what-if scenarios we have encountered with real world systems. For our simulations, we use RabbitMQ as a message bus. It provides a reliable, extensible distributed queuing system. It is also convenient to use a protocol like AMQP for data interchange between processes, as it is well supported across languages. The core functionality of ripple consists of:
- modeling application topologies within RabbitMQ
- creating pools of consumers which stand in for applications
- forwarding with delays which allow for simulating different latency characteristics of an application
- generating arbitrary patterns of messaging over time
- simulating "noisy networks," wherein message rates vary by some random noise factor (sketched below)
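As a sketch of that last item, a noisy generator is just a base rate modulated by a random factor each tick. In this JavaScript illustration, the publish() target is a stand-in for an AMQP exchange:
function publish(n) { console.log('send', n, 'messages'); } // stand-in

var base = 30;    // nominal messages per second
var noise = 0.25; // +/- 25% jitter

setInterval(function () {
  var factor = 1 + (Math.random() * 2 - 1) * noise;
  publish(Math.round(base * factor));
}, 1000);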
In the next blog post, I will describe the ways to use ripple to simulate a number of different real world systems, and describe some of the architectural concepts that can address the observed behaviors.
ThingWorx is an IoT platform that enables users to collect IoT data from a variety of sources and build applications to visualize and operate on that data. As we showed in a previous post, wot.io supported the LiveWorx hackathon by deploying ThingWorx instances for developers to use in developing IoT solutions. In addition to automating ThingWorx deployment, we have also been working on creating a ThingWorx Extension for submission to the ThingWorx Marketplace.
As an IoT platform that values extensibility, ThingWorx provides a number of options for connecting other systems with a ThingWorx application. In particular, partners can package adapters into reusable ThingWorx extensions. Developers creating IoT solutions with ThingWorx can then install these extensions and easily access the additional functionality they provide in a fashion that is idiomatic to ThingWorx applications. wot.io developed an extension that follows this pattern and will provide a standard way to interface with the wot.io operating environment to send or receive data from a ThingWorx application.
As we've been working on our extension, we thought we would share some of the ways we think developers might use the extension. In this video we create a simple ThingWorx mashup showing just how a developer would access and use the installed wot.io extension.
We're looking forward to getting our extension listed in the ThingWorx Marketplace and getting feedback on how it works for ThingWorx developers.
As part of our involvement with ThingWorx's LiveWorx event this year, wot.io was happy to support the pre-conference LiveWorx hackathon. Participants were provided with some hardware and sensors, some suggested verticals including Smart Agriculture and Smart City challenges, and of course ThingWorx instances on which to pull their solutions together.
Part of wot.io's support was to deploy and host the 85 ThingWorx instances for the teams to work on. How did we do it?
One of the fundamental components of the wot.io operating environment (OE) is our configuration service and the associated orchestration that allows us to quickly deploy and manage integrated data services. Leveraging OpenStack's nova command line client and the popular Docker container system, the wot.io OE provides APIs that allow data services to be configured and deployed. This API can then be scripted for specific sets of data services or to deploy multiple instances of the same data service as in the case of the hackathon. This video shows the script we used to spin up the servers in Rackspace. This version creates just 5 rather than 85 instances.
The wot.io OE can also be used to quickly update deployed containers, either individually or en masse. During the process of preparing for the hackathon, ThingWorx engineers discovered that they needed to revise the base ThingWorx configuration a number of times. They would simply send us a new archive file, and we were then able to use it to update our core container. Once we told the configuration service to reference the new version, all of the running instances detected the new version and updated themselves automatically. This made it easy for us to deploy updates as they came in--even right up until the event started.
In addition to deploying and hosting ThingWorx instances, we have also been working on a wot.io ThingWorx extension that will simplify the integration of ThingWorx with the wot.io OE, allowing data to be routed back and forth between other IoT platforms and thereby addressing IoT platform interoperability for large enterprise and industrial companies. You can read more about our progress on that here.
For the Love of Coffee
Coffee is amazing stuff, and when brewed just right, tastes incredible! I'm a coffee aficionado, and I'm always pursuing The Perfect Cup™. Preparation technique is critical! The Specialty Coffee Association of America has very rigid standards for how to prepare coffee, designed to ensure consistent and peak quality flavor in the resulting drink. Water temperature is one of the major factors, because as any good chemist knows, various compounds dissolve at different rates in different temperatures of water. The flavor of your cup of coffee is greatly determined by the temperature of water used, and consequently, the varying fractions of coffee compounds thereby extracted.
From the SCAA standard:
Cupping water temperature shall be 200°F ± 2°F (92.2 – 94.4°C) when poured on grounds.
We are engineers. We appreciate the scientific method, and data-driven decisions. The quest for The Perfect Cup must therefore entail data collection for later analysis. This collection should be automated, because life is too short for repetitive manual processes. So let's start out by checking our existing daily brewing process' water temperature, and logging the long-term variance.
We're going to do this with a Kinoma Create, which packs an 800 MHz ARM v5t processor, WiFi, Bluetooth, a color touchscreen, sound, I/O pins, and all kinds of other goodies. It's a comprehensive development kit that lets you code with JavaScript and XML, so it's a great choice, and even more so if JavaScript is one of your competencies. This will make our temperature logging simple, and the data services available through wot.io give us easy insights into our data because the integration is already done and working. Expanding beyond our first-steps of temperature logging will be a snap, as the Kinoma Create has more I/O than we can shake a stick at. Let's get to it!
Getting Started
For this project, I used:
- A Kinoma Create
- Kinoma Studio Eclipse-based IDE
- A Texas Instruments LM35 precision temperature sensor
- One genuine borosilicate Pyrex test tube
- Arctic Silver epoxy thermal adhesive
- Shielded cable, heatshrink tube, etc.
- PubNub endpoints
- wot.io data services
- Delicious coffee! A city-roast bean from the Kivu Butembo region of the Congo
The Probe
First thing I did was make a probe suitable for testing something I was going to drink. It needed to be precise, non-toxic, and tolerant of rapid temperature changes.
The temperature sensor is the LM35 from Texas Instruments, a military-grade precision Centigrade sensor with analog output, accurate to ±0.5°C. That's well within the specified ±2°F spec from the SCAA for brewing water.
I attached the sensor inside a Pyrex borosilicate glass test tube, which will withstand the thermal shock inherent in measuring boiling water. We certainly don't want shattered glass shards or contaminants in our coffee! To ensure good heat transfer, I used some thermal epoxy to affix the sensor at the bottom of the tube.
The cable is Belden 9841, typically used for RS-485 industrial controls and DMX 512 systems. While we don't need precision 120Ω data cable for this, it has 100% foil+braid shield and will keep our analog signals nice and clean. Plus, I had a spool of it on the rack - always an advantage ;)
About that LED... It functions as a power indicator, and makes the probe look good for showing off at World Maker Faire. Normally I wouldn't stick an LED next to a precision temperature sensor. The power dissipated by the LED and current-limiting resistor will cause a slight temperature rise and throw off the measurement. But it only dissipates maybe 10 milliwatts, and coffee is really hot, so I stuck an LED in there! No worries.
Testing the Sensor
Before writing the code, I needed to be sure the sensor output matched what's claimed on the datasheet (always check your assumptions!). A quick setup on a breadboard proved the datasheet to be correct.
The temperature of the sensor itself measured ~24.1°C with a calibrated FLIR thermal camera (with an assumed emissivity of ε0.90 for the plastic TO-92 case):
...and the output of the device was 245mV, right on target!
Now we know we don't need much correction factor in software, if any.
The Code
Here's a brief walkthrough of the code. You can grab the code from the repo on github.
First thing you'll want to do is put your PubNub publish and subscribe keys into the code, along with your channel name. Grab the keys and put them at the top of main.xml:
<variable id="PUBNUB_PUBLISH_KEY" value="'YOUR_PUB_KEY_HERE'" />
<variable id="PUBNUB_SUBSCRIBE_KEY" value="'YOUR_SUB_KEY_HERE'" />
<variable id="PUBNUB_CHANNEL" value="'YOUR_CHANNEL_NAME_HERE'" />
PubNub Library Integration
One of the key bits to using this PubNub library is you need to override the default application behavior. Their example came as straight JS, but I converted it to XML here, so you get to see both methods and learn some new tricks.
At the top, we include the pubnub.js library file, and then define a behavior that uses the PubNubBehavior prototype. While I won't claim to be an expert on PubNub's library, I believe we do things this way so that the PubNub library can handle the asynchronous events coming in from the message bus.
We also start into the main startup code, which resides in the onLaunch method.
<program xmlns="http://www.kinoma.com/kpr/1">
<include path="pubnub.js"/>
<behavior id="ApplicationBehavior" like="PubNubBehavior">
<method id="constructor" params="content,data"><![CDATA[
PubNubBehavior.call(this, content, data);
]]></method>
<method id="onLaunch" params="application"><![CDATA[
...
...and we see the rest down at the bottom, where we instantiate the new ApplicationBehavior and stick it into our main application.behavior thusly:
<script>
<![CDATA[
application.behavior = new ApplicationBehavior(application, {});
application.add( maincontainer = new MainContainer() );
]]>
</script>
onLaunch Initialization
First thing we do is set up the pubnub object with our publish and subscribe keys. Note that you don't need to use keys from the same exchange - you can write to one, and read from an entirely different one. That's part of the amazing flexibility of message bus architectures like PubNub and wot.io.
After init, we subscribe to the specified channel, and set up callbacks for receiving messages (the message key) and connection events (connect). Upon connection we just fire off a quick Hello message so we can tell it's working. For receiving, we stick the message contents into a UI label element, and increment a counter, again doing both so we can tell what's going on for demonstration purposes.
You could certainly parse the incoming messages and do whatever you want with them!
pubnub = PUBNUB.init({
publish_key: PUBNUB_PUBLISH_KEY,
subscribe_key: PUBNUB_SUBSCRIBE_KEY
});
pubnub.subscribe({
channel : PUBNUB_CHANNEL,
message : function(message, env, channel) {
maincontainer.receivedMessage.string = JSON.stringify(message);
maincontainer.receivedLabel.string = "Last received (" + ++receivedCount + "):";
},
connect: function pub() {
/*
We're connected! Send a message.
*/
pubnub.publish({
channel : PUBNUB_CHANNEL,
message : "Hello from wotio kinoma pubnub temperature demo!"
});
}
});
Next we set up our input pins for the temp sensor:
application.invoke( new MessageWithObject( "pins:configure", {
analogSensor: {
require: "analog",
pins: {
analogTemp: { pin: 52 }
}
}
} ) );
This uses Kinoma's BLL files, which define the pin layout for hardware modules. I created a simple one for our temp sensor. I did not have the system configure the power and ground pins; at the time I coded this, Kinoma didn't document an official way to do it (although it does exist if you dig into their codebase).
exports.pins = {
analogTemp: { type: "A2D" }
};
exports.configure = function() {
this.analogTemp.init();
}
exports.read = function() {
return this.analogTemp.read();
}
exports.close = function() {
this.analogTemp.close();
}
Lastly, we set up what is effectively the main loop. This fires off a message that will be processed by the analogSensor read method defined in the BLL file. It also sets it up to repeat with an interval of 500 milliseconds. The results are sent via a callback, /gotAnalogResult:
/* Use the initialized analogSensor object and repeatedly
call its read method with a given interval. */
application.invoke( new MessageWithObject( "pins:/analogSensor/read?" +
serializeQuery( {
repeat: "on",
interval: 500,
callback: "/gotAnalogResult"
} ) ) );
The Results Callback
This is a message handler behavior which processes the analog value results from our periodic sensor read. It converts the reading to degrees Celsius, and fires off the data with an onAnalogValueChanged and an onTempValueChanged message to whoever is listening. (We'll see who's listening down below...)
The sensor outputs 10 millivolts per degree Celsius, so 22°C would be 220mV. This goes into our analog pin, which, when read, gives a floating-point value from 0 to 1, representing 0V up to whatever the I/O voltage is set to, 3.3V or 5V. We do some conversion to get our temperature back: at 3.3V, for example, a reading of 0.1 means 330mV, or 33°C before the self-heating correction.
You may notice that we only use a small range of the A/D converter's potential for typical temperatures, and this results in lower resolution readings. Ideally we'd pre-scale things using a DC amplifier with a gain of, say, 2 or 4, so the temperature signal uses more of the available input range.
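To put rough numbers on that, here is a back-of-envelope resolution check, assuming a hypothetical 10-bit converter referenced to 3.3V (I haven't confirmed the Kinoma Create's actual ADC depth):
var bits = 10, vref = 3.3;
var lsbVolts = vref / Math.pow(2, bits); // ~3.2 mV per ADC step
var degPerLsb = lsbVolts / 0.010;        // LM35: 10 mV per degree C
console.log(degPerLsb);                  // ~0.32 degrees C per step

var gain = 4;                            // hypothetical pre-scaling amplifier
console.log(degPerLsb / gain);           // ~0.08 degrees C per step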
<handler path="/gotAnalogResult">
<behavior>
<method id="onInvoke" params="handler, message"><![CDATA[
var result = message.requestObject;
// Convert voltage result to temperature
// LM35 is 10mV/°C output; analog input is 0-1 for 0-3.3v (or 5 if set)
// Subtract 1 degree for self-heating
var temp = (result * 3.3 * 100) - 1;
application.distribute( "onTempValueChanged", temp.toFixed(2) );
application.distribute( "onAnalogValueChanged", result );
pubnub.publish({channel:PUBNUB_CHANNEL, message:
{"k1-fd3b584da918": {"meta": "dont care", "tlv": [ {"name": "temperature", "value": temp.toFixed(2), "units": "C"} ] }}
});
]]></method>
</behavior>
</handler>
The UI
Here we define the main container for the user interface. You'll see entries for the various text labels. Some of them have event listeners for the onAnalogValueChanged and onTempValueChanged events, and that's how they update the display.
<container id="MainContainer" top="0" left="0" bottom="0" right="0">
<skin color="white"/>
<label left="5" top="0" string="'PubNub Temperature Telemetry Demo'">
<style font="24px" color="red"/>
</label>
<label left="5" top="23" string="'Last Received (0):'" name="receivedLabel">
<style font="20px" color="blue"/>
</label>
<label left="5" top="39" string="'--no message received yet--'" name="receivedMessage">
<style font="14px" color="black"/>
</label>
<label left="0" right="0" top="80" string="'- - -'">
<style font="60px" color="black"/>
<behavior>
<method id="onTempValueChanged" params="content,result"><![CDATA[
content.string = "Temp: " + result + " °C";
]]></method>
</behavior>
</label>
<label left="0" right="0" top="65" string="'- - -'">
<style font="24px" color="green"/>
<behavior>
<method id="onAnalogValueChanged" params="content,result"><![CDATA[
content.string = result.toFixed(6) + " raw analog pin value";
]]></method>
</behavior>
</label>
<picture url="'./assets/wotio_logo_500x120.png'" top="210" left="10" height="24" width="100" />
</container>
Results
It worked well! After perfecting my water boiling technique (who would have thought that was a thing), I got a great cup with the data to prove it. Dark chocolate, caramel, hints of cherry and vanilla; earthy and full.
The messages flowed to PubNub from the Kinoma Create, and anything published to PubNub from elsewhere would show up nearly instantly on the Kinoma Create's screen. Keep reading to see how we used some data services via wot.io.
World Maker Faire 2015
This setup was demonstrated at World Maker Faire 2015 in the Kinoma booth, where we also had a number of data services connected: scriptr.io, bip.io, and Circonus to start.
These fed into Twitter and Gmail also. You can see the message flow graph created with bip.io, showing the message processing and fan-out:
We've written about creating these graphs before, just look through the other posts on the wot.io labs blog for several examples.
In Closing
Kinoma's Create platform pairs effortlessly with the data services available via wot.io, and the power to leverage existing expertise in JavaScript is a huge advantage when it comes time to develop and ship your product. That power extends further with wot.io partners like scriptr, where you can integrate further cloud-based JavaScript processing into your data service exchange workflow. To get started, grab a Kinoma Create and take a look at shipiot.net today!
While the organizers don’t release official attendance figures, others report that north of 90,000 makers, enthusiasts, hardware-hackers and curious onlookers made their way to the New York Hall of Science in Queens, NY, this past weekend for Maker Faire New York.
Among the sprawling rows of tents set up were companies, organizations and individuals showing off everything from:
- 3-D printers
- CNC machines
- Automatic PCB Board Fabrication machines
- a Drone Zone
- RC robots battling each other to destruction
- electric-powered wagons
- maker kits
- sonic massagers
- electronic wearables
- local artists, much of their work made from mixing machine-milled parts and expert hands to craft something beautiful
- other strange, wonderful creations that don't quite fit into any category
- and of course, you know, a 30-ft fire-breathing monster made from recycled airplane parts (pictured in the post header, more photos).
Big names like Google, Microsoft, and Intel were there, showing off various IoT initiatives, teaching kids how to solder, and generally helping them get started in building electronics.
And of course wot.io wouldn't miss a Maker Faire so close to home, so we were there too. We were very excited to be able to join our friends from Kinoma whose booth saw plenty of traffic.
We've used the Kinoma Create for projects in the past and it was fun to build a new one to show at the Faire. For this outing we added a temperature sensor suitable for giving a very precise reading on your favorite beverage.
Data from the temperature sensor was captured by the attached Create unit, sent through PubNub, and routed to wot.io data services bip.io (pictured on the screen), scriptr.io, and Circonus.
One of the biggest hits in the booth was the Kinoma Create-powered robot. Kids were invited to control the robot wirelessly from another Kinoma Create. The different projects showcased by wot.io and Kinoma demonstrated how accessible the JavaScript-powered Kinoma Create platform is for makers of all ages.
It was great to see how many kids were at the Faire, getting excited about inventing and exploring new ideas and technologies, and having the chance to just play with electronics and make cool stuff. When you ignite the imagination of a kid, and give them the tools and support to build their ideas into reality, there's no telling what they're going to bring to next year's Maker Faire. Given what was on display this year, I'm pretty excited to find out.
If you're interested in deeper technical details on the demo we showed, be sure to check out our labs blog entry that explains the full setup.
A Bike and an Idea
A very good friend of mine recently picked up a motorcycle as a first-time rider. It's a nice bike, a Honda CBR250 in gloss black, with under 3000 miles on it. She's smart, and took the safety class offered by the Motorcycle Safety Foundation, but I was still looking for ways to make sure she was going to be ok, and that I could quickly help if ever needed.
We started out by having her send me an SMS message whenever she arrived at her destination. "Made it," she would send, which worked ok. My inner sysadmin thought, "process seems repetitive; shouldn't this be automated?" And so was born this demo: a motorcycle crash alert that will both quickly transmit and permanently log a help message and GPS location if there is ever trouble!
Already having access to wot.io data service integrations like Twitter and Google Sheets, and the quick power of a bip.io workflow, I needed some hardware. Mediatek offers a dev board called the LinkIt ONE, available from seeedstudio.com. The LinkIt ONE integrates a heap of features for anyone making an Internet of Things prototype, including WiFi, Bluetooth LE, GPS, GSM, GPRS, audio, and SD card storage, all tied together by an ARM7 micro controller.
It's largely compatible with the Arduino pin headers, and can interface with Grove modules, SPI, I2C, and more. We'll be using their Arduino SDK and HDK to create our demo app, and hook this board up to the wot.io-powered bip.io data service exchange and do some really cool stuff with Twitter and Google Sheets. The only other bit I needed to add was an accelerometer, and I found a three-axis module in my parts bin.
Let's get started!
Prerequisites
- A Mediatek LinkIt ONE board
- Our example code: https://github.com/wotio/shipiot-mediatek-linkit-one
- A computer that can run the Arduino IDE
- Arduino version 1.5.7 Beta (Mediatek requirement; check their page for updated SDK compatibility info - and get the one with Java included if you are on a Mac)
- Read the Getting Started guide over at Mediatek to get oriented
- An accelerometer with analog outputs (or digital if you want to adapt it - see below)
- A (full-size) SIM card with a GSM data plan. You can get one from Ting, T-Mobile, and others, just make sure it's GSM!
Optional:
- One motorcycle with rider.
Note! The SDK does not currently work with the latest Arduino IDE (v1.6.5 as of this writing)
Also note! If you have multiple installs of Arduino IDE like I do, you can simply rename the app with the version number to help keep them straight.
Updating Firmware
This is part of the Getting Started guide from Mediatek, but it's important so I'm calling it out specifically here. Make sure you update the firmware on your board before you begin. It's easy!
Building the Hardware
The setup for this is simple. Connect the GPS and GSM antennas to the LinkIt ONE board. Make sure to use the right antennas, and hook them to the proper plugs on the bottom, as shown in the picture. We don't need the WiFi antenna for this demo, so I left it disconnected.
Hook the Li-Ion battery to the board so we can run it without a power cable. (Make sure it's charged; check Mediatek's docs).
And finally, we connect the accelerometer board. I used a tiny breadboard to make this easy.
Oh, and make sure the tiny switch in the middle is flipped over to SPI, not SD, or the green LED won't behave and other things may not work for this demo. Check the close-up image below.
Here you can see the detail of how we hook up the accelerometer. There are three pins connected as configuration options, using jumpers on the breadboard. These are for the self-test (disabled), g-force sensitivity (high sensitivity), and sleep mode (disabled). (Obviously you'd want to control the sleep mode programmatically on an actual product for lower power consumption, but we're keeping it simple here.)
We then have some flying leads over to the LinkIt board, for ground, +3.3v power, and the three analog outputs for x, y, and z axes.
The accelerometer breakout board I used for this demo is the MMA7361, which has three analog outputs. The chip was discontinued by Freescale, and Sparkfun no longer sells the breakout board. They have a similar one you could use, the ADXL335, which should work great. You can adapt this demo for whatever kind of accelerometer you are using, maybe even change it to a digital interface, since the LinkIt ONE board speaks I2C and SPI with ease.
Here we can see exactly where the flying leads come in for power and to the three analog inputs, A0, A1, and A2.
And finally, we neatly package up the prototype so it is self-contained and will fit under the motorcycle's seat:
That's it! The LinkIt ONE board has all the rest of the fun stuff already integrated and ready to use. Combined with some data services available from wot.io, we'll be up and running in no time!
Writing the Code
Let's walk through the code for this demo. You can get the code from our github repo to follow along.
Headers and Configuration
First, we need to include some standard headers from the Mediatek SDK. These give us access to the GPS module, and the GSM/GPRS radio for cellular data communications.
We also set up some #define statements to configure the GSM APN, the API hostname, and the Auth header.
#include <LGPS.h>
#include <LGPRS.h>
#include <LGPRSClient.h>
// Change these to match your own config
#define GPRS_APN "fast.t-mobile.com"
#define API_HOSTNAME "your_hostname_here.api.shipiot.net"
#define AUTH_HEADER "your_auth_header_here"
Now let's define some variables we'll use later. These will include the HTTP POST request template, and the JSON data structure template. We wrap these in the F() function, which in Arduino-speak means "store this string in Flash, not RAM". This is good practice for long static strings to save you some RAM space, but not strictly required for this small example.
We also have some global variables for building the request, for the GPRS cellular data client session, for the GPS data, and a C-style string buffer for sending the request.
String request_template = F("POST /bip/http/mediatek HTTP/1.1\r\n"
"Host: " API_HOSTNAME "\r\n"
AUTH_HEADER "\r\n"
"Content-Type: application/json\r\n"
"Content-Length: ");
String data_template = F("{\"x\":X,\"y\":Y,\"z\":Z,\"nmea\":\"NMEA\"}");
String request;
String data;
String nmea;
LGPRSClient c;
gpsSentenceInfoStruct gpsDataStruct;
char request_buf[512];
Initializing the Board
Now on to the setup() function. This is your standard Arduino init function that will do all the one-time stuff at boot. We start out by setting the serial debug console speed to 115200 baud, call pinMode() to configure some digital output pins for the three LEDs on the board, and then turn the two red LEDs on and the green one off.
The idea here is to use the LEDs as status info for the user. The first red LED turns off when the GPRS connects. The second one will blink while the GPS is acquiring a fix. And finally, the red LEDs turn off and the green LED turns on to indicate that the system is ready. We also blink the green LED off once for every call to the HTTP endpoint, so the user knows it's working. (It would be good to check the return value from the API for this, but we don't do that in this simple demo.)
void setup() {
Serial.begin(115200);
Serial.println("Starting up...");
pinMode(0,OUTPUT);
pinMode(1,OUTPUT);
pinMode(13,OUTPUT);
// Turn on red LEDs, turn off green
digitalWrite(0,LOW);
digitalWrite(1,LOW);
digitalWrite(13,LOW);
You'll see throughout the code that there are Serial.print() calls, which report the status to the debug console. This is handy during development, but you'd probably remove these for a production system to save space and power.
For setting up the GSM data communications, we need to connect to the GPRS APN. The actual APN name you need to use is defined by your cellular carrier, and you set it in the #define statements at the top of the code.
Serial.print("Connecting to GPRS APN...");
while (!LGPRS.attachGPRS(GPRS_APN, NULL, NULL)) {
Serial.print(".");
delay(500);
}
Serial.println("Success");
Now we turn on the GPS chip, and delay for a second so it can get its bearings. (get it? get its bearings? ha.)
Serial.print("GPS Powering up...");
LGPS.powerOn();
delay(1000);
Serial.println("done");
So we've powered up the GPS and attached to the GPRS system, and we're going to call that Phase 1 complete. We turn off the first red LED, and move on to getting a GPS fix. That's the last bit of initialization to do, and we'll flip the LEDs to green when it's all done.
// Phase 1 init complete,
// so turn off first red LED
digitalWrite(0,HIGH);
waitForGPSFix();
// Phase 2 init complete, LEDs to green
digitalWrite(0,HIGH);
digitalWrite(1,HIGH);
digitalWrite(13,HIGH);
Serial.println("Setup done");
}
Getting a GPS Fix
Let's take a look at the waitForGPSFix() function. It simply checks the GPS data for an indication of a good lock. We toggle the second red LED on every check, to let the user know we're doing something and still waiting.
The GPS returns data formatted as NMEA sentences. The one we are interested in is the GPGGA sentence, which contains the location fix information. We check the char at offset 43 to see if it's a '1' - this magic offset lands on the GPS Quality Indicator field, where 0 means not locked and 1 means locked.
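For reference, here is the canonical GPGGA example sentence from the NMEA documentation (not output from this particular device):
$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47
The seventh comma-separated field - the 1 right after the E - is the GPS Quality Indicator the code below tests.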
Later on when we're initialized, we simply pass along the raw NMEA data for processing by the bip.io data workflow; you'll see that later on.
void waitForGPSFix() {
byte i = 0;
Serial.print("Getting GPS fix...");
while (gpsDataStruct.GPGGA[43] != '1') { // 1 indicates a good fix
LGPS.getData( &gpsDataStruct );
delay(250);
// toggle the red LED during GPS init...
digitalWrite(1, i++ == 0 ? LOW : HIGH);
i = (i > 1 ? 0 : i);
}
Serial.println("GPS locked.");
}
Great! Now that we're all initialized, we'll have a look at the main loop() function.
The Main Loop
First off in the main loop, we read the accelerometer, and we print the accelerometer outputs to the debug console. You'll need to check your outputs and see what z-axis threshold makes sense for your particular chip. (A more sophisticated system would have an auto-calibration routine, and probably use data from all three axes.)
void loop() {
Serial.println("Reading accelerometer...");
int accel_x = analogRead(A0);
int accel_y = analogRead(A1);
int accel_z = analogRead(A2);
Serial.print("x: "); Serial.print(accel_x);
Serial.print(" \ty: "); Serial.print(accel_y);
Serial.print(" \tz: "); Serial.println(accel_z);
Now we read the GPS data.
Serial.print("Reading GPS: ");
LGPS.getData( &gpsDataStruct );
Serial.print( (char *)gpsDataStruct.GPGGA );
We take the GPS data and the accelerometer data, and insert it into our json data template. Then, we insert the json data into the HTTP request template, and finally turn it into a plain old C string. There's some commented code that will print the fully-formatted HTTP request to your debug console, for help if things aren't working.
// Format the HTTP API request template
nmea = String( (char *)(gpsDataStruct.GPGGA) );
nmea.trim();
data = data_template;
data.replace("X", String(accel_x,DEC) );
data.replace("Y", String(accel_y,DEC) );
data.replace("Z", String(accel_z,DEC) );
data.replace("NMEA", nmea );
request = request_template + data.length() + "\r\n\r\n" + data;
request.toCharArray(request_buf,512);
// Uncomment this to print the full request to debug console
// Serial.print(request);
Sending the Telemetry
Now it's time to send the data to the bip.io HTTP endpoint. We connect to the API, and if successful, write the request buffer out. Then we blink the green LED off for 250ms to give the user some feedback.
Serial.print("Connecting to api... ");
c.connect(API_HOSTNAME, 80);
Serial.print("checking connection.. ");
if (c.connected()) {
Serial.print("Sending...");
c.write((uint8_t *)request_buf, strlen(request_buf));
}
Serial.print("waiting...");
digitalWrite(13,LOW);
delay(250);
digitalWrite(13,HIGH);
Finally, we optionally receive any data returned from the API and print it to debug, close the connection, and delay for our next cycle. (You could do some cool stuff in the data workflow, processing the telemetry and returning something useful, and this is where you'd check the response.) We're just delaying for a brief time, and not accounting for the time it takes to contact the API, because we don't need a precise cadence of telemetry transmission, just periodic checks.
Serial.print("receiving...");
// uncomment this to print the API response to debug console
/*
while (!c.available()) {
delay(1);
}
while (c.available()) {
char output = c.read();
Serial.print( output );
}
*/
Serial.println("Closing.");
c.stop();
delay(5000);
}
That's it! Now we'll take a look in the video below at how to create the bip.io workflow, and attach the telemetry output to more than one data service.
Creating the Workflow, and Testing
In this video, I will show you how to create a data workflow that accepts NMEA and accelerometer data into an HTTP endpoint, transforms the data, tests a condition, and if the condition passes, sends a fan-out of messages to both Twitter and Google Sheets endpoints.
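For reference, the heart of that workflow's condition could be expressed in plain JavaScript like this. The real logic lives in bip.io's transforms, and the threshold and field names here are illustrative assumptions, not the production workflow:
// Rough NMEA split: $GPGGA,time,lat,N/S,lon,E/W,quality,...
function parseGPGGA(nmea) {
  var f = nmea.split(',');
  return { lat: f[2] + ' ' + f[3], lon: f[4] + ' ' + f[5], quality: f[6] };
}

// t is the posted telemetry: { x: ..., y: ..., z: ..., nmea: '...' }
function checkTelemetry(t) {
  var FLAT_Z = 350; // hypothetical "bike is on its side" z-axis threshold
  if (t.z < FLAT_Z) {
    var fix = parseGPGGA(t.nmea);
    return 'HELP! Possible crash near ' + fix.lat + ', ' + fix.lon;
  }
  return null; // no alert; nothing fans out to Twitter or Sheets
}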
And then, I'll show you a test of it all working!
Conclusion
So now we have a working crash alert prototype! I think I'll work on this some more and wire it into the bike's electrical system, so it automatically runs and always has power. I've already got some ideas for additional features and improvements, and I'm sure you've thought of some of your own.
There's a lot of great hardware out there these days, but we need to be able to use the data that all these IoT devices produce for us. Easy access to data services through a unified interface is a powerful productivity catalyst, and enables even more people to bring their ideas to life.
I knew that Twitter was reliable and worked well for sending me alerts, and Google Sheets makes a good data store for later processing. I didn't have to spend any time reading their API documentation or experimenting, because wot.io's data service integrations and the bip.io workflow made it all just work. Plus, if this turns into a product, I know the support is there to scale up massively. Even better, I can add features and intelligence to it by changing the server-side workflow without ever touching deployed hardware!
Grab yourself a LinkIt ONE board, and sign up for some data services, and you'll be well on your way to shipping your own Internet of Things device like this!
- Mediatek LinkIt ONE board
- Code: https://github.com/wotio/shipiot-mediatek-linkit-one
- Free data services for prototyping
Who would have thought a dev board and data service exchange could make an excellent bit of safety gear?
Enterprises employ a large number of different software solutions to run their companies, and have long looked to various forms of the enterprise portal to help manage the complexity of this wide array of software. As a data service exchange, wot.io is well aware of the challenge of pulling the important components out of multiple data services and putting them in one view. With this in mind, we have developed an integration with JBOSS Portal, the popular open source enterprise portal system, as one example of how you can manage the complexity.
The wot.io operating environment (OE) provides essential infrastructure to run, manage, and connect IoT applications from our partners, including device management platforms (DMPs) and a wide array of data services including storage, analytics, scripting, Web API automation, real-time processing, visualization, and more. The main focus of an IoT solution lies in these essential data services; however, the wot.io OE itself also requires configuration and administration. We used these administration interfaces for our first portlet integrations with JBOSS.
The wot.io OE administration tools are built as individual components of HTML suitable for placement in various portal frameworks as a portlet, widget, web thing, gadget, etc. These units can be composed together to provide the sets of tools most useful for a given user or use case. We have portlets for user management (adding, updating, removing), group management, API permissions, and more. The portlets are built with a hub in the browser, allowing them all to communicate efficiently with a given wot.io OE cluster over a single WebSocket connection.
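The browser-side hub pattern is straightforward: one WebSocket, many subscribers. Here is a schematic JavaScript sketch (all names and the URL are illustrative, not the actual wot.io portlet API):
var hub = (function () {
  var ws = new WebSocket('wss://oe.example.wot.io/admin'); // hypothetical URL
  var subs = {}, ready = false, queue = [];
  ws.onopen = function () {
    ready = true;
    queue.forEach(function (frame) { ws.send(frame); });
    queue = [];
  };
  ws.onmessage = function (e) {
    var msg = JSON.parse(e.data);
    (subs[msg.topic] || []).forEach(function (fn) { fn(msg); });
  };
  return {
    subscribe: function (topic, fn) { (subs[topic] = subs[topic] || []).push(fn); },
    send: function (msg) {
      var frame = JSON.stringify(msg);
      ready ? ws.send(frame) : queue.push(frame); // buffer until the socket opens
    }
  };
})();

// Separate portlets sharing the one connection:
hub.subscribe('users', function (m) { /* re-render the user list */ });
hub.subscribe('groups', function (m) { /* re-render the group list */ });
hub.send({ topic: 'users', action: 'add', name: 'alice' });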
Using the JBOSS portal tools, the portlets can be added to and arranged with other JBOSS portlets on a single portal page.
The design of the admin components as portlets makes the portal design flexible, and all of the portlets leverage a single log-in even though they are separate components. Communication through the hub via a persistent WebSocket connection also makes the portlets "live," in the sense that they can both send new settings to the connected wot.io OE and receive updates to dynamically refresh their status.
This video shows the full process for adding wot.io OE administration portlets to a JBOSS portal.
As a data service exchange, the next useful extension of this pattern is to add key components from deployed data services to the portal as well. This allows a user to include status information, reports, graphs and visualization, and other important information together in a single portal view even if the information comes from several different data services. Stay tuned for more examples of this approach and other solutions for making seamless IoT solutions.
Last week wot.io was excited to be traveling to Berlin, participating in the AllSeen Alliance booth at IFA. IFA Berlin is one of the largest consumer electronics and home appliance shows in the world, and it was an amazing experience.
The AllSeen Alliance's AllJoyn framework is designed to provide easy communication and interoperability between devices on a proximal network, like a typical home with a WiFi network. To show how different types of products, all running AllJoyn, can work together, the AllSeen Alliance booth at the show had a shared network with a dozen member companies all participating.
AllSeen Booth Demos
The booth had an amazing array of smart AllJoyn products including:
- Switches from LeGrand
- Lights from LIFX
- Plugs from Powertech
- Sensors from EnOcean
- Air purifiers from Heaven Fresh
- Programmable boards from Dog Hunter
- A refrigerator from Electrolux
- TVs from LG
- Speakers powered by Qualcomm chips
- Device control interfaces by Innopia
- Data Service Exchange by wot.io
- Interdevice rules and management by Two Bulls with Higgns
All of these products had AllJoyn built into them, making them discoverable, allowing them to send notifications, and making it possible to control and program them. And because they all spoke AllJoyn, controllers from one company, like the switches from LeGrand, could be configured to manage any of the other devices.
Cloud Services
In addition to providing the specification for all of these devices to communicate on the booth network, AllJoyn also has a provision for allowing communication between local devices and the cloud with their Gateway Agent specification. This allows devices to securely interact with cloud-based applications, such as:
- Affinegy, providing cloud interfaces to local AllJoyn devices via their CHARIOT platform
- Kii, providing device management and cloud services
And, of course, wot.io. Working with Two Bulls, we were able to get a feed of all notifications being sent across the local AllJoyn network. Every time a device sent a notification, it was routed to Firebase where we then pulled it into the wot.io operating environment. We then configured some data services to show what you might do using various notifications.
We wrote a script in scriptr.io to parse the incoming messages and look for specific notifications. To make it interesting, we joined a few events together, sending a temperature reading back to wot.io each time we saw a "refrigerator closed" event. This allowed us to show a real-time event in the booth and still have something interesting to graph.
We then routed the incoming temperature reading to Circonus and graphed it. We also added some randomness to the temperature readings to make the graph more interesting and show some of the things you can do with scriptr.io. The resulting graphs clearly showed all of the activity during the day, and revealed some unexplained refrigerator-open events overnight!
It was great to work with our fellow AllSeen Alliance members to put together so many compelling demos of the smart home that is already a reality.
Getting Started
If you're an Atmel fan like I am, you're probably excited about their Atmel Xplained prototyping and evaluation platform for Atmel AVR and ARM microcontrollers. These platforms and extension boards, like the WINC1500 WiFi module that we use in this demo, make it fast and simple to get a hardware prototype up and running.
Along with the hardware, you'll find a broad selection of example code and projects included with the Atmel Software Framework, which is a great help when first learning a new MCU or peripheral chip.
I'm going to walk you through a demo application for the SAM D21 Xplained and WINC1500, which will send temperature and light levels to a dynamic chart visualization using bip.io's powerful workflow tools. This is a potent combination for rapid development and deployment of any Internet of Things device.
(This demo may end up monitoring the hot pepper plants in my garden when it's done!)
Prerequisites
You will need:
- Atmel ATWINC1500-XSTK Xplained Pro starter kit (Which includes: SAMD21 Xplained Pro, WINC1500 Xplained Pro extension board, I/O1 Xplained Pro extension board, and two USB Type A to Micro B cables)
- A Windows computer
- A WPA-secured WiFi access point with DHCP and Internet access
In addition to the above objects, you will also need:
- Atmel Studio IDE software (we used version 6.2)
- A bip.io account
- Our example code from github
Completely optional, but delicious:
- One hot pepper plant for datalogging
Making a bip.io Data Visualization Workflow
We're going to start out by configuring a workflow on bip.io. This will be a simple one to demonstrate the ease of communication and data visualization, but feel free to experiment and extend your own workflow once it's working.
Connecting Your Boards
The SAM D21 Xplained board has three connectors for extension board modules. To follow along with this demo, connect the WINC1500 Xplained module to the EXT1 header, and the I/O1 Xplained module to the EXT2 header. Leave EXT3 unconnected, and hook the Debug USB connector to your computer.
Your setup should look like the image below:
Developing the SAM D21 Application
You can check out the Atmel Studio project files from github and use them to follow along; the link is at the top of this article.
Building and Loading
With the project open in Atmel Studio, go to the Build menu and select Build Solution, or you can just hit F7.
When complete, you should see zero errors listed in the panel at the bottom. If there are any errors, correct them in your code, save all the files and try the build again.
When your build is complete with zero errors, you can load it onto the SAM D21 Xplained board using the tools in Atmel Studio. From the Tools menu, select Device Programming:
The Device Programming dialog will appear. You need to select the Tool type and the Device itself. Tool will most likely be EDBG, and Device will be something like ATSAMD21J18A, though this could vary based on your setup.
Click the Apply button to connect to the board and put it into programming mode. The amber Status LED will blink steadily at ~2Hz when the programming tool is connected. Additional options appear. Select the Memories item from the list on the left. Keeping all the default options, just hit the Program button.
When done, you should see that the flash verified OK, and your program will be loaded and begin running. Close the Device Programming dialog. The amber Status LED should now be on solid, and will blink off briefly when serial console output is printed by the loaded application.
Now we need a serial terminal!
Serial Terminal Setup
You'll need a serial terminal program to talk to your board. As Microsoft no longer includes the venerable HyperTerminal with Windows, you'll need to get a terminal app. You can still get HyperTerminal from Hilgraeve, or you can use the open-source Tera Term which we use for this demo.
You will first need to know which COM port number Windows assigned to your board. Open the Device Manager (right-click on the Start menu in Windows 8) and look under Ports (COM & LPT) for your board (there may be others in there, too). Note the COM number, COM5 in this instance:
Open Tera Term, and select Serial Port. We will do the detailed configuration of baud rate, etc., in a moment:
Go to the Setup menu and choose Serial Port..., then make sure you have a baud rate of 115200, 8 data bits, no parity, and 1 stop bit. Standard stuff, here.
Hit OK to close the settings dialog, and then press the reset button on the SAM D21 board, right next to the debug USB connector. This will restart the board and you should see the debug console output in the terminal. It should look something like this:
You now have data points being sent once per second to your bip.io workflow! Simply pull up the Bip you created, double-click on the Data Visualization, then the Chart tab, and open the chart URL to see sensor telemetry in near-real-time.
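Under the hood, each of those reports is just an HTTP POST against the bip's endpoint, carrying a small JSON body that matches the Parser schema you defined. As a rough sketch (this is not the actual project source; the path, field names, and credentials are placeholders you'd replace with your own), assembling the request looks something like this:
#include <stdio.h>

/* Sketch only: the path, field names, and Basic auth string below are
   placeholders -- they must match your bip's endpoint, Parser schema,
   and Auth settings. "dGVzdDp0ZXN0" is base64("test:test"). */
int build_request(char *buf, size_t len, int temperature, int light)
{
    char body[64];
    int n = snprintf(body, sizeof(body),
                     "{\"temperature\": %d, \"light\": %d}", temperature, light);
    return snprintf(buf, len,
                    "POST /bip/http/chart HTTP/1.1\r\n"
                    "Host: yourname.api.shipiot.net\r\n"
                    "Authorization: Basic dGVzdDp0ZXN0\r\n"
                    "Content-Type: application/json\r\n"
                    "Content-Length: %d\r\n"
                    "\r\n"
                    "%s", n, body);
}
The rest of the work is the standard WINC1500 socket handling from the ASF examples.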
A Note About Firmware & Drivers
The WINC1500 module has firmware loaded, and that firmware version needs to match what's expected by the Atmel Software Framework version you are building against. You may see a message something like this on the debug console:
...
(APP)(INFO)Chip ID 1502b1
(APP)(INFO)Firmware ver : 18.0.0
(APP)(INFO)Min driver ver : 18.0.0
(APP)(INFO)Curr driver ver: 18.3.0
(APP)(ERR)[nm_drv_init][236]Firmware version mismatch!
main: m2m_wifi_init call error!(-13)
....
If you get a message like the one above, you'll need to update the firmware. You can find complete instructions in the Getting Started Guide for WINC1500 PDF on Atmel's website.
Briefly said, you need to load the WINC1500 Firmware Update Project example inside Atmel Studio, build it, and then run a batch file to load the compiled firmware to the WINC1500 board via the SAM D21 board. It is very simple to do, although I did run into a problem with an unquoted variable in one of the batch files, and a space in the pathname.
Once the firmware is updated, and matches the ASF version you are using to build the demo project, it should all work just fine.
Conclusions
After seeing this demo, I'm sure you'll agree that Atmel's Xplained boards and bip.io make a potent combination for rapid prototyping and product development. With this demo as guidance, you can quickly start using the powerful data services bip.io ties together and get your ideas flowing! With time-to-market such a critical factor these days, these tools will certainly help fuel the IoT revolution, and your future products.
Get started now! Here are those links again:
- Atmel Studio IDE software.
- A free bip.io account
- Our example code from github
As for this demo, it may just end up monitoring my hot pepper plants...
As part of our preparation for the IoT Evolution Expo in Las Vegas in August, we were happy to be able to work with some of our IoT hardware and data service partners. Together, we built a demo showing how several data services from the wot.io data service exchange came together to make up a complete IoT solution.
- Multitech IoT/M2M Connectivity Hardware Devices
- Stream Technologies' IoT/M2M Device Management & Connectivity
- Solair IoT/M2M Application Platform
- Circonus Event Metering and Monitoring
- bip.io Web API Automation
- scriptr.io Dynamic Data Scripting
This IoT Proof of Concept was based on events and readings from a coffee maker and some fans. We selected these because they are familiar and, more importantly, they demonstrate the types of instrumentation that can be applied to a wide range of use cases across many business verticals.
Multitech engineers added sensors to the coffee maker to measure the flow of water when coffee was made. These sensors were connected to a Multitech gateway, which then sent data to Stream Technologies' IoT-X platform.
Stream sent the device data to wot.io where we routed it to a set of data services that could then operate on the data. bip.io, scriptr.io, and Circonus were all configured to receive and operate on the incoming device data.
Device data was then routed to Solair where it was integrated with other information about the coffee maker in Solair's system to create a full application dashboard. This application provided all of the information you need for managing your infrastructure, combining asset data, like parts lists and schematics, with live sensor readings.
You can see a sample of the functionality provided by the various systems in the video below. Thanks to our partners for their help in putting together a great demo so quickly!
Last October, at ARM TechCon, we showed a demo of an NXP LPC1768 connected to a WS2812 24-color RGB LED. The hardware was programmed using the ARM mbed Development Platform and connected to the mbed Device Server. We used the wot.io operating environment to seamlessly integrate the data coming off the devices with a search engine, an analytics package, and a business intelligence platform.
In this tutorial, we are simply going to cover the basics of developing an application with the mbed compiler and a Freescale FRDM-K64F. We will connect the on-board Freescale FXOS8700CQ 6-axis accelerometer and magnetometer to a bip.io workflow.
Prerequisites
In order to follow along with this tutorial you will need:
- an mbed Developer account
- a Ship IoT bip.io account
- a FRDM-K64F board
- an ethernet cable and a micro-USB cable
The cost of the board is around $35, and you probably have spare cables lying around. For my setup, I used Internet Sharing on my MacBook Pro to connect the FRDM-K64F to the Internet. Optionally, you can wire your board to your switch, and it will receive a DHCP lease as part of the startup sequence.
The code for this tutorial can be imported directly from the public mbed repository.
Configuring your workflow
I am going to reuse a workflow from a prior tutorial, so if you have already done the Photon tutorial, this will feel like old hat. If you haven't, then first go to ShipIoT.net and sign in. You will then need to Create A Bip to create a new blank canvas:
You can then click on the central circle to Select Event Source:
Here we will select Incoming Web Hook to create an HTTP endpoint to which our FRDM-K64F will send its data. This is the starting point of our workflow. Once you select that, you will be presented with an icon in the center:
Above the central canvas you'll see a URL in which we want to replace Untitled with a proper path. For this workflow, we'll name it accel, which should produce a URL of the form:
http://<your_username>.api.shipiot.net/bip/http/accel
You can view this URL by clicking on the Hide/Show link icon next to the URL. The next step will be to add a Data Visualization element to the workflow, so that we can chart the values coming from the accelerometer. If you click on Add An Action, you will be presented with an options panel:
If you click on the Data Visualization option, you will be presented with a list of actions.
Here we want to select View Chart to create a graphical chart of the incoming web hook data. On the main canvas, we can then connect the Incoming Web Hook icon to the Data Visualization icon by dragging from one to the other:
This will link the incoming data to the chart. To configure the parsing options, we'll open the Parser tab and create some representative JSON in the left hand panel:
{ "x": 0, "y": 0, "z": 0}
and then click the Parse button to generate the associated JSON schema. We can then return to the Setup tab and double click on the Data Visualization icon to open up its settings.
First we can set the X axis value to the time that the FRDM-K64F sent the request by setting it to a custom value of Invoke Time:
We can then set the Y1 and Y2 values to the values X and Y respectively:
Clicking OK will then save these settings. Opening the settings a second time will present us with a Chart tab:
This will display the data as it comes in off of the webhook. The last thing we need to do is set the authorization settings on the URL. For our purposes we'll use HTTP Basic Authorization with a username and password of test.
The important thing here is to grab a copy of the HTTP Request Header. We will need to modify the source of the program to use this value to authenticate against the server. Feel free to use any username and password you'd like. Finally, click Save and your workflow will be running.
Developing the ARM mbed Application
As a picture is worth a 1000 words, a 15:48 @ 30fps video is worth 28,440 pictures:
In this video, I cover writing the application, and demonstrate the charting interface of the workflow we created above. If you don't feel like watching the video, just import the code into your mbed Developer account, and edit the three lines commented in the source to reflect your username and authentication choices.
You can then compile, download, and flash your FRDM-K64F board, and everything should just work! If it doesn't, odds are good that it is a networking issue. Should your device not be able to acquire an IP address, you won't see the debug messages on the serial console. Similarly, should the webhook not work, you can check the Logs tab for reasons. Often it is merely a copy/paste bug regarding the authentication token.
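One sanity check worth knowing: the Authorization value is just the word Basic followed by the base64 of user:password. With the test credentials above, the configuration lines might end up looking like this (the names here are illustrative; the real variables are marked by comments in the imported project):
// Illustrative configuration values for the imported mbed project
const char HOST[] = "yourname.api.shipiot.net";  // your Ship IoT hostname
const char PATH[] = "/bip/http/accel";           // the web hook path created above
const char AUTH[] = "Basic dGVzdDp0ZXN0";        // base64("test:test")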
Once you have it working, you can extend your workflow to perform additional actions. You can save your data to a Google Spreadsheet, or have it send a tweet to Twitter, or even control your Nest thermostat, or all of the above.
Getting Started with the Photon
The Particle Photon is a small open source WiFi dev board with an embedded STM32 ARM Cortex M3 microcontroller:
It supports a number of analog and digital inputs, and with some work can communicate with devices via TWI, I2C, or SPI. For this example, we're going to connect it to a triple axis accelerometer breakout board from Sparkfun:
This board has a Freescale MMA8452Q accelerometer which supports an I2C interface. The total component cost of these two prototyping boards is about $30 USD, making it rather inexpensive.
For our development environment, we will need both the Particle Dev and Particle CLI tools. All of the device driver code will be developed in the IDE, and the CLI will be used to setup the web hook to send our event data to Bip.io.
Writing a Device Driver
The Particle firmware mimics the Arduino API in a lot of ways, but there are some significant differences. These differences largely take three forms:
- new methods specific to the Photon's hardware
- different header files which need to be included
- the compiler is remote, in the cloud!
The fact that your compiler is not local to your machine means you need to bundle your projects in a Particle Dev specific way. All of your files must live in the same directory and are sent via HTTP to the compiler via a multi-part mime document POST request. This means you must supply all of your library code each time you compile or make a change.
The code for this device driver can be found at:
https://github.com/WoTio/shipiot-photon-mma8452Q
And you can get it via git clone using:
git clone https://github.com/WoTio/shipiot-photon-mma8452Q
The data sheet for the MMA8452Q can be found at:
http://www.freescale.com/files/sensors/doc/data_sheet/MMA8452Q.pdf
And it goes without saying, but you should download and save the data sheet somewhere for future reference. This chip has a lot of features that we aren't going to use or cover, and the data sheet provides information on using it for a wide range of applications including: tap detection, portrait vs. landscape orientation detection, and freefall detection.
The first file we're going to look at is the mma8452.h file:
Here we define a C++ class that will model the state of our MMA8452 accelerometer. Our default constructor will set the I2C bus address to 0x1d. If you jumper the pads on the bottom of the accelerometer board, you can change the address to 0x1c. If you do this, you will need to supply that address to the constructor.
In our initial setup phase, we will call begin, which will initialize the I2C interface, place the board into standby mode, and then set the scale and data rate to their default values. Finally, it will turn the data output on by exiting standby mode:
Setting the board to standby mode is done by clearing the low bit on the register 0x2a:
We start data flow by toggling this bit back to one:
The Freescale MMA8452 needs to be in standby mode to modify any of the control registers. To modify the data rate, we can write a rate factor to the control register 0x2a:
The rate factor defaults to 000, which per table 55 of the data sheet amounts to 800Hz. The I2C wire speed on the Photon has two settings, 100kHz or 400kHz, both of which are more than sufficient, by a couple orders of magnitude, to support the output data of our device. And since we're going to drive this off of a 5V 1A mains-powered supply, we're not going to worry about the low power modes. We could easily lower the sample rate, as we are unlikely to update the web API that frequently; changing the output data rate to something closer to 2x your polling frequency should be adequate.
To configure the scale (the range in G that our 12 bits represent) of the x, y, and z components, we need to write to the low two bits of the register at address 0x0e.
These bits, per table 16, set 2G (00), 4G (01), or 8G (10). Currently the value 11 is reserved, and should not be supplied. Since we're unlikely to swing this around like a dead cat, 2G is sufficient for testing. If you'd like to handle larger swings, feel free to bump the range up to 8G!
Finally, we need a way to test for the availability of data, and a way to retrieve that data once we have some. To test for availability of xyz data, we check bit 3 of the status register at address 0x00.
If we were only interested in the fact that one axis changed, or only wanted to look for movement in one direction, we could query other status register bits, but this is good enough for now.
Then to read the data once available, we can make a call to read the 6 bytes starting at address 0x01. This will give us X,Y,Z in MSB order, wherein we need to trim the bottom 4 bits of the LSB byte (as the sample size is 12 bits).
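In code, the read path comes out roughly like this (a sketch reconstructed from the description above; in is the I2C input helper covered next):
void MMA8452::read() {
    uint8_t raw[6];
    in(0x01, raw, 6);   // OUT_X_MSB (0x01) through OUT_Z_LSB (0x06)
    // Each axis is 12 bits, MSB-aligned across two bytes: combine the bytes,
    // then arithmetic-shift right by 4 to drop the unused LSB bits and keep the sign
    x = (int16_t)((raw[0] << 8) | raw[1]) >> 4;
    y = (int16_t)((raw[2] << 8) | raw[3]) >> 4;
    z = (int16_t)((raw[4] << 8) | raw[5]) >> 4;
}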
The actual input and output are done by a pair of routines which simply take the destination register plus an input or output byte array and length. The available and read methods use the in method, whereas the standby, start, scale, and rate methods use the out method to update the register values:
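The full source is in the repository linked above; as a sketch of the idea, the two helpers and the register toggles built on them look approximately like this using the Wire API:
// I2C input: point at a register, then read len bytes back
void MMA8452::in(uint8_t reg, uint8_t *buf, uint8_t len) {
    Wire.beginTransmission(addr);   // addr is 0x1d, or 0x1c if jumpered
    Wire.write(reg);
    Wire.endTransmission(false);    // repeated start keeps the bus
    Wire.requestFrom(addr, len);
    for (uint8_t i = 0; i < len && Wire.available(); i++)
        buf[i] = Wire.read();
}

// I2C output: write a single byte to a register
void MMA8452::out(uint8_t reg, uint8_t val) {
    Wire.beginTransmission(addr);
    Wire.write(reg);
    Wire.write(val);
    Wire.endTransmission();
}

// Standby clears bit 0 of CTRL_REG1 (0x2a); start sets it again
void MMA8452::standby() { uint8_t c; in(0x2a, &c, 1); out(0x2a, c & ~0x01); }
void MMA8452::start()   { uint8_t c; in(0x2a, &c, 1); out(0x2a, c |  0x01); }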
Writing a Sketch
The application we're going to use to forward our accelerometer data to our web application will simply initialize the accelerometer and then publish an event every second with the most recent values of x, y, and z.
For the sake of our sanity, we can also log the x, y, z values to the Serial interface, which will allow us to record the values as we see them. If you supply power to the Photon via your computer's USB, the Serial interface will show up as a USB modem / serial device.
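A minimal version of the sketch might look like the following, assuming the driver class described above (note that recent Particle firmware names the cloud object Particle; older examples use Spark):
#include "mma8452.h"   // the driver class described above

MMA8452 accel;         // default I2C address 0x1d
char payload[64];

void setup() {
    Serial.begin(9600);
    accel.begin();     // init I2C, default scale and data rate, start data flow
}

void loop() {
    if (accel.available()) {
        accel.read();
        snprintf(payload, sizeof(payload),
                 "{\"x\": %d, \"y\": %d, \"z\": %d}", accel.x, accel.y, accel.z);
        Particle.publish("accel", payload);  // the Cloud API webhook forwards this
        Serial.println(payload);             // mirror to the serial console
    }
    delay(1000);                             // one event per second
}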
You can understand the setup() and loop() functions in the context of the main.cpp file from the firmware. The .ino files get translated to C++; the preprocessor adds some header files, and your code is referenced from the main() function. The activities taken by main() are:
- set up the WLAN interface
- wait until the WLAN is ready
- the first time through the loop, call setup()
- then call loop() each iteration, forever
Should the WLAN interface fail to connect, the system will stay in the wait state, which means the LED will continue to blink green, and your application will not run.
Wiring up the Board
Now that we have our software ready for testing on hardware, it is a good idea to wire up the nets on a breadboard for testing.
For testing we connect the USB to our computer so we can watch the Serial output. For the pins we connect from the Photon to the breakout board:
- 3.3V to 3.3V
- GND to GND
- D0 to SDA
- D1 to SCL
Following Sparkfun's app note, I'm using 2 x 330Ω resistors, one between D0 and SDA and one between D1 and SCL. You should see roughly 3.3V on pins 4 and 5 of the breakout board when the application is idle (I2C is active low). If you have a raw MMA8452Q, look at figure 4 on the data sheet. To wire it up, you will need:
- 2 x 4.7kΩ pull up resistors tied to 3.3V from pin 4 (SDA) and pin 6 (SCL)
- a 0.1µF bypass capacitor attached to pin 1 (VDDIO)
- a 0.1µF capacitor tied to GND and pin 2 (BYP)
- a 4.7µF bypass capacitor attached to pin 14 (VDD)
As we're going to wave the accelerometer around, I am going to fix mine to a perma-proto board. We can use a fairly simple setup:
Here I'm using:
- 5 x 6 pin female headers
- 2 x 330Ω resistors for current limiting
- 22 gauge wire jumpers
For the wiring, I'm going to:
- jumper pin 1 of SV3 to pin 6 of SV1
- jumper pin 4 of SV3 to pin 1 of SV1
- tie one resistor to pin 6 of SV2
- tie the other resistor to pin 5 of SV2
- jumper the resistor tied to pin 6 of SV2 to pin 5 of SV1
- jumper the resistor tied to pin 5 of SV2 to pin 4 of SV1
This way I can replace both the breakout board and the Photon, or simply reuse them on other projects. I've also added a 6th header on the other side of my board so I can set up a second app with some analog sensors on the right side:
The bottom of the board looks like:
As you can probably tell, I've reused this board for a few other projects, but that has mostly involved resoldering the jumpers for the I2C pins.
At this point, you should be able to set up your WiFi connection using the Particle phone app, or you can use the particle setup wifi CLI command to configure your board. Once your board is connected to your WiFi, you can compile and upload code using the cloud button to flash your device with the app.
Configuring your workflow
Over at Shipiot.net, you can sign up for a free bip.io account. Bip.io is a data service that provides integrations into a number of web applications. It allows you to create automated workflows that are attached to your device's data.
For this tutorial, we will connect our data to a web based graph. We will use this graph to visualize the data. Later we could attach actions based on patterns we discover in the data. For example, if we attached the Photon and accelerometer to an object, each time the object moved, we could have Twilio send a text message to our phone, and we could record each movement to a Google Spreadsheet for later analysis.
Once you click the Create A Bip button, you will be taken to a blank workflow canvas:
If you click on the center circle, you will be able to Select Event Source:
For integrating with Particle.io's Cloud API, we will select Incoming Web Hook, which will allow us to process each event that is sent by their webhook to our workflow. After selecting Incoming Web Hook, your canvas should look like this:
Above the canvas, there is a URL bar with a path component set to Untitled. Change this to accel so that we have a path that looks like:
http://<your_username>.api.shipiot.net/bip/http/accel
We will need this URL to set up the webhook on the Particle.io Cloud API. Before we do that, however, we should finish configuring the remainder of the workflow so that the Cloud API doesn't error out while attempting to connect to an endpoint that doesn't exist!
Next we'll add a chart to visualize the data coming in off of the X and Y components of the accelerometer. The first thing to do is click Add An Action, which will bring you to an action selection panel:
Here we will select Data Visualization, which will enable us to plot the values sent by the device. Clicking it will bring us to a subpanel:
To view the data in chart form we'll obviously pick View Chart, but we could just as easily have generated a visualization that would have allowed us to view the data as JSON, or simply see the raw data as it enters the system. This is very handy for debugging communications between elements in our workflow.
Once we've selected the View Chart option, we will be presented with a canvas with two nodes on it:
Now, by dragging and dropping from the Incoming Web Hook icon to the Data Visualization icon, we can create a data flow from one to the other:
All of the messages that come in at our URL will now be sent to the chart. But in order for us to plot the data, we need to describe the contents of the message. Clicking on the Parser tab will bring you to a two panel interface that looks like this:
Into the left panel, we will enter some JSON that looks like the JSON that our application sent to the API:
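The sample only needs to mirror the shape of the event payload, so for this sketch it is simply:
{ "x": 0, "y": 0, "z": 0 }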
We then click the Parse button to generate the associated JSON schema:
We can now use these values as inputs to our chart. If we double click on the icon for the data visualizer, it will bring up a control panel for setting a number of values. Scroll down to the bottom of the panel and you'll see entries for setting the value of X, Y1, and Y2. For the X value we'll use the time of the incoming request:
We can then set the Y1 and Y2 values to the accelerometer's x and y values respectively:
Once you click OK, it will save the configuration. Double clicking the icon again will present you with additional tabs, including one for the Chart:
Here we can copy the URL and open it up in a new browser window to see the data as it comes in.
The last thing we need to do before saving our workflow is to set up some credentials for the webhook. By selecting the Auth tab under the webhook panel, we can change the authentication to Basic Auth, and in my case I'm going to use test:test for submitting data to my webhook:
We will also need the Authorization header later to configure the webhook on the Cloud API side of things. Clicking Save or Save and Close will start your workflow running!
Configuring the Cloud API
For the remainder of the setup, we will use the particle CLI tool to interact with the device and the Particle Cloud API. The particle CLI tool allows us to do basically everything we need, including flashing the chip.
To compile our source code we can use the particle compile command:
This will save our firmware image to a .bin file in the local directory. We can flash this via USB or over WiFi. To flash over WiFi, we need the device ID from the listing, which we can get with particle list.
Here I'll flash my device darkstar02 using the particle flash command:
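The whole compile, list, and flash cycle from the command line looks something like this (substitute your own device name for darkstar02):
particle compile photon . --saveTo firmware.bin
particle list
particle flash darkstar02 firmware.bin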
Being able to flash your device remotely is really handy when you've installed the device somewhere. You can also provide the source files to the particle flash command, and the cloud will attempt to compile it and flash it too, but I like having a copy of the firmware for flashing again at a later date.
Once the device is done flashing and the LED is breathing teal, we can attach to the serial monitor using the particle serial monitor command:
As you can see, I shook the device a couple times to make sure it was working. If you don't see any serial output, that probably means your sketch isn't working. Unfortunately, debugging via the serial monitor doesn't offer full debugger support.
Assuming you are seeing data in the serial monitor, you can then set up the Particle Cloud API webhook. Edit the accel.json file to contain your device details:
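The exact contents depend on your setup, but an accel.json along these lines works (see the Particle webhook documentation for the full set of supported fields; the Authorization value is the Basic auth header copied from the bip):
{
  "event": "accel",
  "url": "http://yourname.api.shipiot.net/bip/http/accel",
  "requestType": "POST",
  "headers": { "Authorization": "Basic dGVzdDp0ZXN0" },
  "json": { "x": "{{x}}", "y": "{{y}}", "z": "{{z}}" },
  "mydevices": true
}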
Once you've set up your device and url settings, you can create the webhook using the particle webhook create command:
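That is simply:
particle webhook create accel.json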
This webhook forwards all of the accel events published by the device to the url specified in our accel.json file. We can sample these events using the particle subscribe command to view the messages as they arrive from the device:
Because the webhook has the json line in it, the data sent to our URL will have the x, y, and z values extracted from the data field and placed in the top level of the event. The json sent to the webhook also contains device details, ttl, and other metadata as well.
Viewing the Data
If you remember the chart URL from earlier, after a few seconds you can pop over to the chart and see the new data as it comes in. You will need to be logged into shipiot.net to view the data. Here's a sample with me shaking the device:
At this point, you can go back and add new actions to your workflow, and have them trigger based on changes in the state of your device. Send a tweet. SMS your friends. Record your shaking to a spreadsheet.
Snow, the freshest bip.io, has just been made available through both the public repository and npm, and contains some significant improvements to its data model, general concepts, user interface, and tooling. Many hundreds of improvements have distilled bip.io into a stable, easier to use, and vastly more powerful tool.
While it doesn't introduce any backwards-breaking changes for older installs, the way some concepts are communicated and used may instead just break your brain. Don't worry, it's a good thing.
I'm pleased to welcome bip.io version 0.4 (Snow). Let me show you what's inside.
Channels, Be Gone!
The concept of Channels is the biggest conceptual pain for users. A 'Channel' has always been a container that bip.io can use to hold persistent configuration for an action or event, whether needed or not. The requirement of the old data model was to create a Channel explicitly, and then use that in a bip. In most cases, this configuration wasn't required to begin with as there was no configuration to store. This meant that you would soon fill up your lists of saved actions with junk channels while creating new bips, and feel increased pressure to maintain a mental model of how things were named, where they lived, and what bips they were used for. It's what you might call a leaky abstraction.
With that in mind, Channels have evolved into something new, called 'Presets', which you usually won't have to think about.
An intended side-effect of this change is there's now freedom to use actions how and where you like, multiple times, in the same bip. This is perfect for creating logical gates, performing calculations and transforming data. I have been literally squealing at how easy it is to build functional pipelines now. Dropping channels reduces the barrier to entry significantly, and we've been exploding here with new bip possibilities!
So Where Did They Go?
While Channels still exist under the hood, and this doesn't break any old bips that use them, they are now only necessary in very specific cases, which the User Interface now takes care of for you. All your old channels will still be available after this update, they've just been repackaged as 'Presets', which can be found by entering a node's Personalizations screen.
The way Presets work is that they take what were previously configuration options for channels, and merge those configurations with personalization. This does a couple of important things:
It means that you now have the flexibility to override an otherwise protected configuration option from within a bip itself. Take Tumblr (or any blog integration) for example. Previously you would need to create one channel for every permutation of URL and post status (draft/published/image/video etc etc), appropriately label those channels, and maintain a mental picture of where they all were in the system. Now you can save those permutations as presets, if and only if you want to, or otherwise override them for specific nodes in specific bips in ways which won't have unintended side effects.
It clarifies the architectural intent of Channels. Merging of channel configurations and imports already happened on the server side, but this was never clear to users or developers. Aligning the experience with the API expectation means there's fewer surprises in making the leap into bip.io development.
Additionally, you'll notice the Pods screen has been completely removed. We're still playing with this idea and it might be re-born as something else in the near future. For now however, it's been disabled. Everything that could be done in the Pods screen can now be done when configuring a Bip itself.
I haven't touched on how this is modeled in the API as that's more of a developer concern. For developers, you can find updated API documentation here, and templating examples here. In a nutshell, Snow templating uses JSONPath exclusively, with channel ids or action pointers (eg: 0e7ab3fc-692e-4875-b484-834620d1c654 or email.smtp_forward) usable interchangeably.
RPCs Overhaul
The RPCs tab has also been dropped, and every RPC available for a channel, action, or event appears as its own tab under Personalizations. Additionally, RPCs are displayed inside their own tabs, with a link available if you want to break one out into its own window. This means that Pods can start composing their own interactive experiences in the context of an active graph.
Actions Quick Search
The list of saved events and actions in the left sidebar has been emptied and will not show any results until you start searching. Only action and event names will appear in these results now, not the channels themselves. Once you select an action and drag it onto the canvas, you can get to the channel information by opening the Personalizations dialog and selecting a Preset.
Search fields have also been added to the Action/Event select dialogs across the board, keyboard shortcutting and tab key awareness are a work in progress.
Native Functions
We've packaged up a handful of native system pods into something called Functions. Functions is a dropdown which you'll find on the bip editing canvas and consists of all actions from Flow Controls, Templating, Time, Math, Crypto, and a brand new Pod called Data Visualization.
You can find the full manifest of Functions in our knowledge base.
Data Visualization
Data Visualization is especially awesome because you now have the ability to track data as it travels through a bip, from raw or structured data to even displaying a chart over time. These logging and visualization functions are perfect for debugging. The actions are View Raw Data, View JSON and View Chart.
Like any other action, data visualizations can be linked to any other node in the bip, capturing and displaying whatever they receive. Once configured, double-click on the node and go to the function's appropriate RPC tab. The nodes will only show data they've received while your edit view is open.
Here's a quick video to show how to get it running. You'll also notice there's no channel or preset management involved, the full library of services is instantly and always available.
Or just install this yourself
Lots Of Love For Web Hooks
Web hooks are getting more and more use as you unlock the power of bip.io with your services and devices. We drove through some significant improvements that make integrating web hooks with your applications and devices a truly joyful affair.
Web Hook Parser
An ongoing frustration with Web Hooks is that they never really knew what data they would receive, and therefore that data couldn't be transformed very easily between different actions. Parser accepts the expected JSON payload from a device or application, has bip.io understand its structure, and then makes it available for any personalization. This can be found as a new tab which appears when configuring an Incoming Web Hook.
For example, pasting a sample callback payload from a service like Unbabel makes its attributes available for personalization.
Although this example is simple, the Parser supports fully formed structured JSON, nested documents, arrays, and so on. Any part of this complex structure can be extracted and personalized per node.
Testing
Coupled with the Parser feature, Testing (via the Outbox) has now become much more powerful and easy. When you test a Web Hook, the payload will be pre-filled with a sample payload based on your Parser example, as well as provide a cURL string for the endpoint, including authorization headers that can simply be copied and pasted into your console.
And That's Not All Of It!
Don't think that with so many big features to talk about, we've ignored the smaller stuff. You'll find tons of small tweaks and improvements across the system, and every Pod has also received a once-over. It's fantastic to be able to go back to support tickets with fresh solutions!
In other news, more support is incoming for developers and device integrators via Enterprise ShipIoT. Expect many more pods to appear in the coming months and sign up to ShipIoT to stay abreast of Internet Of Things integrations!
The feedback we've been receiving has been fantastic, please continue to share whatever's on your mind.
Enjoy.
When we got our hands on a couple of Electric Imps here in the office, we set about to see how we could use their ARM Cortex processors and WiFi capabilities. Within minutes of plugging the Imps in and loading up some example code in their cloud-based IDE, we had our first successful internet<->device communication going. Cool! That was simple enough, so onto something slightly more interesting:
In the software world, every example starts with printing 'Hello World'. Well, in the device world, the equivalent is the famed blinking LED. So for that we'll need:
- an Electric IMP
- a 330Ω resistor
- a breadboard
- some jumper wires
- an LED, of course.
We're using an IMP card and the standard April Breakout Board for our little project. You can purchase one as well through that link. Wiring up the board as shown here, and again grabbing some example code that we dropped in the IMP cloud IDE, we were able to push the code to the device. Sure enough, our LED began to blink!
Manually configuring the LED to loop on and off every 5 seconds is neat 'n all. What would be more interesting, though, is to have our Imp respond to events sent over the internet. So, onto setting up our agent code to handle incoming web requests:
function requestHandler(request, response) {
    // decode the JSON body of the incoming request
    local reqBody = http.jsondecode(request.body);
    // forward the "led" value (0 or 1) down to the device
    device.send("led", reqBody["led"]);
    // send a response back to whoever made the request
    response.send(200, "OK");
}
http.onrequest(requestHandler);
and our device code:
led <- hardware.pin9;
led.configure(DIGITAL_OUT);
agent.on("led", function (value) {
if (value == 0) led.write(0); // if 0 was passed in, turn led off
else led.write(1);
});
Now we can send an example request:
curl -X GET -d '{"led":1}' https://agent.electricimp.com/XJOaOiPDb7UA
That's it! Now whenever we send a web request with {"led": 0 | 1} as the body of the message, we can send voltage through to pin9 and control the state of our LED (on or off), over the internet!
Pretty cool.
We'll leave securing the control of our device to some application logic that you can write into the agent code, which for anything more than a blinking LED you'll absolutely want to do.
With the Electric Imp we're essentially demonstrating that it is now possible with just a few lines of code to remotely manage and control a physical thing that you've deployed out in the field somewhere. We've attached a simple red LED to one of the GPIO pins, but of course you can start to imagine attaching anything to the Imp, and have that thing now be a real 'connected device'.
Tails
One other cool offering from Electric Imp is their Tails extensions which make it super-easy to start reading environmental data. Ours has sensors to read temperature, humidity, ambient light, and even the barometric pressure. They're designed to work with the April dev board, so we grabbed one and connected it up as well.
Some quick changes to the agent code:
function HttpGetWrapper (url, headers) {
local request = http.get(url, headers);
local response = request.sendsync();
return response;
}
function postReading(reading) {
    local headers = {};
    // flatten the reading into a comma-separated string for the query parameter
    local data = reading["temp"] + "," + reading["humidity"] + "," + reading["light"] + "," + reading["pressure"] + "," + reading["timestamp"];
    local url = "http://teamiot.api.shipiot.net/bip/http/electricimp" + "?body=" + data;
    HttpGetWrapper(url, headers);
}
// Register the function to handle data messages from the device
device.on("reading", postReading);
<device code omitted for brevity>
And with that we are setup to handle reading sensor data and sending it over the internet. But to where?
Ship IoT
The url line from above is where we want to point our data to, e.g.
url <- "http://teamiot.api.shipiot.net/bip/http/electricimp" + "?body=" + data;
which is an incoming web-hook URL we setup in bip.io to receive our sensor data into the cloud. Once we have the url setup in bip.io, we can start to realize the value of that data by connecting easily to a range of web-services.
Let's setup that URL:
1. Create an account on bip.io through Shipiot.net.
2. Set up our bip. The following video shows us how:
3. There is no step three! We're done.
Open up the sheet in Google Drive and you'll see that our Imp is sending its four (4) sensor readings (temperature, humidity, ambient light, and barometric pressure) right into our bip.io 'bip', which automatically handles sending and recording the data to a Google Spreadsheet.
There are dozens of different services you could also connect your device data to from within bip.io; it all depends on what use case makes sense for you.
wot.io + Electric Imp =
With the recent release of Electric Imp's Build API, we were also able to set up an adapter to securely command and control our entire collection of Imps from within the wot.io Operating Environment (wot.io OE), including:
- Send commands to activate and control a particular device, e.g.:
["send_data", < Agent Id >, "{\"led\":1}"]
- Send commands to read data off of any specific Imp (or read data from all of them).
- List all of the devices - and query their current state - registered under each account.
- Review the code running on some (or all) of the Imps.
- Remotely update the code running on some (or all) of the Imps.
- Remotely restart some (or all) of the Imps.
- View the logs of every device
- and more...
which, when connected to the range of data services offered through the wot.io data service exchange, really starts to unlock the potential value of amassing connected-device data on an industrial scale.
We've been having some fun with the Philips Hue smart lighting system and I wanted to expand the interactivity beyond the local office WiFi network. We had an Imagination Creator Ci20 (version 1) board available, so I thought it would work as a good gateway to pull data from the Philips Hue bridge and send it to some online services with one of the wot.io data services; bip.io.
To keep it simple, I decided to share one value: the current hue setting of a single one of our lights (see the Hue documentation for details on how it defines light values). To get the value, I wrote a Perl program on the Ci20 to connect to the Hue gateway using the Device::Hue module. The program finds the correct light (we have several), pulls out the hue value, and then sends it along to our bip.io instance set up for Ship IoT. My bip then calls Numerous and updates my hue metric.
Details
First I set up the bip so I would have an endpoint to send data to. It's a simple one with a web hook (HTTP API) for input and a single call to Numerous. The Numerous pod configuration involves activating the pod with your Numerous developer API key, creating a number in Numerous, then providing the metric id for the number you created in the Numerous app as configuration to the pod (see the video for details).
If you're not familiar with Numerous, it's a mobile app that displays individual panels with important numbers on each panel. If you install the app and search on "wot.io" you'll find our shared "wot.io Buffalo Hue" number. Then you can see how our Hue changes and maybe set one of your lights to the same color as ours.
Once the bip is created, you have an endpoint. Next is to send data to it using the Ci20 board and a short Perl program.
The Ci20 board uses a MIPS processor and runs Debian out of the box. Add a monitor, keyboard, and mouse and you're ready to go. The board has WiFi connectivity, so once the Debian desktop came up, I used the desktop GUI tool to connect the board to the same network the Hue gateway runs on.
Perl is part of the standard install. There are many ways to install the Device::Hue module; cpanminus is likely the fastest:
sudo apt-get install curl
curl -L https://cpanmin.us | perl - --sudo App::cpanminus
cpanm --notest --sudo Device::Hue
You can find the program I used in the wot.io github project. The values you need to run it are all listed at the top:
- Base URL of the Hue bridge on your network
- A username or "key" for the Hue bridge (instructions)
- The name of the light you want to pull data from
- URL of your bip endpoint and token
Once it's configured, you can run the program and your hue value is updated on Numerous!
Another interesting idea to extend the project is to schedule a job in cron, store the hue value, and send updates only when the value changes. It would also be fun to set up an endpoint on the Ci20 to receive a value back and set the hue value. Maybe a project for next time, or maybe someone will beat me to it and send me a pull request.
Prerequisites
In this tutorial, we will cover how to use bip.io to connect a TI Launchpad module with dedicated ARM MCU to a Google Sheet. What you'll need to follow along:
- a TI Tiva-C Launchpad or TI Stellaris® LM4F120 LaunchPad
- a SimpleLink Wi-Fi CC3100 BoosterPack
- a MMA8452Q Triple Axis Accelerometer
- two 330Ω resistors
- a breadboard
- a fistful of jumper wires
- a multimeter
- (and if you don't have a breakout board, two additional 10kΩ resistors)
Some additional tools that are useful but not absolutely necessary (but you should definitely acquire):
- an oscilloscope
- a Bus Pirate
In addition to the physical goods, you will also need:
Configuring bip.io
Before going in and creating your workflow, it is a good idea to first set up a new Google Sheet for this project. The two things you should do are create a doc named ShipIoT and rename Sheet1 to accel.
The integration with Google Sheets in bip.io can use both of these values to identify where to put your data. By renaming the sheet, you can easily save the data for later.
Next we'll click the Create A Bip button to start the workflow for collecting our accelerometer data in a Google Sheet. Once you click that button, you will be presented with a blank canvas:
From here, it takes about two minutes to configure the workflow:
If you didn't catch all that, click on the circle in the center, and you will be presented with a menu of event triggers:
Select the Incoming Web Hook option so that our CC3100 board will be able to connect to bip.io via the HTTP connection used by the library. This will generate the following view:
You can now see the center circle has changed to our Incoming Web Hook icon. We also have a URL above that, broken out into parts. We need to provide the final component in the path, which is currently Untitled. For this example, let's name this sheets.
Clicking on the Show/Hide Endpoint button will assemble the URL into one suitable for copying and pasting into your application.
Next we'll add an action to add the data sent to the Incoming Web Hook to our Google Sheet. Click the Add an Action button to open the action selection modal:
Select Google Drive, and if this is your first time through, proceed through the activation process. When you have Google Drive selected, you will be presented with a set of individual actions that this module can perform:
Select the Append To A Spreadsheet action. We'll later configure the properties of this action to write to the Google Sheet we created earlier. But first, we'll drag and drop from the Incoming Web Hook to the Google Drive icon to link them.
After they are linked, we'll describe the data coming from the Incoming Web Hook by selecting the Parser tab. In the left hand panel, we can type some sample data that the accelerometer will send:
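For this sketch, the payload has the same x/y/z shape used in the other tutorials:
{ "x": 0, "y": 0, "z": 0 }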
If we then hit the Parse button, it will generate a JSON schema that we can use to parse the incoming messages. There's no need to save the schema, as we will be using it in just a moment.
Next, back at the Setup tab, double click on the Google Drive icon, and you'll be presented with an options screen. Fill out the fields according to how you set up your Google Sheet. In my case, I named the spreadsheet ShipIoT and the worksheet accel:
If you then scroll down, you'll see a section for the New Row contents, where we will use the Custom option to select the Attributes of the incoming message to populate. Because we configured the JSON schema before coming here, we will be presented with a list of fields that correspond to our device's data.
Then all that is left to do is to click OK and then Save, and our new workflow is now running. We can use the Auth tab to change the authentication credentials on the endpoint. By default, it uses your user name and your API Token (found under settings) as the password. Selecting Basic Auth allows you to change the username and password, which is generally a good idea if you want to make the service available to someone else.
Setting up the Board
Once you have all of your components together:
- insert the MMA8452Q breakout board in column j
- trim the leads on two of the 330Ω resistors
- insert the 330Ω resistors in series with the SDA and SCL pins bridging the gap
The board should look like this:
It is a good idea to leave some free holes between the resistors and the board so that you can safely probe the lines. These two resistors are there for current limiting on the data (SDA) and clock (SCL) of the I2C bus.
Next we'll add some jumpers to the board. I'm going to use red (3.3V), black (GND), yellow (SCL), and green (SDA), but you can pick your own favorite colors. My pin out will look like:
- Red f1 to 1.01 (3.3V)
- Black f6 to 3.02 (GND)
- Green a2 to 1.10 (I2C1SDA)
- Yellow a3 to 1.09 (I2C1SCL)
The breadboard will look roughly like this:
On the other side, the CC3100BOOST should be installed on top of the TIVA-C or LM4F120 Launchpad. The jumper wires should look like:
If you only have male-to-male jumpers, you can wire up the back of the Launchpad as follows (remember that everything is backwards!):
Verifying the I2C address (optional)
If you have a Bus Pirate handy, now is a great time to verify that your accelerometer is working. To start, we'll make sure we have the right leads:
Once you're certain you have the correct leads with 3.3V and ground, it is time to test whether your MMA8452 is working correctly. The Bus Pirate allows us to search the I2C bus for the address of the device. If you're using the Sparkfun breakout board, there are two possible addresses: 0x1D or 0x1C. By tying the jumper on the bottom to ground, you can select the second address. To make sure you have the right address, you can search the I2C bus this way:
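In the Bus Pirate terminal, select I2C mode with the m command, then run the address search macro (1). With the default address, the output should look something like this (0x3A and 0x3B being the 8-bit write and read forms of 0x1D):
I2C> (1)
Searching I2C address space. Found devices at:
0x3A(0x1D W) 0x3B(0x1D R)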
Programming the Board
Now that we have verified that the MMA8452 is working, we can program the board. If you don't like typing, you can download the code for this tutorial from Github.
git clone https://github.com/WoTio/shipiot-cc3100-mma8452.git
You will need to install the directories contained within into the libraries directory for your Energia setup. On Mac OS X, you should be able to place both the ShipIoT and SFE_MMA8452Q directories into ~/Documents/Energia/libraries.
Once all of the files are installed, open the ShipIoTCC3100MMA8452Q.ino file in Energia, and you will see the following source:
The things that you need to change to work in your environment (see the example after this list):
- change the ssid to your network's SSID
- change the password to your network's password
- change the bip.io URL, user, and password
- change the address passed to accel if necessary
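For example, the lines you're changing might end up looking like this (all values here are placeholders; match the names to the commented lines in ShipIoTCC3100MMA8452Q.ino):
char ssid[] = "yournetwork";       // your WiFi SSID
char password[] = "yourpassword";  // your WPA passphrase
#define HOST "yourname.api.shipiot.net"
#define PATH "/bip/http/sheets"    // the endpoint we named above
#define AUTH "Basic dGVzdDp0ZXN0"  // base64 of your Basic auth user:password
MMA8452Q accel(0x1D);              // pass 0x1C if the address jumper is tied low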
If you use my credentials from the image, you will get an access denied response from the server. But points if you tried it :) Plugging in the board, and selecting the correct Tools >> Board and Tools >> Serial Port from the Energia menu, should make it possible to just compile and load onto your board.
Once running, open up the Serial Console and you should see a sequence of dots as it connects to your WiFi network and obtains an IP address; then the device should start making HTTP requests to your URL. If everything is working correctly, you should start seeing data appear in your spreadsheet.
Odds and Ends
If it all worked, you should have seen results appearing in your spreadsheet within seconds of the device finishing its programming. Though in the real world, not everything goes as planned. In fact, the first time I tried using the Wire library, I got no activity on any of the pins. So I hooked it up to a scope:
If you look at SFE_MMA8452Q.cpp line 45, there's a special Wire.setModule(1) call which is necessary for selecting which of the I2C interfaces Energia's Wire library should use. After discovering this method, I was able to get something on the scope:
In this picture the SCL is on probe 1, and the SDA is on probe 2. I2C is active low, so all you'll see if it isn't working is a 1.8V trace. You can see that the clock isn't terribly square, and there is some ringing, but nothing too much.
If you run into issues with I2C, this is a good place to start looking. And always remember to ground yourself, test with your multimeter, and double check with your scope. The Bus Pirate was also very useful in ensuring that the I2C communications were going through once everything was running. Macro (2), the I2C sniffer, can be a very powerful tool for finding communication problems.
Setup
For this tutorial you will need to sign up for a set of accounts:
- a Ship IoT (bip.io) account
- a Slack account
- a LittleBits Cloud Control account
You will also need to acquire a CloudBit, based around a Freescale i.MX23 ARM 926EJ-S processor, from LittleBits. The parts we will use for this tutorial are:
- a 5V 2amp USB power supply
- a micro USB cable
- p3 usb power
- i3 button
- w20 cloud
- o9 bargraph
They should be assembled in series as follows:
Instructions for connecting the CloudBit to your local WiFi network for the first time can be found on the CloudBit Getting Started page. If you have already set up your CloudBit, you can change the WiFi settings by going to your CloudBit's page, opening Settings, and selecting the following option:
If you don't already have a Slack account, you can create a new team and setup your own #littlebits channel for testing:
So by now, you have signed up for Ship IoT, have a CloudBit sitting on your desk, a Slack channel, and are wondering what's next?
Creating our first Bip
When you first Sign In to Ship IoT, you will encounter a friendly green button that looks like this:
Clicking that button will take you to a blank canvas onto which you can install an Event Source:
By clicking on the target in the middle, you will be presented with an Event selection screen:
We'll select "Incoming Web Hook" to provision a URL to which our CloudBit will send messages. In the field that says "Untitled", enter a path of "littlebits" for our Event Source, and we should now have an event trigger on our canvas:
Next we will "Add An Action" which will bring us to an action selection screen:
If you scroll down a bit, you will find a Slack pod which we can activate. Your first time through, it will ask you to sign into your Slack account and authorize Ship IoT to access it. In the background it will provision a new access token and send you an email notifying you of that. In the future, you can deactivate this token through the Slack interface.
After you have activated the pod, you will be asked to select an action to perform:
In this case, our only option is to "Post to Channel". Selecting this action will result in another node in our bip:
Double click on the Slack icon to open up the action's preferences:
We can give the bip.io bot a name of "LittleBits":
We can select the "Channel ID" either using the "Use Preset" button which will attempt to discover the list of channels you have created, or you can supply a custom channel id:
Finally, we need to specify the "Message Text", and for this we will send the full message sent by the CloudBit by selecting "Incoming Web Hook Object":
After clicking OK, we can now link these together by dragging and dropping from the purple Event Source to the Slack action:
Now whenever a message is sent to https://yourname.api.shipiot.net/bip/http/littlebits it will be sent to our "Post to Channel" action!
Well, not exactly. We still need to allow LittleBits to send us messages. Under the "Auth" header, we can change the authentication type to "None":
Turning off auth makes our URL a shared secret between us and LittleBits. Anyone with that URL will be able to send messages to our Slack channel, so take care not to share it!
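Before wiring up the CloudBit, you can sanity-check the new endpoint yourself. Here's a minimal sketch in Python using the requests library; the URL uses the placeholder name from above, and the payload shape is purely illustrative:
import json, requests

# Placeholder URL: substitute your own Ship IoT username
url = "https://yourname.api.shipiot.net/bip/http/littlebits"

# An illustrative test payload; real CloudBit events will look different
msg = {"type": "test", "payload": {"percent": 100}}

r = requests.post(url, data=json.dumps(msg),
                  headers={"Content-Type": "application/json"})
print r.status_code  # expect 200 once auth is set to "None"
If the bip is wired up correctly, a message built from that payload should appear in your #littlebits channel.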
Configuring a LittleBits Subscription
Configuring your CloudBit to talk to bip.io requires using the command line for a little bit. First we will need a little information from our Settings panel:
We will need to record both the Device ID and the AccessToken. These are needed to set up the subscription to our bip.io application.
Setting up a subscription requires a bit more black magic on our part. The CloudBit API Documentation describes how to make an HTTP request to register our device with a 3rd party subscriber. In our case, we would like to register our Incoming Web Hook URL as a subscriber to our CloudBit. To do so, we'll write a small bash script using curl, which we'll name "subscribe":
#!/bin/bash
DeviceID=$1
AccessToken=$2
URL=$3
EVENTS=$4
curl -XPOST \
-H "Accept: application/vnd.littlebits.v2+json" \
-H "Authorization: bearer $AccessToken" \
https://api-http.littlebitscloud.cc/subscriptions \
-d "publisher_id=$DeviceID" \
-d "subscriber_id=$URL" \
-d "publisher_events=$EVENTS"
To use this script, we need only make it executable and run it:
$ chmod u+x subscribe
$ ./subscribe yourDeviceId yourAccessToken yourUrl "amplitude:delta:ignite"
This will cause the URL you supply to be contacted each time the amplitude changes from low to high. If you want the value reported periodically instead, you can use an event of just "amplitude" to get a message roughly every 750ms.
If the script works you will get a message back like:
{"publisherid":"xxxxxxxxxx","subscriberid":"http://yourname.api.shipiot.net/bip/http/littlebits","publisher_events":[{"name":"amplitude:delta:ignite"}]
This means your subscription has been registered and events are now flowing through the system.
Testing it out
If you push the button now:
A message should show up in your #littlebits channel:
You can use this same technique to drive any number of workflows through bip.io. All at the press of a button.
wot.io has made it much easier to ship IoT projects with Ship IoT, currently in beta, which makes it easy to prototype smaller IoT projects with maker boards and starter kits. Ship IoT is a deployment of one of our wot.io data services, bip.io, a web API automation system that makes it easy to connect to open APIs for dozens of services and automate data workflows.
As an example of the types of projects we think users might be interested in, we put together a simple project using the Kinoma Create--which packs an 800 MHz ARM v5t processor--as the device. Using the HTTP API provided by Ship IoT Lite, we're able to send data and events from the Kinoma Create to Ship IoT Lite, and then to other services like Twitter. Here's a video showing how we did it.
The IDE used there is Kinoma Studio, which is the tool you use to deploy code to the Kinoma Create. You can find the sample code in our Kinoma project on GitHub. I shared the simple Twitter bip in Ship IoT so you can get up and running quickly.
Ship IoT is free to sign up, so give it a try today!
At wot.io, we're proud to be able to accelerate IoT solutions with our data service exchange™. We've put together this Ship IoT tutorial to show you how you can use the Texas Instruments based Beaglebone Black board and its ARM Cortex-A8 processor with one of our data service providers: bip.io, giving you access to web API integration and automation with 60+ services.
This blog explains how to use the BeagleBone Black in a project that enables you to send a tweet with the press of a button! After implementing this, you should be ready to replace that button with any sensors you like, and to use bip.io to create interesting work flows with the many web services that it supports.
What you'll need
- A BeagleBone Black
- An internet connection for it
- A breadboard
- A pushbutton
- A 1kOhm resistor
- Some wires
Step 1 - The hardware
Let's start by wiring up the button on the breadboard. The image below shows the circuit layout.
The pins on the switch are spaced differently in each direction, so they will only fit in the correct orientation on the breadboard. An astute eye will also notice that a 10kOhm resistor was used in our test setup, as we didn't have any 1kOhm ones lying around!
The two connections on the P8 header are both GND (pins 2 and 12). The connection on the P9 header (pin 3) is 3.3V. What we're doing is pulling down the input pin (P8-12) so it stays at 0V until the button is pressed, at which point it goes up to 3.3V.
If you'd like to know what all of the pins on the board do, check out http://stuffwemade.net/hwio/beaglebone-pin-reference/.
Step 2 - Connecting to Ship IOT Lite
Now, before we get into writing any code, let's set up a Ship IOT Lite account, along with a bip that tweets. You'll need a twitter account to complete this step, so go to twitter.com and sign up if you don't have one. Then you can go to shipiot.net and follow the instructions in the video below.
And you're done! You can test the endpoint by filling in your username, API token, and bip name and running the following:
USERNAME='shipiot_username'
APITOKEN='api_token'
BIPNAME='bbbtweet'
curl "https://$USERNAME:$APITOKEN@$USERNAME.shipiot.net/bip/http/$BIPNAME/" -H 'Content-Type: application/json' -d '{"title":"BBB", "body": "Check it out - I just tweeted from ShipIOT Lite."}'
Then, in a few seconds, you should see the message pop up in your twitter account! Note that twitter has a spam-prevention feature that stops you from sending duplicate messages, so if you want to test multiple times, make sure you change the message body each time.
Step 3 - The software
For the device code, we're going to write a simple application in Python. If you don't know Python, don't be afraid to give it a shot - it's a very straightforward and easy-to-read language, so even if you can't program in it you should still be able to understand what's going on.
Going through the basics of how to use the BeagleBone Black is outside the scope of this tutorial, but as long as you can SSH into it and it has an internet connection, you are good to go. You can check out the getting started page for help with that. We will be listing the instructions for both Angstrom (the default BBB Linux distro) and Debian, which is a bit more full-featured.
First, we're going to install the required libraries. To do this we'll use pip, Python's package manager. But first, we'll need to install pip itself.
On Angstrom (the default Linux distro, if you haven't changed it), run:
opkg update && opkg install python-pip python-setuptools python-smbus
On Debian, run:
sudo apt-get update
sudo apt-get install build-essential python-dev python-setuptools python-pip python-smbus libffi-dev libssl-dev -y
Now, on either distro, install the dependencies that we will be using in the code:
pip install Adafruit_BBIO # A library from Adafruit that provides easy access to the board's pin data.
pip install requests # A library to make HTTP requests simpler
pip install requests[security] # Enables SSL (HTTPS) support
Now, the code. For now, let's just get it to run, and then we can go through it and investigate how it works. Create a new file called wotbutton and paste the following code into it:
#!/usr/bin/env python
################################
### SHIP IOT ACCOUNT DETAILS ###
################################
shipiot_username = "wotdemo"
shipiot_token = "5139354cedaf7252c776ecf793452344"
shipiot_bip_name = "bbbdemo"
################################
import Adafruit_BBIO.GPIO as gpio
from time import sleep
import json, requests, time, sys

def on_press():
    ###############################
    ###### THE SHIP IOT CALL ######
    ###############################
    ## This is the important part of the integration.
    ## It shows the single HTTP call required to send the event data
    ## to bip.io, which then acts upon it according to the `bip`
    ## that was created. Note that since we are using twitter in our
    ## demo, and twitter has an anti-spam feature, we append a
    ## timestamp to keep each message unique.
    ###############################
    r = requests.post(
        "https://%s.shipiot.net/bip/http/%s/" % (shipiot_username, shipiot_bip_name),
        auth=(shipiot_username, shipiot_token),
        data=json.dumps(
            {"title": "BBB", "body": "Beaglebone Black Button Pressed!\n" + time.asctime(time.localtime(time.time()))}),
        headers={"Content-Type": "application/json"}
    )
    ###############################
    if r.status_code != 200:
        print "Ship IOT Lite connection failed. Please try again"
    else:
        print "event sent!"

# Prepare to read the state of pin 12 on header P8
gpio.setup("P8_12", gpio.IN)

notifyWaiting = True

# Program loop
while True:
    if notifyWaiting:
        print "Waiting for button press..."
        notifyWaiting = False
    sleep(0.01)  # Short delay in the infinite loop reduces CPU usage
    if gpio.input("P8_12") == 1:
        sys.stdout.write('Pressed button...')
        notifyWaiting = True
        on_press()  # Calls Ship IOT Lite, as detailed above
        while gpio.input("P8_12") == 1:
            sleep(0.01)
Now, at the command prompt, make the script executable and run it:
chmod +x wotbutton
./wotbutton
and, after a few seconds of loading libraries and initializing inputs, you should get the prompt Waiting for button press.... Now press the button, and check out your tweet!
In March, wot.io had the opportunity to be a guest in the ARM booth at Mobile World Congress in Barcelona, Spain. We showed a transportation demo with Stream Technologies providing data from trucks in the London area. We routed the data to ARM's mbed Device Server, which was hosted on the wot.io operating environment. We then routed it to several data services, including a ThingWorx mashup, an ElasticSearch and Kibana visualization, and a scriptr.io script.
ARM captured the demo in this video.
Welcome to the wot.io labs blog. As we work with various types of IoT products and solutions, we often have the opportunity to create some really interesting proofs of concept, demos, prototypes, and early versions of production implementations. We're going to be sharing all of those with you here so you can get a view into some of the things we're working on. Stay tuned for some interesting posts!
IoT solutions have many moving parts. Devices, connectivity, device management, and data services. So we've taken all of these components, wrapped them all up into one unified environment, and provided them for you to try out. We're going to take a look at one of our core data services, scriptr.
scriptr
Scriptr, on its own, is a cloud-based JavaScript web API framework. It gives you an interface to write some JavaScript code, and then makes that available at an HTTP endpoint in the cloud. At wot.io, we've taken that interface and wrapped it into our environment. So now, any messages that your devices send can automatically get parsed in scriptr, giving you a tool to create your business logic out of those messages.
Getting Started
To get started, check out the tutorial on getting the Philips Hue system integrated with wot.io here. Once you have that running, get the path for the Blink Demo from the email you received when signing up, or contact us for help.
Demo one - blink
In this demo, we're going to use scriptr to automatically turn on and off a lightbulb.
In your scriptr account, create a new script and call it connectedhome/blink. To find out how, please refer to the scriptr documentation. Then type in the following code and save it:
var lights = JSON.parse(request.rawBody).data[0].lights; // Retrieve the data that we are putting onto the wot.io bus
return '["phue","set_light",3,"on",' + !(lights["3"]["state"]["on"]) + ']'; // Send a message that either turns off the light if it is on or vice versa.
Now, run your Philips Hue integration from the previous step, using the path from the email
python hue.py <bridge ip> <wot path>
And that's it! Your light bulb should now turn on and off every few seconds. If you want to adjust the speed, simply change the delay in hue.py.
Demo two - color change
Let's try another simple script, where the color of the bulb changes. This time, create a script called connectedhome/changecolor, type in the following code, and save it:
return '["phue","set_light",3,"hue",' + Math.floor((Math.random()*65000)) + ']'; // Send a command to change the light bulb color
Connect with the hue.py script again, and voila! That's all it takes to create a production-ready deployment. There's no need to set up any servers or anything else.
It's really easy to do many things at once with bip.io, but something that we see in the wild is duplication of workflows (and effort!) consisting of just minor modifications. This can get out of hand, and it really stacks up if you have a bunch of social channels to reach, especially if you need the same message format or filtering applied to every outgoing message. Making sweeping modifications or tweaking your strategy can get really cumbersome and tedious, really quickly!
bip.io's graph model lets you chain an unlimited number of different actions and content transformations together and create complete workflows, and the best part is - every action can potentially be parallelized! What this means is rather than having to do things in limited sequential order, bip.io can send the same message out to many different apps at one time. This is what we call a 'fanout'.
Say you have your own RSS feed and would really like to cross post updates to it, on both Twitter and Facebook. Rather than just taking the RSS entry and posting it wholesale to Twitter (and then duplicating the same flow for Facebook), we can pipe, filter and compose content into standard messages, which can be distributed to those services at the same time (fanned out). Having the flexibility to add or remove targets to this fanout dynamically means more time spent perfecting your content strategy and less time stuck in configuration.
Let's create a Bip that tracks some stock prices and saves them out to an analytics service. We'll use keen.io for this example, but you could just as easily use mongodb or other pods, and run the data through whatever services you want as well.
This data could be anything that’s found on the web. So although this example is polling for a current stock price, any data that can be scraped from the web is fair game. We’ll show how that’s done in a bit.
First let's define which stock symbol we'd like to track. Big Blue is as good as any, so let's pick (1) Flow Controls and (2) 'Generate a Payload' with (3) "IBM" as the payload.
(1)
(2)
(3)
This tells our bip to start out emitting "IBM" into whatever we want to connect to it.
We'll want to bind that "IBM" payload to a specific HTML DOM element on a specific page, so let's go ahead and use the HTML pod DOM Selector action (4) to make a web request and use the jQuery Selector to get the data we want. We'll add that action the same way we added our initial payload. (Create New Action -> HTML pod -> DOM selector)
(4)
(5)
Go ahead and add our new DOM Selector action onto the graph (5) and double-click to edit the properties. (6)
(6)
Here we're going to go out to the web to grab the latest stock price. We'll use Google Finance for that, as you can see in the URL field. We're interested in a specific DOM element, so we'll enter #price-panel .pr span to select the page element we're interested in (the price!).
Do you see how we used the generated payload of "IBM" in the first step to build out our URL query in this step? That's how we grab a piece of dynamic data from one pod and feed it into another in whichever way we want!
Now that we're grabbing some data (IBM listed price), we need a place to put it. Well, that's exactly what Keen.io was built for, so let's use the keen.io pod to send our data over to our keen.io account!
Again, adding a Keen.io action to our bip is done the same way we added our Payload Generator & DOM Selector actions. (Pods -> Keen.io -> Add an Event)
Once we've added Keen.io, go ahead and add the action to the graph, and it's simply a matter of connecting the dots!
(7)
By connecting the actions in this way, as if by magic we now have available to us the transformed or collected attributes from the other actions.
Keen.io expects to receive a well-formed key:value object, so we'll marshal the Event Object field to send over a JSON object keyed on "IBM", with the value dynamically set to the Text field from our DOM Selector pod action.
With those things set, let's run our bip and then head over to our keen.io dashboard to see that everything is flowing correctly.
Viewing the latest data event we sent to keen.io (via our bip!), sure enough we see that our stock price value is getting sent correctly:
{
"keen": {
"timestamp": "2015-04-06T14:00:01.257Z",
"created_at": "2015-04-06T14:00:01.257Z",
"id": "5522916196773d1d96613471"
},
"IBM": "159.08"
}
Success!!
Now we can take advantage of all of the rich analytics and monitoring that Keen.io provides on that data we've sent.
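As an aside, if you ever want to push an event into the same collection from code rather than from a bip, the keen Python client makes it a one-liner. A minimal sketch, with placeholder credentials and an assumed collection name:
import keen

keen.project_id = "YOUR_PROJECT_ID"  # placeholder: your keen.io project id
keen.write_key = "YOUR_WRITE_KEY"    # placeholder: a write-scoped key

# The same shape of event our bip produced above
keen.add_event("stock_prices", {"IBM": "159.08"})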
To recap:
1. Pods -> Create Event -> Flow -> Generate Payload
2. My Bips -> Create a Bip -> Add Generate Payload
3. Add an Event -> HTML -> DOM Selector
4. Add an Event -> Keen.io -> Add an Event
5. Click-n-drag to connect Payload -to- DOM-Selector
5a. Click-n-drag to connect DOM-Selector -to- Keenio-Event
6. Double Click Each Pod Icon To Configure the Actions.
7. Hit Run!
Feel free to follow these steps to build up a tracker for your own portfolio.
And of course with bip.io we can connect many different services to store and mix-n-match whatever data we want, coming from whatever website or web-service we want, pretty easily.
So get out there and start bipping!
One of the nice things about bip.io’s ability to automate the web for you is that things can appear to happen for you without you having to do anything. But sometimes you want to have more control over when those events occur.
So we’ve added the ability to schedule bips to trigger on whatever schedule you want.
Say you want one of your personal bips to trigger every weekday at 5pm, and you want some of your business-oriented bips to only trigger on the third Friday of every month.
Well, you can now set a tailored, specific calendar for when each and every bip you own should run, as you can see in this example, which will run every 3rd Friday of the month at 4:30pm, starting on April 1st. (You can even tell it to run in someone else's timezone!)
Remember that this scheduling feature is on a per-bip basis.
If, however, you want to change the interval at which all of your triggered bips are fired, you can define your trigger schedule by adjusting the crons settings in your config/*.json. Note the six-field cron syntax, where the leading field is seconds; this example runs every 15 minutes:
"crons": {
"trigger": "0 */15 * * * *",
...
},
Let us know if you find this feature useful, and be sure to share with the community the awesome bips you're scheduling!
And as always, you can stay up to date with what's going on with bip.io by following us on twitter.
When thinking about web automation we need to go beyond simple integrations and think of it as a class of programming that makes use of various distributed API's to be put to work backing parts of applications, or replacing them entirely. Where once all software was built and run on a single machine with localised libraries and runtimes, we now have the world's API's doing their one thing well, at scale, right at our fingertips.
That's pretty neat! It's an exciting future.
bip.io is purpose built to this end, and although it's a web automation framework, it's also a visual programming tool for building and easily maintaining discrete endpoints which can augment your own applications, using external and native API's.
I had the chance to build a new 'randomised integer' flow control recently and took it as an opportunity to not only build some useful Bips (essentially, distributed graphs), but also take advantage of the 'app-like' characteristics of Bips by replacing the loading message on the bip.io dashboard. Anyone else on the team can now update the configuration as they like, no programming required. No sprint planning, no builds, no deployment plan. It was fun, quick to do and pretty useful, so here it goes!
In A Nutshell
We're going to generate the message in the dashboard loading screen
From a public Web Hook Bip that looks like this
Problem Space
So this is the most important part - Planning. The loading messages are generated randomly for each hit on the dashboard from a list of possibilities. When we look at a piece of existing code like:
$motds = Array( "message 1", "message 2", "etc..." ); define('MOTD', $motds[ array_rand($motds) ]);
... it can be distilled down to being just a 'list of strings'. One of those strings needs to be extracted randomly, stored and displayed somehow. And that's what this Bip should do.
I know we have a text templater that can contain a list, and ways to split lists of strings using flow controls. The randomisation part means there's some math involved, so the math pod is needed too. That bit should take care of the extraction requirements.
The logical steps of extracting a random string from a list will be (annotated with bipio actions; a plain-Python sketch follows the list):
- get the number of lines (templater.text_template)
- generate a random number between 0 and the number of lines (math.random_int)
- get every line number (flow.lsplit)
- test that the line number matches the random number (math.eval)
- if the line number matches the random number, then that's a random line (flow.truthy)
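In plain, local Python, that logic might look something like this sketch (illustrative only; bip.io runs each step as a distributed action rather than a local loop):
import random

template = "message 1\nmessage 2\netc..."  # the templater.text_template content
lines = template.split("\n")
pick = random.randint(0, len(lines) - 1)   # math.random_int
for i, line in enumerate(lines):           # flow.lsplit emits every line
    if i == pick:                          # math.eval + flow.truthy
        motd = [line]                      # syndication.list in replace mode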
However, because Bips are asynchronous, distributed pipelines, there's no way to loop back all the possible outputs from that computation, so I'll need somewhere to store the result for retrieval later. For that, I'll use a Syndication container, which can store line items (syndication.list).
Select A Web Hook Trigger
Web Hooks are a certain flavor of Bip which sits passively in the system and waits for messages to arrive before performing its work. They're kind of like personal API's that can be called on demand with any ad-hoc data structure.
Find them by going to Create A Bip > Create New Event and selecting Incoming Web Hook.
Now I'm ready. While it's not completely necessary, it's useful to give web hooks a name. The name will become part of the URL. For this one I chose random_motd - Random Message Of The Day - because it will behave similarly to unix motd's.
Create Containers
Here's a little cheat if you're following along. Go ahead and plug this filter into the action search area: message template,math,by line,truthy,store.
It should give you a list that matches pretty closely to the actions mentioned earlier, and look similar to
Add them all onto the graph. When it comes to creating the syndication container, make sure to set replace mode rather than append mode. This will ensure that new motd's aren't appended to the container, and that its contents are replaced instead.
^^ This is really important and ties everything together in the end
Connect The Dots And Personalize
Whichever method you prefer, either connecting these in advance or step by step, eventually we'll need a graph that looks like this:
I usually connect them all up first and let bip.io try to figure out how data should be transformed, but it's personal preference. I start by dragging from the Web Hook icon to templater and so on, until the syndication container is the last node in the pipeline.
For each of the nodes, double click and set up these transformations
Templater - Set the template
Math - Generate random value
Flow Control - Split lines
Math - Select Random Line
Flow Control - Test Current Line Equals The Random Line
Syndication - Replace Into Container
Select A Renderer
Almost there. We have the pipeline defined, but how is calling this endpoint going to return a random message? That's where Renderers come in. Renderers let a Web Hook Bip respond in custom ways beyond a simple 200 OK message, by making use of RPC's provided by Actions themselves, but usually hidden.
What we want to do in this case is serve the random line that has been stored in the syndication list container back to the connecting client. Luckily syndication.list has a renderer to do just this, so I enable it by going to the Renderer tab and hitting 'Enable' for the 'Returns List Content' renderer under 'Syndication : current motd'.
Makes Sense?
Ready To Save
Because it's a web hook, you'll generally need your username and API token to call it, but I don't care about authentication for this demo, so under the Auth tab, Authentication should be set to 'none'. None auth means it's available to anyone with the link.
Or install it yourself
A couple of notes if you're building this yourself...
After the Bip has been saved, it will appear in your list under My Bips and will be callable as https://{your username}.bip.io/bip/http/random_motd
You'll need to prime the endpoint by calling it once, which will set the container content and give it something to respond with. After that, you're all set.
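For example, priming and then reading the endpoint might look like this sketch, using Python's requests library with a placeholder username:
import requests

# Placeholder username: substitute your own
url = "https://yourusername.bip.io/bip/http/random_motd"

requests.get(url)              # the first call primes the container
print requests.get(url).text   # later calls return the stored random message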
Video here
Happy Bipping!
Here in New York, we get very little control over our apartment's environment. Our buildings have central heating, window A/C units are the norm, and depending on how high your building goes your windows might not even open. This makes a perfect environment to use a device like the Netatmo weather station. It reports on the 'vital stats' of your environment, both indoors and outdoors (temperature, atmospheric pressure, CO2 levels, and even noise levels) so you can get a good idea of what your environment is like. But better yet, with a system like Wotio you can actually put that data to use, and have your A/Cs, windows, and heaters act on that data to create a more comfortable environment. In this post we'll get our NWS connected to Wotio, and in a future one we'll use some of Wot's partner services to automate our A/C with it.
The Integration
One of the cool parts of this integration is that, unlike some of the others, the Device Management Platform for the NWS is cloud-based. This means that this integration can be run from anywhere without being limited to your local network. So spin up a cloud instance and try it!
Prerequisites
To get started, you'll need the following:
- A Netatmo Weather Station
- A computer, virtual machine, or cloud instance with Python 2.7 installed
- A Wotio access token and path
- Netatmo Credentials, Client ID, and Secret Token
To get your Netatmo Credentials, register on their developer site and create a new app (https://dev.netatmo.com/dev/createapp). Your Client Id and Client Secret will be given to you there.
The integration
For this integration, we'll be using Python. Even if you don't know Python, it's a fairly easy language to read, so you should be able to follow what's going on.
You can find the code here:
https://github.com/WoTio/wotnetatmo
Download it, then run:
sudo python setup.py install
wotnetatmo <netatmo username> <netatmo password> <netatmo clientid> <netatmo clientsecret> <visit http://wot.io for your wot.io path>
Where <UUID> is a random string you've generated to identify your application. If you're doing multiple integrations and want them to communicate, this should be kept as the same value.
And that's it! Your weather station is now integrated into Wotio. The trick here is in the wotwrapper library. What it does is take the functions in a Python class (in this case the Netatmo class) and wrap them into Wot-style JSON commands. In this case the only function available is get_state(), which is used to push data onto the Wotio bus.
For detailed usage, see the code and documentation at WoTio/wotnetatmo on Github.
The Wot side
Now that we've got our station wrapped, let's check out its data stream on Wot.
We can connect to Wotio using several different protocols, but for our purposes we're going to use websockets. Why? Because we can use any number of mechanisms to view the data streams we're generating in real-time. If you don't have a websocket shell installed, you can try progrium/wssh. It's also based on Python 2.7, so if you've gotten this far in the integration it should work for you perfectly fine. So let's use it to open a websocket connection to our data stream on wotio:
wssh <visit http://wot.io for your wot.io path>
And you should see the data stream coming from the Netatmo Weather Station, similar to the one below:
[{"Noise": 38, "Temperature": 22.2, "When": 1426618048, "Humidity": 42, "Pressure": 1002.6, "CO2": 1058, "AbsolutePressure": 1001.7, "date_max_temp": 1426595993, "min_temp": 21.7, "date_min_temp": 1426578048, "wifi_status": 60, "max_temp": 23.5}, {"date_min_temp": 1426582660, "rf_status": 44, "Temperature": 15, "date_max_temp": 1426614084, "min_temp": 10.3, "battery_vp": 5344, "max_temp": 16.5, "When": 1426618032, "Humidity": 41}]
Notice how this data is now being pushed and routed within Wotio automatically? That's what gives Wot its power. There are lots of pre-integrated services that can act on this data. You can even integrate another device (just as easily, mind you), and Wotio gives you the power to link them together so that they can communicate no matter where they are in the world; all they need is an internet connection. For example, you could have a trigger set up so that if some data point here reaches a certain threshold, some other action occurs (say, the battery running out triggers a tweet, or some other device turning off). And all of this is production-ready, with no infrastructure for you to maintain, should you want to take your devices (or services) to market.
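To make that concrete, here's a minimal sketch of a client acting on the stream using the websocket-client library (pip install websocket-client); the wot.io path is a placeholder, and the message shape follows the sample above:
import json
from websocket import create_connection

# Placeholder: substitute your own wot.io path
ws = create_connection("wss://your-wotio-path")

while True:
    indoor, outdoor = json.loads(ws.recv())  # two station modules, as in the sample
    if indoor["CO2"] > 1000:                 # an arbitrary threshold, in ppm
        print "CO2 is high - time to turn on the fan!"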
And if you've got another device connected as described on the blog, you should be able to send and receive commands on it as well!
Learn More
If you'd like to see a sample of the power you can get out of Wotio, visit the wot.io website and we'll call you to schedule a meeting.
When looking around the smart home market, it's hard to miss the Belkin WeMo line of products. They've got many different kinds of compatible devices, from light bulbs to power sockets to space heaters. All of which have some sort of data associated with them, from the state of the device to the state of the surrounding environment. So why don't we integrate our devices into Wotio, just like we did with the Philips Hue.
Why Wot?
What if the WeMo's light sensors, bulbs, and just about every other device was already pre-integrated into a single system, including devices on other platforms? What if the only thing needed to get them working together was a short script? What if you could then tack on other services from 3rd parties (maybe SQL storage, or tweeting specific events) with little extra effort? And someone else did all of the hosting and scaling work for you? Well, then you'd be using Wot. So in this post I'll be showing you how easy it is to integrate the WeMo system into Wotio, and how Wotio can be used to leverage it.
The integration
WeMo devices can be controlled two ways - either via their Cloud API, or via the local Device Management Platform. Unfortunately, their DMP doesn't have an open API and is (at least publicly) undocumented. So in this post we'll see how to bypass their DMP and still retain control of our devices over the internet, and then in a future post, how to replace theirs with one on Wotio. If you're cringing right now, you've probably done some system integration before. But don't worry, that's what makes Wot so powerful - it makes these integrations so quick and easy that we don't have to worry about them anymore.
Prerequisites
To get started, you'll need the following:
- A Belkin WeMo device (in this demo, we'll use an Insight Switch)
- A computer, virtual machine, or cloud instance with Python 2.7 installed and a network connection to your WeMo devices
- A Wotio access token and path
The integration
For this integration, we'll be using Python. Even if you don't know Python, it's a fairly easy language to read, so you should be able to follow what's going on.
You can find the code here:
https://github.com/WoTio/wot-wemo
Download it, then from that directory run:
sudo python setup.py install
wotwemo <visit http://wot.io for your wot.io path>
Where <UUID> is a random string you've generated to identify your application. If you're doing multiple integrations and want them to communicate, this should be kept as the same value.
And that's it! Your WeMo devices are now integrated into Wotio. The trick here is in the wotwrapper library. What it does is take the functions in a Python class (in this case the WotWemo class) and wrap them into Wot-style JSON commands.
JSON Message received | Function Called
------------------------------------------------------------------
["list_devices"] | wemo.list_devices()
["help","<device_name>"] | wemo.help(device_name)
For detailed usage, see the code and documentation at WoTio/wot-wemo on Github.
The Wot side
Now that we've got our switch wrapped, let's check out its data stream on Wot, and attempt to control it over the internet.
We can connect to Wotio using several different protocols, but for our purposes we're going to use websockets. Why? Because we can use any number of mechanisms to view the data streams we're generating in real-time. If you don't have a websocket shell installed, you can try progrium/wssh. It's also based on Python 2.7, so if you've gotten this far in the integration it should work for you perfectly fine. So let's use it to open a websocket connection to our data stream on wotio:
wssh <visit http://wot.io for your wot.io path>
And you should see the data stream coming from the insight switch! To control it, all you have to do is type a command:
["set_state","<device_name>","off"]
["set_state","<device_name>","on"]
Notice how each time you send one of these commands, you not only toggle the switch, but change the data coming out of it as well? That's what gives Wot its power. There are lots of pre-integrated services that can act on this data. You can even integrate another device (just as easily, mind you), and Wotio gives you the power to link them together so that they can communicate no matter where they are in the world; all they need is an internet connection. And all of this is production-ready, with no infrastructure for you to maintain, should you want to take your devices (or services) to market.
And if you've got another device connected as described on the blog, you should be able to send and receive commands on it as well!
Learn More
If you'd like to see a sample of the power you can get out of Wotio, visit the wot.io website and we'll call you to schedule a meeting.
We're very happy to announce the latest release of bip.io, which realizes much of our community's feedback!
bip.io 0.3 ('Sansa') is an upgrade of both the user experience and the open source server, introducing a bunch of new features for authoring and sharing your Bips and contributing to its ever-growing list of supported services (Pods).
The bip creation workflow has had a significant overhaul which places action discovery, configuration and workflow assembly at the heart of the experience without sacrificing any of the instrumentation you might already love.
We’ve taken the best bits of flow based programming tools (the non-tedious parts!) and applied them to general web automation, with more crowd intelligence under the hood so you only need to customize when it makes sense. Some of that intelligence has also been baked into our open source server (fork it on GitHub) so your own software can learn as we do. You can read a bit more about that in Scott’s recent post covering transforms synchronization - it’s one of our best features.
The changes may look drastic, but many core paradigms remain intact, now streamlined and modernized. Let's take a look at some of the bigger changes so you can get started right away. We also have a new built-in help system to refer to any time, or you can reach us at support@bip.io if the going gets tough.
My Bips vs Community
Let's face it, there wasn't a lot you could actually do with Bips from their landing screen. We've split community and personal bips into dedicated areas and consolidated their representation, making some common functionality available without having to drill down into Bips themselves. Simple things like Copying, Sharing, Pausing and Testing were buried in the old system; while those things are still actionable per Bip, they're also now available from the My Bips landing screen. The way Bips are represented has also received a facelift and is consistent across the whole system, making them uniquely and consistently identifiable, embeddable, and very close visually to the graphs they represent.
All Shared Bips are now part of the Community section and are now fully searchable, with a much easier and less issue-prone guided install. We'll be building out more community features over the coming months, and while we have some strong ideas about what this should look like, we can't do it without you, so drop some ideas into our Feedback And Support widget. We'll get to them faster than you can blink.
Building Your Bips
OK, this was a big one. The old workflow was pretty convoluted and needed some inherent knowledge about how the system worked, with certain steps in different areas depending on what you needed to do.
The point of a User Experience isn’t to just duplicate what a programmer would have to do, visually, but to create an abstraction that’s workable and easy to use for everyone. The original experience just had to go!
Here’s a little before and after, these are always fun to show off, if only for posterity.
Some of the bigger changes
- Channels are now called 'Actions' and 'Events', depending on whether something is performing an action on demand or generating events or content on its own.
- All Bips are created the same way, by pressing the ‘Create A Bip’ button. Email and Web Hooks have been turned into regular event sources.
- The workspace takes up the entire screen and replacing the list of bips to the left are your saved Actions and Events. You can search, add, enable, authenticate and discover integrations from the one screen now, as you’re doing work.
- Dragging from a node and having it spawn a Channel selection and transforms has been completely dropped. To add Actions and Events, either click ‘Add’ from the left sidebar or drag it onto the workspace. Connect services by dragging from source to destination
- Transforms are now called Personalizations, and they don’t need to be explicitly configured. The system will do its best to map actions based on your own behavior and when in doubt it will look to the community for suggestions, in realtime
- Hovering over an Action or Event will now give you contextual information about the item itself, including any global preferences
- The ‘source’ node is always identifiable as a pulsing beacon so it’s easy to see where source messages are emitting from
- Workspace physics behave more naturally. We know how annoying chasing icons is and will continue to work towards more predictable physics.
- Experimental - Some aspects of the Data View are now editable
- Sharing no longer requires a description
- Bips can be shared without saving, giving you the opportunity to remove personal information before sharing
- Triggers can be manually invoked with a ‘Run Now’ button, even when paused
- State and Logging tools! We’ll tell you the last time a Bip ran, and the Logs tab now includes Action/Event failures, run times and state changes
- Flow Controls now all have unique icons, so you can see how the control is affecting your integrations at a glance
And a quick demo of the improved workflow
What’s Next
Adding some much needed normality to the experience, including all the underlying server engineering, gives us a great platform on which we can concentrate on the one most important thing. Building a fantastic community!
We’ll be seeding hundreds of new shared bips over the coming weeks with the 60+ services currently supported and really fine tuning documentation and service discovery, making Bip.IO easier for you to not only learn and utilize, but also contribute to and make part of your own application.
We’ve had a great time receiving and implementing your feedback in building a new experience. We hope you like it and would love to know your feedback, suggestions or success stories to make the next version even better!
Many Thanks
- Team Bip.IO
Hey, so we just released a cool new feature into the bip.io server that we hope you’ll find useful.
Whenever you use bip.io to connect awesome web services together, it's sometimes tedious to remember how a piece of the information you're getting from one service should be mapped to another service, like when you want to create an email digest of your curated RSS feed of your favorite twitter content. It's sort of annoying to have to remember and configure that this RSS field should map to that email setting, and this input should map to that output, every single time you want to create a bip.
Well now you don’t have to.
When you connect one web service to another, you can do all sorts of interesting ‘transforms’ with that data as it flows across the graph that you create as your bip. And well, some ways of connecting those services together are frankly more interesting and common amongst all bip.io users. And so we’ve taken the most popular transforms that people are using, and made them available as the default suggestions, for everyone to use.
You don't have to use the suggested transform, of course. It's just a suggestion, after all! You're free to connect what you want, however you want. That's the power and flexibility of bip.io. When setting up your bip, you can always personalize your integration by clicking on the +Variables button and choosing whatever field you want to capture.
Let’s walk through how to set it up:
When you install bip.io, there’s now an extra option in the setup process to confirm that you’d like to periodically fetch the community transforms. This will set a “transforms” option in your config/default.json, like so:
This will tell the server to periodically go and fetch the latest popular transforms, which is set as a cron job that you can also configure in the server config settings. If you already have bip.io installed, you can update to the latest build to get this feature as well.
As this is largely a community-powered feature, the more you use it, the better it gets. It's smart like that. So give it a try. Let us know if you find this aspect useful.
Enjoy.
Philips really dominated the connected lighting market with the Hue system of bulbs. They give you a Device Management Platform (the "bridge") with an open API, which all the bulbs in your home - or business - connect to. While there are already tons of apps compatible with the Hue system, and there's IFTTT support for some simple triggers, how do you handle more advanced applications? What if you needed to hook up the bulbs to a light sensor to build an automated greenhouse? Or wanted to have them strobe if someone trips your alarm system? Sure, you could build the entire system manually, but that takes too long. That's what Wot.io is for.
Why Wot?
What if the hue, light sensors, door alarms, and just about every other device was already pre-integrated into a single system? What if the only thing needed to get them working together was a short script? What if you could then tack on other services from 3rd parties (maybe SQL storage, or tweeting specific events) with little extra effort? And someone else did all of the hosting and scaling work for you? Well, then you'd be using Wot. So in this post I'll be showing you how easy it is to integrate the Philips Hue system into Wotio, and how Wotio can be used to leverage it.
The integration
When you buy the Hue system, you get a bridge to install on your local network that connects and controls all of the Hue Bulbs from a central location via a RESTful API. This is Philips' Device Management Platform - it updates the light bulbs' firmware, pushes new 'scenes' to them, and sends them commands. We will be integrating this API with Wotio to provide both a data stream and a control endpoint.
Prerequisites
To get started, you will need the following:
- A Philips Hue bridge and light bulbs
- A computer, virtual machine, or cloud instance with Python 2.7 installed and a network connection to the Hue bridge
- A Wotio access token and path
The integration
For this integration, we'll be using Python, and we will need two libraries. First, there's studioimaginaire/phue, which gives us a convenient way to access the bridge API. Second, there's wotio/wotwrapper. To install them, run:
sudo pip install phue
sudo pip install wotwrapper
Then we can write our connector. The code is this simple:
#!/usr/bin/env python
# hue.py
# A module for connecting the philips hue platform to the Wot environment
import sys, wotwrapper
from phue import Bridge
# Initialize the bridge connection with IP address
b = Bridge(sys.argv[1])
# If the app is not registered and the button is not pressed, press the button and call connect() (this only needs to be run a single time)
b.connect()
# Wrap it onto Wotio
# connect(<wotio path>, <module name>, <object to wrap>, <function to retrieve data>, <delay between data publishes>)
wotwrapper.connect(sys.argv[2], 'phue', b, b.get_api, 10)
Save that as hue.py, then press the link button on the bridge and run:
python hue.py <ip address of bridge> <visit http://wot.io for your wot.io path>
Where <UUID> is any identifier that you specify (try to make it unique). That's it - it took only five lines of actual code! We're initializing the studioimaginaire/phue library and passing it the bridge's IP address over the command line. We then wrap that library onto the Wot bus using wotio/wotwrapper.
So what did this wrapper actually do? Two things:
- It uses the b.get_api function to pump the current state of the system onto the bus (as you probably guessed)
- It wraps the methods of the Bridge class into Wotio-style JSON calls:
JSON Message received | Function Called
------------------------------------------------------------------
["phue","get_api"] | b.get_api()
["phue","set_light",1,"bri",254] | b.set_light(1, 'bri', 254)
For the full documentation and code of wotwrapper, visit the wotwrapper github page. For all of the API calls, visit the phue github page. And to get a copy of this integration's code, visit wot-phue.
The Wot side
Now that we've got it wrapped, let's try and see the data stream on Wotio, and control a few lights while we're at it.
We can connect to Wotio using several different protocols, but for our purposes we're going to use websockets. Why? Because we can use any number of mechanisms to view the data streams we're generating in real-time. If you don't have a websocket shell installed, you can try progrium/wssh. It's also based on Python 2.7, so if you've gotten this far in the integration it should work for you perfectly fine. So let's use it to open a websocket connection to our data stream on wotio:
wssh <visit http://wot.io for your wot.io path>
And you should see the data stream coming from your light bulbs! So why don't we try controlling them too. All you have to do now is type a JSON message and hit enter, and you'll change the state of the bulbs:
["phue","set_light",1,"bri",254]
["phue","set_light",2,"bri",50]
["phue","set_light",1,"on",false]
Notice how each time you send one of these commands, you not only change the lights in your house, but the data coming out of the bridge as well? That's what gives Wot its power. There are lots of pre-integrated services that can act on this data. You can even integrate another device (just as easily, mind you), and Wotio gives you the power to link them together so that they can communicate no matter where they are in the world; all they need is an internet connection. And all of this is production-ready, with no infrastructure for you to maintain, should you want to take your devices (or services) to market.
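You can also script those same control messages instead of typing them. Here's a minimal sketch using the websocket-client library; the wot.io path is a placeholder:
import json, time
from websocket import create_connection

# Placeholder: substitute your own wot.io path
ws = create_connection("wss://your-wotio-path")

for bri in (254, 128, 50):  # step light 1's brightness down
    ws.send(json.dumps(["phue", "set_light", 1, "bri", bri]))
    time.sleep(2)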
Learn more
If you'd like to see a sample of the power you can get out of Wotio, visit the wot.io website and we'll call you to schedule a meeting.
I’m thrilled to announce BipIO was recently acquired by wot.io, a data service exchange for connected device platforms, and I have joined as the director of application engineering. Let me be the first to not only welcome you to the combined wot.io and BipIO ecosystem, but also explain how this acquisition benefits you.
wot.io and BipIO share philosophies, and this acquisition represents many positives for our community.
My open source BipIO vision is now maturing into a dedicated full-time team and will allow us to continue to support the platform as we’ve done in the past, with additional people to lean on for answering questions, fixing bugs, and implementing features. We’ll be working even harder at making the orchestration of web and data services meaningful, expressive and easy. I’m very pleased that wot.io will continue to maintain our open platform with the current terms of service and privacy policy still applying to your user data. Just as we’ve done in the past, if we make material changes to the privacy policy, we will seek your consent to such changes.
Together with wot.io, we are working to open access to the IoT and M2M platforms with an innovative data service exchange model, and now you are involved in the next generation of connected data and devices. While still supporting the existing BipIO community, wot.io will enable BipIO to scale and expand into new markets.
Our vision will always be to break down the walled gardens of the internet, and with the resources of wot.io, we are closer to reaching that goal. This is a very exciting time at wot.io, and we’re honored that you’re a part of our growing ecosystem.
If you have any questions, or wish to opt out of the BipIO service, feel free to reach out to hello@bip.io, or me personally at michael@wot.io.
Thank you for your support
- Michael Pearson
The Outbox is an underutilized feature of BipIO for crafting and sending messages to any of your web hooks at a moment's notice. You can find it under Dashboard > Apps. It's great for invoking or testing web services from one place without having to write a client yourself, and it just received a great new Redactor overhaul. Keeping in the spirit of BipIO, messages you send with the Outbox are ephemeral in nature, with no history being tracked on the server side unless you explicitly need it. Otherwise, your messages are bound to your browser's local storage only, and when they're gone, they're gone.
I’m going to show you how to put it to work for HTML, Plain Text, JSON and Markdown payloads with a short video. And yes, this post was built and published to Tumblr with the Outbox itself!
In case you don't know, 'Web Hook' Bips are a special kind of graph from which you can receive messages from a web browser or app, and perform some workflow based on a payload. Hooks are the workhorse of much of BipIO's own website logic (so meta!) because they can serve or process data for any application. To come to grips with web hooks, I'm going to fall back to using the 'my first bip' hook which you may have already created through the website tour when you first signed up.
Didn’t quite follow? These are the basic rules -
- HTML mode will send as HTML. ‘HTML Mode’ is when all formatting options are available.
- Code mode will send un-formatted text. Toggle HTML/Code mode with the <> button
- To translate to JSON, hit the ‘Parse JSON’ check box while in code mode
- Unless you’re ‘parsing JSON’, the Outbox exports will be for ‘title’ and ‘body’
You may notice I took a couple of seconds to create a Markdown to HTML Channel to handle Markdown parsing, and dropped it onto ‘My First Bip’. Markdown is just plain text, and needs some special treatment. Otherwise HTML, Text and JSON parsing are already handled by the Outbox itself.
Have fun, and remember: with great power comes great responsibility! Follow me on Twitter and fork me on GitHub to see what's happening in the here and now.
[edit] Exciting news! JSON, Markdown, HTML and plain text Outbox modes are compatible with the Poetica Chrome plugin, so you can collaborate socially on the full gamut of payload types with joyful, instant abandon!
A little while ago I rolled out the HTTP Pod for BipIO, which provides a basic webhook channel (called 'request') that can drop onto any hub to do some work. Think fetching a file, posting form data, or making some ad-hoc API call as part of a pipeline. It's the kind of irresponsibly powerful tool you might want to leverage for your app to fan out web requests, or to temporarily make available a hidden or authenticated resource to (un)trusted 3rd parties.
A more interesting feature of Bips (the graphs that do work), and in particular the HTTP flavor, is that they support the concept of rendering dynamic content from any available channel. Bips themselves have the characteristics of ephemerality, authentication, name, transport encapsulation and graph processing. They're public-facing endpoints that anyone can call, with a graph behind them that can potentially process requests. Without a renderer, HTTP Bips will just sit there accepting data and, all going well, respond with a happy '200 OK' message. With renderers, we add the ability to respond in a custom way beyond the generic '200 OK', per endpoint. This makes them a very powerful tool for backing parts of a web application or adding 'app-like' characteristics to simple URL's in something as simple as email.
That said, I want to demonstrate a few of the funky things I was able to get going in no time with HTTP Bips and some of the new renderers the HTTP Channel provides. Using both HTTP Bips and Channels is a good starting point because they share the same protocol, but keep in mind that any Pod/Channel which has a renderer can also serve content or have its protocol encapsulated via HTTP Bips. The demos were active workflows during the beta pre-launch period for user onboarding and segmentation, which turned out really useful. I hope you find so, too!
Simple Proxying
When the launch key emails were being sent out, I wanted to get a rough gauge of how many people were opening the message for their invite code, grouped per day. There's a range of different techniques for surfacing this kind of simple metric, but I went for simply tracking when an image was loaded - specifically the bipio logo embedded in the email's HTML.
The endpoint is https://beta.bip.io/bip/http/logo.png, and it looks and renders just like any other image. Because HTTP Bips don't know anything about serving images by themselves, it was the perfect time to try out the 'proxy' renderer supported by the new 'Request' Channel in the HTTP Pod. The logic is simply: 'When the image is served, record a count grouped by day'.
To get started, the 'Channels' section of the bipio dashboard is where the channel itself will get created. Clicking on the HTTP Request icon will start the channel creation process. Channels will generally just sit there doing nothing until added to a Bip's graph, or enabled as a renderer. Let's have this 'http.request' channel GET the logo image file:
To create this with the API, just POST a channel structure like so :
POST /rest/channel
{
  "name": "BipIO logo",
  "action": "http.request",
  "config": {
    "url": "https://bip.io/static/img/logo.png"
  }
}
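If you'd rather script that than click through the dashboard, here's a minimal sketch of the same call from JavaScript. The Basic auth scheme with your username/API key and the beta.bip.io host are assumptions based on my own setup; swap in your own values:

// Minimal sketch: create the 'http.request' channel via the REST API.
// Assumes Basic auth with a {username}:{api_key} pair (placeholders).
var payload = {
  name: 'BipIO logo',
  action: 'http.request',
  config: { url: 'https://bip.io/static/img/logo.png' }
}

fetch('https://beta.bip.io/rest/channel', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Basic ' + btoa('{username}:{api_key}')
  },
  body: JSON.stringify(payload)
})
.then(function(res) { return res.json() })
.then(function(channel) { console.log('created channel', channel.id) })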
Building the web hook which will serve this channel’s content publicly is then pretty simple. Under ‘Bips’, click ‘Create Web Hook’ and make sure :
- It has a name that looks like an image file,
- Has authentication disabled, and
- Has the 'HTTP Proxy' renderer of the logo image channel created earlier set as its renderer. You can find renderer setup in its own tab during Bip configuration.
So just enable the ‘HTTP Proxy’ renderer for your new channel.
Via the API, it’s something like :
POST /rest/bip
{
  "name": "logo.png",
  "type": "http",
  "config": {
    "auth": "none",
    "renderer": {
      "channel_id": "01ded262-4150-4041-bcea-6727bd46960e",
      "renderer": "proxy"
    }
  },
  "hub": {
    "source": {
      "edges": []
    }
  }
}
And that's it! The named endpoint which is created (https://beta.bip.io/bip/http/logo.png) will proxy any requests it receives via the renderer and serve up the file like magic. To handle hit counts on the endpoint and build a basic report, it's just a matter of filling out the hub with two extra channels for time formatting (time.format) and counting (flow.counter). I've split the hub out here so you can wrap your head around the structure, but it can also replace the empty 'hub' in the previous POST:
"hub": { "76ba8c52-7161-4e80-aa06-146b45da75b9": { "transforms": { "c9a7c8ad-bd43-447a-9eea-fac54a9e1ebc": { "_note" : flow.counter channel", "increment_by": "1", "group_by": "" } }, "edges": [ "c9a7c8ad-bd43-447a-9eea-fac54a9e1ebc" ] }, "source": { "transforms": { "76ba8c52-7161-4e80-aa06-146b45da75b9": { "_note" : "time.format channel", "format": "MMDDYYYY", "time": "1402247179" } }, "edges": [ "76ba8c52-7161-4e80-aa06-146b45da75b9" ] } }
The ‘flow.counter’ channel has a renderer itself which dumps out all the data it has collected. I could either call that renderer directly to get at the data, or encapsulate it in a bip like the previous example to run a report etc. Pretty neat!
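As a purely hypothetical sketch of getting at that data programmatically (the '/render' route below is an assumption, not a confirmed API path; check your deployment's docs for the actual channel renderer route):

// Hypothetical sketch only: '/render' is an assumed route.
// Pulls whatever the flow.counter renderer has collected.
var counterChannelId = 'c9a7c8ad-bd43-447a-9eea-fac54a9e1ebc'

fetch('https://beta.bip.io/rest/channel/' + counterChannelId + '/render', {
  headers: { 'Authorization': 'Basic ' + btoa('{username}:{api_key}') }
})
.then(function(res) { return res.json() })
.then(function(counts) { console.log(counts) }) // per-day hit counts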
Request Redirection
To redirect a user directly to a target resource rather than proxying it, it's just one small change to the renderer structure above: set 'renderer' from 'proxy' to 'redirect' in the config section.
PATCH /rest/bip/{bip-id}
{
  "config": {
    "auth": "none",
    "renderer": {
      "channel_id": "01ded262-4150-4041-bcea-6727bd46960e",
      "renderer": "redirect"
    }
  }
}
A couple of use cases for the HTTP Pod's redirect renderer might be a link shortener, or segmenting users in MailChimp by who clicks through via the generated link!
SSL Encapsulation
A common problem for people running content sites or apps over SSL is that any non-encrypted content rendered into the browser raises a bunch of security warnings, since it undermines the integrity of the page. On the flip side, there is significant infrastructure overhead in downloading every piece of content onto your own server or CDN just to serve it over SSL and make browsers happy. By using the 'http.request' proxy renderer, it's possible to encapsulate insecure content in SSL instead.
Here's a simple SSL bridge which you can use via the BipIO website (which forces SSL connections). Creating the bridge is very similar to the 'http.request' channel defined earlier to serve a logo, except that no 'url' config parameter is defined; it's injected by the Bip instead:
POST /rest/channel
{
  "name": "SSL Bridge",
  "action": "http.request",
  "config": {}
}
POST /rest/bip
{
  "name": "anonymous_bridge",
  "type": "http",
  "config": {
    "auth": "token",
    "renderer": {
      "channel_id": "91ded262-4150-4041-acea-6727bd46960e",
      "renderer": "proxy"
    }
  },
  "hub": {
    "source": {
      "edges": []
    }
  }
}
To test it out, just call the endpoint like so, authenticating with your username/API key:
https://{username}.bip.io/bip/http/anonymous_bridge?url=http://example.org
Voila!
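As a rough illustration, here's how a page served over SSL might consume the bridge from JavaScript. The Basic auth with username/API key is an assumption matching the 'token' auth config above, and the target URL is just an example:

// Sketch: fetch insecure content through the SSL bridge.
var insecure = encodeURIComponent('http://example.org/photo.jpg')

fetch('https://{username}.bip.io/bip/http/anonymous_bridge?url=' + insecure, {
  headers: { 'Authorization': 'Basic ' + btoa('{username}:{api_key}') }
})
.then(function(res) { return res.blob() })
.then(function(blob) {
  // Display the bridged content without mixed-content warnings.
  document.querySelector('img').src = URL.createObjectURL(blob)
})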
A quick note on security …
Given the ability to proxy, redirect and encapsulate web requests with abandon, it's generally a bad idea to accept and process any URL a web client throws at you. Be sure to always authenticate clients using the authentication config attributes in HTTP bips! Additionally, if you're running the server on your own infrastructure, you may notice 'blacklist' errors returning to your connecting clients. This is because, by default, the HTTP Pod blacklists all local network interfaces. To whitelist local network interfaces, add their IP or hostname to the HTTP Pod's 'whitelist' section in your server config.
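For what it's worth, the whitelist entry in the server config might look something like this sketch; the exact shape of the config file is an assumption, so check the HTTP Pod documentation for your version:

"pods": {
  "http": {
    "whitelist": [
      "127.0.0.1",
      "internal-service.local"
    ]
  }
}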
Integration 101 is an ongoing series where I'll describe some of the useful workflows that can be created with the Bipio API in a few easy steps. A basic understanding of RESTful APIs and SaaS is assumed; if you have experience with graphs and graph traversals, that's helpful too, but not necessary. If there's anything you'd like to see, let me know. Otherwise, it's onward and upward with some crazy experiments!
I'm a lazy (busy?) SoundCloud music consumer. Why click favorite > download > have a cup of tea > find in downloads > drag to the Dropbox folder, when I can just click 'like' and let a robot do the legwork?
Here's a little assembly for doing just that with Bipio. It takes 2 Channels (soundcloud.get_favorites and dropbox.save_file) and 1 trigger bip to fire a message into the hub. Fewer clicks, more consumption.
SoundCloud
POST /rest/channel
{
"action" : "soundcloud.get_favorites",
"name" : "Get All The Favorites!"
}
Response
201 Created
{
"id": "23bb1de5-c950-c448-fbbf-000056c1eed6",
... more attributes here
}
Dropbox
POST /rest/channel
{
"action" : "dropbox.save_file",
"name" : "Save to SoundCloud Folder",
"config" : {
"base_dir" : "/soundcloud"
}
}
Response
201 Created
{
"id": "5d2de02d-a70c-4ffd-ac15-9a97f6cb3d0f",
... more attributes here
}
So, armed with those two IDs, creating the bip is easy. 'get_favorites' is an emitter, meaning it generates its own content periodically (in this case, mp3 files and metadata). We can capture these files with a 'trigger bip' and process them with an edge pointing to the Dropbox Channel.
POST /rest/bip
{
"config": {
"channel_id": "23bb1de5-c950-c448-fbbf-000056c1eed6"
},
"end_life": {
"time": 0,
"imp": 0
},
"hub": {
"source": {
"annotation": "SoundCloud > Dropbox",
"edges": [
"5d2de02d-a70c-4ffd-ac15-9a97f6cb3d0f"
]
}
},
"name": "SoundCloud to Dropbox",
"type": "trigger"
}
Synced, just like that. There's no need to transform any content unless it's really necessary. Files that appear to a bip from an external service are simply carried across the hub in case channels need them. So, for this basic sync, we know a file is appearing from SoundCloud, and whatever that file is, it gets saved to Dropbox.
Extra Credit
OK, OK, so that assembly just ends up as a big amorphous glob of music in the /soundcloud folder, so of course we should probably template the destination path a little better. Let's re-work the hub in that bip and add a transform to file tracks by Genre and Artist, which are two of the exports the soundcloud.get_favorites action already provides.
"hub": {
"source": {
"annotation": "SoundCloud > Dropbox",
"edges": [
"5d2de02d-a70c-4ffd-ac15-9a97f6cb3d0f"
],
"transforms" : {
"5d2de02d-a70c-4ffd-ac15-9a97f6cb3d0f" : {
"/soundcloud/[%genre%]/[%artist%]" : "basedir"
}
}
}
}
Now isn’t that better?
Bipio lets you create cheap dynamic endpoints quickly and have them perform some complex task, so when our LaunchRock page failed after a recent account migration (they were super helpful in debugging, no love lost), it seemed like the perfect opportunity to build a similar email capture widget using the toolset already baked into our API. Time to eat our own dogfood, as it were, with a simple exercise I could fit into a Saturday morning between breakfast coffee and brunch coffee. Great!
The requirements for the widget were, at their core, very simple: for every user who arrived at the home page, I wanted to create a temporary, single-use endpoint which took an email address, passed it to a data store and sent a thank-you message. Maybe later I can render this syndication into Excel files containing batches of 100 leads for onboarding (because we can), but for now, keep it simple. Don't want to be late for brunch. Enter the Candy Strip. It's a little dynamic call-to-action widget that you can find on our home page, and it took all of 30 minutes to set up. Fun fact: although the website itself is a blocking-architecture PHP/Zend stack, creating these endpoints takes ~10ms across our DMZ to the API, a barely noticeable hit to application performance, and we get a bunch of functionality that can be easily extended at no cost to application complexity.
So, we can create these endpoints right out of the box with HTTP Bips, with all the normal characteristics of a Bip, including temporary lifespan, authentication scheme, dynamic naming and a predictable data export. To have that bip send an email and store some data, I just need to create a simple Hub (graph) with two channels (vertices): one to send a transactional email (email.transaction), and one to list email addresses into a List Syndication (syndication.list).
To create the syndication, just POST /rest/channel a little packet pointing at the Syndication Pod's 'list' action. The list action is perfect because it gives me the option to render everything we've received to a CSV, TSV or Excel file down the track. Channels are re-usable across Hubs, so this Channel could also act as a datasource for a different hub sitting outside this email capture workflow if I don't want to add more actions to the pipeline.
mode": "append"
},
"action": "syndication.list",
"name": "Signup Reservation List"
}
{
"config": {
"write
Done! The data store gets created, and every line item we receive in this list is appended to the store.
Just black-holing content into a syndication isn't very useful, so I wanted to send an email to the early adopter thanking them for their interest. I used the email template we already had for LaunchRock, so I didn't have to don my Marketing Hat before lunchtime. Bonus: only 10 minutes in so far!
{
"action": "email.transaction",
"config": {
"template": "betaregistered",
"from": "Bipio Signups ",
"subject": "A quick update about your beta invite"
},
"name": "Email Capture Receipt"
}
email.transaction is a special Email Pod action we have for sending pre-baked transactional messages; you won't find it in the API just yet, sorry!
We now have two Channels for processing the capture, so in the website controller code I'll just make a call out to the API when the page renders and assemble these Channels onto a Hub with a POST /rest/bip. The name of the bip matches the session id in our application, and the bips are single use or naturally expire after a day. Between the size of our capture syndication and the number of created HTTP bips, I can also figure out a rough conversion rate. The '_repr' attribute in the POST /rest/bip response payload tells the widget which endpoint URI to send the email address to in our eventual AJAX request. _repr means the string representation of the object; all REST resources in Bipio have this decorator, harking back to the system's Tornado roots.
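Pieced together from the earlier examples, the controller's call looks something like this sketch. The channel IDs are placeholders for the email.transaction and syndication.list channels created above, and the end_life values are assumptions standing in for the 'single use or expires in a day' behaviour:

POST /rest/bip
{
  "name": "{session_id}",
  "type": "http",
  "config": {
    "auth": "none"
  },
  "end_life": {
    "time": "{timestamp one day from now}",
    "imp": 1
  },
  "hub": {
    "source": {
      "edges": [
        "{email.transaction channel id}",
        "{syndication.list channel id}"
      ]
    }
  }
}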
If I hit the website, I see a new HTTP bip in the 'beta.bip.io' account dashboard whose name matches my session id, so it's working, and the graph for my temporary bip renders nicely in the dashboard.
Now to expose the widget in the view and slap some lipstick on it, courtesy of cameo & ycc2106 @ colourlovers.
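For the curious, here's a minimal sketch of what the widget's eventual AJAX call might look like. The element IDs and the 'email' payload field are assumptions for illustration, and 'endpoint' stands in for the _repr URI the controller templated into the page:

// Sketch: post the captured email address to the single-use endpoint.
var endpoint = '{_repr endpoint from POST /rest/bip}'

document.querySelector('#signup').addEventListener('submit', function(e) {
  e.preventDefault()
  fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: document.querySelector('#email').value })
  })
  .then(function() {
    document.querySelector('#status').textContent = 'Thanks! Check your inbox.'
  })
})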
And we’re done :)
Like many projects, bipio emerged from the fog of creativity and necessity to solve a problem and scratch an itch. For us, it was mostly about solving some tedious problems around implementing (and refactoring) complex message delivery stacks and, more mundanely, trying to manage and control the volume, quality and re-usability of messages that flowed through the system, including what appeared in our inboxes and from whom.
After many years of feeling the pain and inefficiency of shoe-horning TCP transports, APIs and other services into playing nicely together, the eventual takeaway was very clear:
API Glue Abstraction is hard; solving the abstraction problem efficiently and intuitively, in a way that maximizes creativity and responsiveness, is very valuable.
.. so here we are. We're building an API to programmatically create dynamic endpoints that can accept or generate content (these are called 'bips') and push that content, transformed or filtered, out to other services via re-usable components ('channels'), with a strong API developer focus.
We've started with simple email -> SMTP/RSS pipelining. Sign up and stay tuned :)