It’s not your fault. You have a shiny new CS degree from a (hopefully accredited) college or university. You have spent months preparing for the hours-long hazing ritual euphemistically referred to as a “job interview.” You are well-versed in Node.js, Spring, Golang and are prepared to whiteboard endlessly on topics ranging from scalability in Big Data applications to orchestration and concurrency in containerized micro-services. While doing so, you are careful to use trendy jargon like “performant” in place of actual English language phrases like “performs well” and can quote Gang of Four design patterns, chapter and verse (builders and facades and strategies, oh my!) You know you are ready for any coding challenge that may come as you start your first job as a Software Engineer. Best of all, you eagerly anticipate the joys of fully embracing “agility” as the organizing principle of your working life.

Darling child, you could not be more mistaken.

None of the above matters. None of the above is particularly useful. Much of it is highly counter-productive.

Super-high-level frameworks and languages are only safe and useful for developers who are sufficiently experienced and disciplined to not benefit substantially from their use.

Over-use of “design patterns” in contexts where they are not actually needed results in illegible, unmaintainable code.

The skills and character traits selected for by the modern tech industry job interview are largely the opposite of those which make for happy and productive development teams.

This blog exists not so much to help you become a better Software Engineer but, at least in part, to point out some of the many ways in which you have probably chosen the wrong career.


Adventures in Home Automation (Part 13 of an Open-Ended Series)

A Z-Wave Use Case

Here is a flow that turns on or off a particular Aeotec Smart Switch 6 in response to specific button-press events from a particular Aeotec WallMote Quad. It does so by use of nodes from node-red-contrib-openzwave controlling an Aeotec Z-Stick Gen5 plugged into the same Raspberry Pi on which I run Node-RED.

Ought to be simple, right? Guess again…


Along with generic communications protocols like WiFi and Bluetooth, there are a number of competing protocols developed specifically for home automation and the so-called “Internet of Things” (IoT). Z-Wave is one such protocol. It is a wireless, low-power protocol designed around the idea of a collection of physical nodes (not to be confused with Node-RED nodes, sigh) representing devices like switches, sensors, dimmers and so on. A particular type of Z-Wave node acts as the controller for the other nodes in such a network.

In my case I am currently experimenting with a Z-Wave network consisting of the three nodes referred to above. Though they all happen to be from a single manufacturer (Aeotec), Z-Wave certification requires interoperability among compliant devices, so in theory it should be easy to replace or supplement any of these with Z-Wave compatible devices from other manufacturers.

On my network:

Node Id  Device
1        Aeotec Z-Stick Gen5 (USB Z-Wave controller module)
2        Aeotec WallMote Quad (4-button Z-Wave remote control)
3        Aeotec Smart Switch 6 (Z-Wave AC power switch)

The following flow turns on the Smart Switch when button 1 of the WallMote is pressed and turns the Smart Switch off when button 2 is pressed:

But, wait! Since the WallMote and Smart Switch are both Z-Wave devices from the same manufacturer, no less, you might assume that they just work together out of the box, or at least using an app provided by Aeotec. You would be wrong. Most Z-Wave hardware manufacturers assume that you will be using a third-party hub or gateway that will do the heavy lifting of integrating their devices together in order to achieve anything useful. For reasons discussed elsewhere in this blog series I have found no stand-alone gateway sufficient to my needs, minimal as they are.

And thereby hangs the following tale full of laughter and tears, pathos and drama.

First Principles

Before you can even start trying to automate Z-Wave devices using Node-RED you must first have them configured into a network with a controller that is accessible to your home automation platform. In my case, this meant:

  1. Figuring out the rather unintuitive process of pairing the button and switch devices with the controller before plugging it into the Pi

  2. Installing the OpenZWave library

  3. Configuring udev to give the controller a sensible and predictable name if I were to add or remove other USB devices over time

  4. Installing node-red-contrib-openzwave in the Node-RED “palette” and configuring a Z-Wave controller node accordingly

For step 1, bring a magnifying glass and be prepared to interpret the rather non-standard English in the tiny instruction sheets supplied by Aeotec. Other manufacturers’ offerings are not likely to be much better given where most of them are located in the market-driven neo-colonial world in which we live. As a hint, the general pattern is to press the corresponding buttons on the controller and devices to put them into pairing mode and hope for the best when guessing what the various subsequent LED colors pulsing at various speeds mean. You must do this with the Z-Stick charged but not currently plugged into a USB port. Come back with your shield, or on it!

See Configuring a Raspberry Pi 3B+ for information on steps 2 and 3. Note in particular the reference to installing libopenzwave.so in /usr/lib rather than its build script’s default location since the latter is incorrect for Raspbian.

Step 4 is where step 3 comes in handy. If you followed my script exactly, the value to enter for Port in the Z-Wave configuration node is /dev/tty-zstick. Use whatever name you chose if you specified a different one. If you didn’t bother with all that udev silliness, then the value to use is probably, but not guaranteed to be, /dev/ttyACM0. It turned out to be /dev/ttyACM1 for me, so it really can vary, and thus for me it was worth the trouble of using udev to create a predictable name.

On the other hand, a ripple effect from all this is that I found it not worth the effort of trying to get Node-RED working in a Docker container. Trying to get libopenzwave installed in a container with proper access to /dev/tty-zstick in the host was simply a bridge too far. The closest I got required running the container with a level of privilege and degree of coupling to the host environment that eliminated much of the supposed benefit, in any case.

We Don’t Need No Stinkin’ Documentation

The next hurdle you must surmount in getting Z-Wave devices to work in Node-RED flows is the nearly complete lack of useful documentation, and the very low accuracy of what little does exist. For one crippling example, the node-red-contrib-openzwave readme file, minimal as it is, is simply wrong in many small but critical details of how to construct and interpret message payloads when communicating with the Z-Wave controller. A case in point is its description of how to invoke the Z-Wave enablePoll command. It shows passing a simple numeric node id value as the first parameter where you must actually pass the JSON representation of a complete “value id” object.

“What’s a ‘value id’ object,” I hear you ask. Good question. Are there any others? [A “value id” is a tuple consisting of a node id, command class, command index and instance number. Aren’t you glad you asked? Good luck finding all that out and what they mean from nothing but a few Google searches.]
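To make that concrete, here is a hedged sketch of building an enablePoll command message for a zwave-out node. The payload field names mirror those observed on incoming zwave-in messages in this series (nodeid, cmdclass, cmdidx, instance); treat them as assumptions and verify against the version of node-red-contrib-openzwave you actually have installed.

```javascript
// A sketch, not gospel: construct a message whose payload is a complete
// "value id" tuple (node id, command class, command index, instance)
// rather than the bare node id the readme misleadingly suggests.
function makeEnablePollMsg() {
    return {
        topic: "enablePoll",
        payload: {
            nodeid: 3,    // the Smart Switch 6 on my network
            cmdclass: 37, // COMMAND_CLASS_SWITCH_BINARY
            cmdidx: 0,    // command index within the class
            instance: 1   // instance number (usually 1)
        }
    };
}
```

In a Function node wired to a zwave-out node, the body would simply end with `return makeEnablePollMsg();`.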

You have been warned!

Not So Graphical After All

Last but not least, in order to do anything useful with the nodes from node-red-contrib-openzwave be prepared to write some good, old-fashioned JavaScript. The messages that arise from zwave-in nodes are very direct representations of the raw Z-Wave data structures as JavaScript map objects. Ditto for the messages that you must send to zwave-out nodes. Plumbing the two together using the simple filtering and routing provided by built-in switch nodes and the like would be a nightmare.

Instead, the core of what should be a simple “press buttons to turn on and off a switch” flow relies on a JavaScript Function node:

Here is the complete JavaScript function implementing the critical bit of this flow’s logic:

// javascript function node that passes
// its input message through exactly one
// of its 5 outputs
// a message indicating that a particular
// button was pressed on a particular
// Aeotec WallMote Quad will be passed
// through the corresponding output 1 - 4
// any other message will be passed
// through output 5
// notes:
// - each element in the returned array
// corresponds to one of the javascript
// function node's outputs
// - setting an output to null means that
// there will be no flow continuing
// along that path
// - this function assumes incoming
// messages conform to the pattern
// for those arising from a `z-wave in`
// node from _node-red-contrib-openzwave_
// - it inspects incoming message
// payloads for particular values
// indicating that they are button
// press events from a particular
// Aeotec Wallmote Quad on my
// network and forwards them
// accordingly

// msg.payload.nodeid on my network
// for the Aeotec WallMote Quad
// to monitor
var NODE_ID = 2;

// msg.payload.cmdclass for the
// button events to monitor
var CMD_CLASS = 91;

// msg.payload.currState value
// indicating a button press event
// (rather than release, double
// tap etc.)
var CURR_STATE = "Pressed 1 Time";

// pass msg through output 5 if it fails
// any of the following checks
var result = [null, null, null, null, msg];

try {

    // pass msg through output 5 if not the
    // WallMote Quad
    if (msg.payload.nodeid != NODE_ID) {
        return result;
    }

    // pass msg through output 5 if not button related
    if (msg.payload.cmdclass != CMD_CLASS) {
        return result;
    }

    // pass msg through output 5 if not a button press
    if (msg.payload.currState !== CURR_STATE) {
        return result;
    }

} catch (e) {
    // pass msg through output 5 if an error occurs
    return result;
}

// eureka! we have a button press!

// clear output 5
result[4] = null;

// pass msg through output corresponding to button
// number
result[msg.payload.cmdidx - 1] = msg;

return result;

In the main flow, the first output from this JavaScript function node connects to a change node that formats its topic and payload to send a Z-Wave command to turn off the switch device whose node id is 3:

The second output from the JavaScript function node connects to a nearly identical change node. The only difference is that msg.payload.value is 1 rather than 0 so that the switch will be turned on rather than off.
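For reference, the work those two change nodes do can be expressed as an equivalent Function-node body (the topic "setValue" and the payload shape {"nodeid":3,"value":...} come directly from the flow itself; the wrapper function is just so the sketch stands alone):

```javascript
// Build the Z-Wave command message the "outlet on" / "outlet off"
// change nodes produce.  Node id 3 is the Smart Switch on my network.
function makeSwitchCommand(on) {
    return {
        topic: "setValue",
        payload: { nodeid: 3, value: on ? 1 : 0 }
    };
}
```

In an actual Function node you would mutate the incoming msg and end with `return msg;` instead.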

In either case, the formatted command message is passed to a zwave-out node that causes the Z-Wave controller to issue the corresponding command. The logged output at that point reflects only that the command was sent. A separate, asynchronous message will be logged after passing through the fifth output of the JavaScript function node when the switch state actually changes as a result.
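If the debug output gets noisy, a small Function node ahead of the debug node can filter for just the switch's asynchronous state changes. This is a hedged sketch using the same msg.payload.nodeid field the button-press function inspects:

```javascript
// Pass through only events originating from the Smart Switch (node id 3);
// returning null from a Function node ends the flow thread quietly.
function filterSwitchEvents(msg) {
    if (msg && msg.payload && msg.payload.nodeid === 3) {
        return msg;  // forward switch-related events to debug
    }
    return null;     // drop everything else
}
```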

The complete JSON source code for this flow, including the preceding JavaScript function, can be downloaded here. Note that this is only useful as a model for your own flows since it will not work unless you happen to have a Z-Wave network that exactly matches mine in terms of models of devices, node id’s etc.

Adventures in Home Automation (Part 12 of an Open-Ended Series)

Hosting a Secure Home Automation Server

Previous posts in this series have alluded to the many challenges of hosting home automation services at home. While there are a number of third-party, stand-alone “hub” products that provide this capability out-of-the-box, none of those I’ve tried has been well-supported by its manufacturer or had functionality sufficient to most people’s needs. Given the fragmented and rapidly changing nature of the home automation / IoT space, good products go bad very quickly, and most of these products were only ever so-so to start with.

For these reasons I have taken the completely self-hosted, self-maintained approach, instead. The following is a very high-level survey of what that actually means.

Buckle up!

The Platform

My current home automation platform consists of Node-RED running on a Raspberry Pi 3B+. The Pi is on the same Local Area Network (LAN) as my home automation gear (“smart” lights, switches, security cameras etc. of various makes and models). It is exposed to the Internet using my router’s ability to do port-forwarding and the use of the DuckDNS free dynamic DNS service. It is secured using a free certificate issued by Let’s Encrypt obtained and regularly renewed via the free certbot service.

While I have tried a number of open source home automation platforms including openHAB and Home Assistant I have gone with Node-RED for reasons discussed earlier in this blog series. Suffice it to say here that for my purposes, and given the state of the art in all things IoT, “less is more.” While products like openHAB and Home Assistant promise to be one-stop shops in controlling your complete “smart” home, the reality circa 2019 is that I have found that relying primarily on each IoT vendor’s native apps and services while using Node-RED, Adafruit IO and IFTTT to provide a minimal amount of communication and automation gluing them together produces far more satisfactory results. Your mileage, as the saying goes, may vary.

The Details


My “smart” home hardware setup consists of:

  • A Philips Hue hub controlling a number of lights spread across a number of indoor and outdoor spaces

  • A couple of Wemo wifi-controlled AC outlets

  • An Arlo cloud-enabled security camera

  • A Google Home device in each of several rooms, one of which is a Google Home Hub (which, despite the name, is really just a Google Home speaker with a built-in LCD display)

  • A Raspberry Pi “always on” server

  • A Google Wifi mesh router setup

I have dabbled, and continue to dabble, with additional devices and technologies. For example, I have a Z-Wave controller and a few compatible devices but have not found them to be up to the task of replacing equivalent products from more mainstream manufacturers.

Network Configuration

  • The Hue hub, Google Home devices, Arlo camera and Wemo outlets are all on the same LAN as the Raspberry Pi

  • The router is configured to assign a reserved IP address to the Raspberry Pi and to forward ports 80 and 1880 to it (the latter is the standard port used by Node-RED)

  • To make the latter reasonably secure, the web server (Apache) and Node-RED instances are configured to require SSL / TLS

  • To make it possible to use SSL / TLS the Raspberry Pi is configured to use a free Let’s Encrypt certificate that is automatically installed and regularly renewed using certbot

  • To make those forwarded ports useful, the Raspberry Pi is configured to use DuckDNS to publish my home LAN’s dynamic IP address using DNS

Software Configuration

The “net result” (pun intended) is that I can access all of my home automation gear and services in a reasonably secure way from anywhere I have an Internet connection. I do so mostly through each product’s native apps and cloud services but can use my Node-RED instance for the last mile of integration and automation. The latter requires yet more free, cloud based services from IFTTT and Adafruit IO.

To get there, I had to rely on my professional experience supplemented by extensive research and experimentation on:

  • Linux system administration (the Raspbian flavor of the Debian stretch distro, in particular)

  • Best practices for installing and using Raspberry Pi based hardware for always on, fairly high volume (by home application standards) services – e.g. using higher capacity storage media with better life-time read / write cycle performance than the SD cards that are the norm for booting and running a Pi. (Hints: use balenaEtcher to burn your desired Raspbian image onto a USB drive of any size up to 2TB, but be sure to use a powered USB hub due to the Pi’s notoriously under-powered USB ports. Install the fuse / exfat packages and use the same powered USB hub to mount additional external storage of any size.)

  • Apache web server installation and configuration on Raspbian

  • Node.js installation and configuration on Raspbian (required for Node-RED)

  • Node-RED server installation and configuration on Raspbian (required in order to obtain a more up-to-date version than that installed by default in the standard Raspbian images)

  • Dynamic DNS using DuckDNS in particular

  • SSL / TLS configuration using certbot

  • How to work around “features” in the various wifi and other protocol implementations in the murky sea of Android versions and IoT devices to get them all actually communicating with their own cloud services and mobile apps, let alone each other and with IFTTT and Node-RED (Hint: disable the feature in recent versions of Android that constantly scans for wifi signals even when wifi is theoretically off in order for the apps for third-party wifi based devices like Wemo, Roomba etc. to work correctly. Seriously.)

For what it’s worth, I experimented with running Node-RED in a Docker image but abandoned that because the “official” Node-RED image is too limiting for node types with native dependencies like OpenZWave. Creating my own image to address such issues would just add yet more links to an already long and fragile chain of dependencies. In the end, publishing and securing a native Node-RED instance on Raspbian was far simpler and achieved better results than trying to implement the same functionality inside a container.

Adventures in Home Automation (Part 11 of an Open-Ended Series)

Part 10 of this series describes a particular Node-RED flow that implements home automation driven by presence detection events arising from Life360 and delivered to my local Node-RED instance from an Adafruit IO feed using IFTTT as an intermediary.

No, really.

As stated in that earlier post, I chose Node-RED less for its particular virtues than because of the ever dwindling number of viable alternatives. In this post I will examine in some detail why the trend in IoT toward approaches like that epitomized by Node-RED is a tremendously bad idea. I do include a few hard-won insights on using Node-RED if you are brave and foolish enough to follow me that far down this path.

Why Graph-Based Programming Paradigms Do Not Help

The only way to understand Node-RED and how to create automation using its “flows” is to already be very well versed in a number of software development and information technology concepts and techniques:

- A flow is a directed graph (digraph) whose nodes represent flow-of-control operations and / or API invocations and whose edges represent the transmission of messages from one node to zero or more successor nodes.

- Messages are either Javascript entities or binary “blobs” representing API-specific data.

- The flow-of-control represented by various built-in node types and the edges connecting them can result in parallel threads of execution requiring concomitant care in synchronizing their activities.

While all of the preceding complexity is hidden behind the “simplicity” of a graphical paradigm, it still must be accounted for by the unwary automation developer. The simplified presentation as graphical nodes quickly becomes more of a hindrance than help. In other words, rather than allowing “non-programmers” to gain access to IoT automation, for all practical purposes graphical scripting engines like Node-RED can really only be used by people who are already quite comfortable with writing software using traditional programming languages and API’s.

The names of some of the built-in node types in Node-RED give examples of what I mean by the preceding:

- switch
- join
- http
- http response

…and so on. For a very simple case, consider the switch node labeled validate in the Mode Handler flow described in part 10. It is visually nondescript, saying nothing about the actual flow of control it represents other than it has one input and three outputs. You must open up its detail page to see what it actually does:

It should then be fairly intuitive to someone already familiar with a programming language like Javascript, C++, Java or C# that the validate node in question is more or less equivalent to something like:

switch (msg.payload) {
case AWAY:
    // ... route msg through output 1 ...
    break;
case HOME:
    // ... route msg through output 2 ...
    break;
default:
    // ... route msg through output 3 ...
}

All of a sudden, Node-RED starts to feel far less “graphical” and much more like a traditional programming language with a particularly cumbersome IDE.

The join node is even worse. As its name suggests, it is the Node-RED equivalent of what the Linux pthread library and the Java Thread class mean by “join.” The fact that “join” is even a thing to take into account when designing Node-RED flows means that you had jolly-well better be able to think like a programmer before you start.

If you have no idea what I am talking about at this point, perhaps you might consider whittling as a hobby instead of trying to automate your “smart” home. That is not a disparagement of you, Dear Reader, but of the state of the art in home automation.

Why DIY Servers Are Not for the Naive Nor Faint of Heart

Even if Node-RED were all it is cracked up to be, just getting as far as being able to run a “hello world” level flow requires that you have a Node-RED instance running somewhere. For it to be able to respond 24x7x365.25 to IoT events it must be always on, so running it on your desktop PC or laptop is not generally an option. For it to have direct access to your local IoT devices it must be on your local network. For it to integrate with cloud-based services like global MQTT brokers, IFTTT, not to mention the many IoT device brands whose automation API’s are cloud-based, your always on, local Node-RED instance must also be connected to the Internet.

Not only does this require that you understand how to install, configure and maintain a stand-alone server of some kind (I use a Raspberry Pi to host services like Node-RED), it is a malware invasion of your home network waiting to happen unless you really, truly know what you are doing and take some fairly labor-intensive precautions.

An alternative could be to run Node-RED on a cloud hosting service (IBM’s cloud hosting service formerly known as Bluemix has built-in support for hosting Node-RED instances). However, you would still have to expose your local devices and other cloud service credentials to the Internet, with all the attack vectors that implies, in order for a cloud based IoT server to do anything particularly useful.

Using Node-RED Despite All of the Above

The flow described in part 10 demonstrates a number of subtleties:

  1. Sequential flow of control along a single thread is implemented by connecting an output of one node to an input of another.

  2. Two or more edges emanating from a single output creates multiple threads of control which subsequently proceed in parallel.

  3. Additional flow of control constructs are implemented as special node types such as switch, split, join, batch etc.

  4. Flow threads should only begin from input-only nodes like inject, http in, mqtt in and so on (but see discussion below about design issues in some node libraries like node-red-contrib-huemagic) or from specific flow-of-control nodes like batch, trigger and the like.

  5. Flow threads must terminate when they reach output-only nodes like http out, mqtt out, debug etc.

A common design defect in many built-in, optional and user contributed node libraries is to implement device control operations as terminal output nodes. Examples include control nodes provided by node-red-node-wemo, node-red-contrib-ifttt etc. This is bad design because it means that the operations they implement can only occur as the last step in a flow thread.

A less common design defect appears in node-red-contrib-huemagic (but at least it also provides a useful, if very poorly documented, work-around for at least some cases). That is where a function node (one with both inputs and outputs) that exists for its side-effects (controlling Hue light scenes or groups) can also function as a source of asynchronous state change messages.

The Home and Away sub-flows described in part 10 address the latter by always passing in scene names and group id’s as msg properties rather than by configuring the Hue Group and Hue Scene nodes directly. Without that, the flows would work as expected when a message was injected using one of the inputs in the Mode Handler flow but there would also be spurious, asynchronous threads appearing from the middle of the sub-flows every time one of the configured Hue groups or scenes changed state.

Another common design problem among Node-RED’s library of built-in node types is coupled pairs of nodes that require preserving message properties from one to the other. Often with no documentation of exactly what must be preserved. Always without regard to what any intervening nodes may choose to do.

For example, the message that arrives at a http out node must have msg properties that came from the corresponding http in node. One of the primary use-cases for a join node depends on preserving meta-data that originates in a split node.
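The practical consequence for any Function node sitting between such a pair: mutate the message you received rather than building a fresh one, so the properties the downstream node relies on (msg.req and msg.res for http out, msg.parts for join) survive. A minimal sketch:

```javascript
// Replace only the payload; msg.req / msg.res (and any other meta-data)
// ride along untouched to the downstream `http out` node.
function handleRequest(msg) {
    msg.payload = { status: "ok" };  // illustrative response body
    return msg;
}
```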

Trouble arises when attempting to do anything non-trivial while handling an HTTP request before responding to it, between a split and its corresponding join, and so on. The Away and Home sub-flows are both implemented according to a design pattern that addresses this issue. The same pattern supports the accumulation of multiple outcome message payloads when more than one operation is invoked while handling a request.

In the abstract, this pattern looks like the following:

The important features of this pattern are:

  • Execute multiple automation operations sequentially

  • Accumulate all operation outcome message payloads in the final output message

  • Preserve meta-data (e.g. msg properties that hold HTTP request headers, msg.parts etc.) from the input message to the final output message

This is accomplished by:

  1. Connecting each intermediate output to the input of the next to achieve the desired sequence of operations

  2. In parallel, attach the original input message and the output of each intermediate operation node directly to a join, after setting a well-defined value for msg.topic on each such message

  3. Configure the join to manually construct a key / value map based on msg.topic as the key from _n_ messages where _n_ is the number of parallel input paths leading to the join

The output of the join will be a single message with meta-data from the original input and a Javascript map object as its payload. That map will have _n_ key / value pairs whose key names are the values of msg.topic from each input to the join and whose values are the corresponding contents of msg.payload.
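The map construction the join performs can be sketched in plain JavaScript. The topic names below ("input", "lights", "camera") are hypothetical stand-ins for whatever values the parallel paths actually set:

```javascript
// What the join's "key/value map keyed by msg.topic" mode effectively
// computes from its n input messages (meta-data from the original input
// message would also be carried on the joined message).
function joinByTopic(messages) {
    var payload = {};
    messages.forEach(function (m) {
        payload[m.topic] = m.payload;
    });
    return { payload: payload };
}
```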

Putting It All Together

The Away and Home sub-flows from part 10 follow the preceding pattern, with the following nuances:

  • Use multiple intermediate nodes to adjust input payloads and topics between operations

  • Some of those intermediate nodes exist to help protect sensitive credentials that it is all too easy to leak accidentally when using API’s like IFTTT’s in a simpler, more “natural” fashion (although note that even then there are inherent weaknesses in Node-RED’s security model that mean you should take extraordinary precautions in securing your Node-RED instance as a whole, in addition to using these kinds of node-based security helpers)

  • Take into account that operations suffering from the “terminal node” design defect described above cannot be included in the accumulated output or participate directly in the sequential order of operations anywhere but at the end of a thread of control

Even then, the sequence of operations is very straightforward in both of these sub-flows. Each of them adjusts a few lights and controls a security camera in a particular order. They adjust the lights using individual invocations of Hue group and scene control nodes, treating the corresponding collections of lights as atomic operations. This relies on having set up the light groups in the native Hue app accordingly.

But what if you want to control a number of devices independently as a single “operation” in the sense of the preceding design pattern? In that case, use a sub-flow in place of an atomic operation in the abstract version of the flow.

For example, if I wanted to turn on or off a number of separate Hue groups as a single “operation” I could use a sub-flow like the following:

When passed a message with the boolean value true or false as its payload, this sub-flow will turn the Hue groups with the id’s 1, 2 and 3 on or off and send a single output message whose payload is an array of the output message payloads of the three Hue group command invocations. It uses a delay node to respect rate limits when making successive calls to the underlying Hue API. It properly synchronizes the final output using its own join encapsulated within the sub-flow, separate from the outer join in the overall flow that calls it.
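The fan-out stage of such a sub-flow could be written as the body of a Node-RED function node along the following lines. This is a hedged sketch, not the actual sub-flow export; the group ids "1", "2" and "3" are taken from the example and would need to match your own Hue bridge configuration.

```javascript
// Turn one incoming on/off message into three, each addressed to a
// Hue group by setting msg.topic to the group id while preserving
// the boolean payload.
function fanOutToGroups(msg) {
  const groupIds = ["1", "2", "3"];
  // In Node-RED, returning [[m1, m2, m3]] from a function node sends
  // all three messages through output 1, where a delay node can
  // rate-limit them before they reach the Hue group command node.
  return [groupIds.map((id) => ({ topic: id, payload: msg.payload }))];
}

const [out] = fanOutToGroups({ payload: true });
// out: three messages with topics "1", "2", "3", each with payload true
```

Note that this replaces the three visually parallel change nodes with a single function node; either form produces the same stream of messages for the downstream delay and join.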

Another way in which graphical programming paradigms can be inefficient to learn and debug is how visually misleading they can be. To understand the preceding example you must notice that once the sub-flow diverges into three distinct threads, based on the divergence of paths leading out of the initial input pseudo-node, there continue to be three parallel paths of execution all the way to the join just before the pseudo-node labeled output 1. In particular, despite appearances, the delay node labeled 1 msg/sec and the Hue group command node will each be invoked three times, in that order on each thread but in no guaranteed order among the threads; i.e. once for each of the three values of msg.topic set in the visually parallel change nodes. Hence the need for the rate-limiting delay and the subsequent join.

Good luck explaining any of that to your non-programmer friends who hope you can show them how to do DIY automation using a platform like Node-RED without having to “learn to program.” Such sub-flows can become arbitrarily complex depending on your use case, the number of different makes and types of devices you want to control and how they are configured.

[{"id":"c5875b70.75f548","type":"subflow","name":"Home","info":"Disarm security camera then activate lighting scene.\n\nA standard `http request` node is used rather than _node-red-contrib-ifttt_ due to the latter's lack of the ability to receive outcome messages from the IFTTT request and therefore be included in sequential flows.","category":"mode","in":[{"x":60,"y":60,"wires":[{"id":"9d980284.226"},{"id":"cf25c48.951ef38"}]}],"out":[{"x":880,"y":60,"wires":[{"id":"9d980284.226","port":0}]}],"env":[]},{"id":"61bd6dad.b8b1e4","type":"change","z":"c5875b70.75f548","name":"Great Room Sunset","rules":[{"t":"set","p":"payload","pt":"msg","to":"Great Room Sunset","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":330,"y":200,"wires":[["c92afcf4.61575"]],"info":"Set payload to specify the Hue scene to activate"},{"id":"c92afcf4.61575","type":"hue-scene","z":"c5875b70.75f548","name":"","bridge":"ef03cafe.898348","sceneid":"","x":530,"y":200,"wires":[["b82e2f30.a787a"]],"info":"Hue scene command."},{"id":"9d980284.226","type":"join","z":"c5875b70.75f548","name":"","mode":"custom","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"3","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":770,"y":60,"wires":[[]],"info":"Join three messages on topic."},{"id":"d7b58fe0.b29fc","type":"change","z":"c5875b70.75f548","name":"ifttt:","rules":[{"t":"set","p":"topic","pt":"msg","to":"ifttt","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":590,"y":120,"wires":[["9d980284.226"]],"info":"Set `msg.topic` to `\"ifttt\"`."},{"id":"b82e2f30.a787a","type":"change","z":"c5875b70.75f548","name":"hue:","rules":[{"t":"set","p":"topic","pt":"msg","to":"hue","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":690,"y":200,"wires":[["9d980284.226"]],"info":"Set `msg.topic` to 
`\"hue\"`."},{"id":"8eb655a.9694fa8","type":"http request","z":"c5875b70.75f548","name":"disarm security camera","method":"GET","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"basic","x":370,"y":120,"wires":[["d7b58fe0.b29fc","a1e30e2b.fbcac"]],"info":"Send request to IFTTT webhook to arm security camera."},{"id":"cf25c48.951ef38","type":"credentials","z":"c5875b70.75f548","name":"url","props":[{"value":"url","type":"msg"}],"x":170,"y":120,"wires":[["8eb655a.9694fa8"]],"info":"Set `msg.url` to specify the IFTTT webhook to invoke.\n\nA credentials node is used to hide the IFTTT API key URI segment."},{"id":"a1e30e2b.fbcac","type":"delay","z":"c5875b70.75f548","name":"","pauseType":"delay","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"x":140,"y":200,"wires":[["61bd6dad.b8b1e4"]],"info":"Give the webhook request a bit of time to propagate and execute."},{"id":"b3227a4e.1af8b8","type":"subflow","name":"Away","info":"Turn off indoor lights then arm security camera.\n\nA standard `http request` node is used rather than _node-red-contrib-ifttt_ due to the latter's lack of the ability to receive outcome messages from the IFTTT request and therefore be included in sequential flows.\n\nNote also that the same design defect preventing the use of _node-red-contrib-ifttt_ makes it impossible to include information regarding the outcome of the Wemo command node in the 
output.","category":"mode","in":[{"x":60,"y":60,"wires":[{"id":"c59de58.1d88618"},{"id":"2526003d.d9d49"}]}],"out":[{"x":880,"y":60,"wires":[{"id":"2526003d.d9d49","port":0}]}],"env":[]},{"id":"c59de58.1d88618","type":"change","z":"b3227a4e.1af8b8","name":"14:false","rules":[{"t":"set","p":"topic","pt":"msg","to":"14","tot":"str"},{"t":"set","p":"payload","pt":"msg","to":"false","tot":"bool"}],"action":"","property":"","from":"","to":"","reg":false,"x":120,"y":200,"wires":[["36a7df50.a2bbc","1f163948.519857"]],"info":"Set message topic and payload to send command to turn off indoor lights.\n\nTopic is the Hue group id (ignored by Wemo).\n\nPayload `false` indicates lights should be turned off."},{"id":"1f163948.519857","type":"hue-group","z":"b3227a4e.1af8b8","name":"","bridge":"ef03cafe.898348","groupid":"0","colornamer":true,"x":290,"y":160,"wires":[["d26df2d8.664df","66f41c36.7183e4"]],"info":"Hue light group command."},{"id":"36a7df50.a2bbc","type":"wemo out","z":"b3227a4e.1af8b8","name":"","device":"8588745c.1e1098","label":"Laser Up Light","x":300,"y":240,"wires":[],"info":"Wemo Insight command to turn off Laser Up Light."},{"id":"a7482f19.ae481","type":"http request","z":"b3227a4e.1af8b8","name":"arm security camera","method":"GET","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"basic","x":640,"y":200,"wires":[["ef47d938.2164e8"]],"info":"Send request to IFTTT webhook to arm security camera."},{"id":"2526003d.d9d49","type":"join","z":"b3227a4e.1af8b8","name":"","mode":"custom","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"3","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":770,"y":60,"wires":[[]],"info":"Join three messages on 
topic."},{"id":"d26df2d8.664df","type":"change","z":"b3227a4e.1af8b8","name":"hue:","rules":[{"t":"set","p":"topic","pt":"msg","to":"hue","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":450,"y":120,"wires":[["2526003d.d9d49"]],"info":"Set `msg.topic` to `\"hue\"`."},{"id":"ef47d938.2164e8","type":"change","z":"b3227a4e.1af8b8","name":"ifttt:","rules":[{"t":"set","p":"topic","pt":"msg","to":"ifttt","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":830,"y":200,"wires":[["2526003d.d9d49"]],"info":"Set `msg.topic` to `\"ifttt\"`."},{"id":"66f41c36.7183e4","type":"credentials","z":"b3227a4e.1af8b8","name":"url","props":[{"value":"url","type":"msg"}],"x":450,"y":200,"wires":[["a7482f19.ae481"]],"info":"Set `msg.url` to specify the IFTTT webhook to invoke.\n\nA credentials node is used to hide the IFTTT API key URI segment."},{"id":"ef03cafe.898348","type":"hue-bridge","z":"","name":"Hue Chez Nous","bridge":"","key":"Lx1feghOoNQdbkBiPFqyGtYTVbOKQcPNQinlZOFJ","interval":"3000"},{"id":"8588745c.1e1098","type":"wemo-dev","z":"","device":"231838K12000A4","name":"Laser Up Light"},{"id":"36291bbb.9d2f34","type":"tab","label":"Mode Handler","disabled":false,"info":"Listen for `home` or `away` messages from the Adafruit IO `mode` feed MQTT broker and invoke the corresponding sub-flow."},{"id":"75ba2ed0.8dc92","type":"mqtt in","z":"36291bbb.9d2f34","name":"","topic":"parasaurolophus/feeds/mode","qos":"0","datatype":"auto","broker":"bbd83ebe.928de","x":160,"y":60,"wires":[["993372ba.b24fc"]],"info":"Listen for MQTT messages from Adafruit IO `mode` topic."},{"id":"b318df8f.0dd73","type":"switch","z":"36291bbb.9d2f34","name":"validate","property":"payload","propertyType":"msg","rules":[{"t":"eq","v":"away","vt":"str"},{"t":"eq","v":"home","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":3,"x":360,"y":280,"wires":[["d6ac9878.9007e8"],["f2e2a97e.9592a8"],["9607b8a1.b44d48"]],"info":"| `msg.payload` | Output |\n| 
------------- | -------- |\n| `away` | 1 |\n| `home` | 2 |\n| Otherwise | 3 |\n"},{"id":"9607b8a1.b44d48","type":"template","z":"36291bbb.9d2f34","name":"invalid mode","field":"payload","fieldType":"msg","format":"handlebars","syntax":"mustache","template":"invalid mode: ","output":"str","x":530,"y":340,"wires":[["9e6cf177.489cf"]],"info":"Format error message."},{"id":"9e6cf177.489cf","type":"debug","z":"36291bbb.9d2f34","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":710,"y":280,"wires":[],"info":"Write output to debug log."},{"id":"d6ac9878.9007e8","type":"subflow:b3227a4e.1af8b8","z":"36291bbb.9d2f34","name":"","env":[],"x":530,"y":220,"wires":[["9e6cf177.489cf"]],"info":"Invoke _Away_ sub-flow."},{"id":"f2e2a97e.9592a8","type":"subflow:c5875b70.75f548","z":"36291bbb.9d2f34","name":"","env":[],"x":530,"y":280,"wires":[["9e6cf177.489cf"]],"info":"Invoke _Home_ sub-flow."},{"id":"993372ba.b24fc","type":"change","z":"36291bbb.9d2f34","name":"mode:","rules":[{"t":"set","p":"topic","pt":"msg","to":"mode","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":170,"y":160,"wires":[["b318df8f.0dd73"]],"info":"Normalize `msg.topic` to `mode`."},{"id":"4380a75a.8a98b8","type":"inject","z":"36291bbb.9d2f34","name":"","topic":"mode","payload":"away","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":150,"y":240,"wires":[["b318df8f.0dd73"]],"info":"Inject `away` message on topic `mode` for testing purposes."},{"id":"b41b3678.331978","type":"inject","z":"36291bbb.9d2f34","name":"","topic":"mode","payload":"home","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":150,"y":320,"wires":[["b318df8f.0dd73"]],"info":"Inject `home` message on topic `mode` for testing 
purposes."},{"id":"3ec1cb7.e48c334","type":"inject","z":"36291bbb.9d2f34","name":"","topic":"mode","payload":"fubar","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":150,"y":400,"wires":[["b318df8f.0dd73"]],"info":"Inject `fubar` message on topic `mode` for testing purposes."},{"id":"bbd83ebe.928de","type":"mqtt-broker","z":"","name":"","broker":"io.adafruit.com","port":"8883","tls":"4c296db4.fd6aa4","clientid":"","usetls":true,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthPayload":"","closeTopic":"","closeQos":"0","closePayload":"","willTopic":"","willQos":"0","willPayload":""},{"id":"4c296db4.fd6aa4","type":"tls-config","z":"","name":"","cert":"","key":"","ca":"","certname":"","keyname":"","caname":"","servername":"","verifyservercert":true}]

If only... Trek


Having endured the tired, formulaic Enterprise, the dystopian NeoCon nightmare of the Abrams reboot movies and now both seasons of the ham-handed Discovery I am forced to wonder: what would a Trek prequel or reboot be like if made by people who had ever watched the original series or would have understood and enjoyed it if they had? Or even just knew how to write movie and TV characters for whom one did not almost immediately feel contempt if not outright loathing?

  • The NX-01 would have a crew of at most a handful of people crammed into a ship with no room for passengers or extraneous personnel of any kind.
  • Ships of the NX-01 era would have no technology capable of visual communication with species with whom they had not previously had friendly relations sufficient to have agreed on network protocols and video encoding standards.
  • Neither Romulans, Klingons nor any other culture known to the nascent Federation would have cloaking technology prior to the time of the original series episode Balance of Terror.
  • No one in the “prime” universe would have any knowledge of the existence of the “mirror” universe prior to the time of the original series episode Mirror, Mirror.
  • Pike and all the senior officers of Discovery would be executed for violating General Order 7 in (re-)visiting Talos IV.
  • No one not directly involved in Section 31’s operations would have heard of it before the (extremely ridiculous) self-outing that began in the Deep Space Nine episode Inquisition.
  • Nothing like the most basic technology available to the crew of Discovery would have been obtainable by the NCC-1701 era Star Fleet, laying aside the nonsensical anti-science underlying the whole concept of the “spore drive,” “time crystals” etc.

I am pretty sure that during the production of the 2009 reboot Star Trek movie I was not intended to find myself hoping that the giant ice monster would swallow Chris Pine’s version of James T Kirk whole. Yet by that point in the movie that would have been the only really satisfying twist in Abrams’ abominable “re-imagining” of the characters and events leading up to the original series.

Still, I find myself increasingly well-disposed to the timidity of the Enterprise show runners’ refusal to break from the well-known formulae. (Of course Archer would just “intuit” and act on the Prime Directive instead of exploiting the many interesting plot lines that could have been extracted from the “mistakes” made early in Star Fleet’s career that led to its creation. Everyone knows that possession of cloaking technology is the Romulan Empire’s defining characteristic. And so on and on.) Even the hateful authoritarianism and war-mongering of Abrams’ version of the United Federation of Planets compares favorably to the just plain, old-fashioned incompetence of Discovery’s writers and producers.

I found myself having to rewind many times while trying to watch both seasons of the latter. I had to reassure myself repeatedly that no, I hadn’t missed some important bit of dialog or action. The incessant non-sequiturs and self-contradictions, occurring with such frequency within any given scene, let alone across each of the seasons’ “plot arcs” (more like “scatter plots”), were not my imagination. Discovery managed to remove any vestige of science from its version of “science fiction” while simultaneously violating every accepted norm of basic story-telling. So much so that I finally came to suspect this was deliberate, as when the effects went full-on Galaxy Quest in their depiction of the “spore drive spin and drop,” the “turbo-lift rollercoaster” and the like. Galaxy Quest was a brilliant homage in the form of an affectionate, well-written parody. (“Whoever wrote this episode should die!”) If taken at face value, Discovery is just a sad joke its creators appear to have no idea they perpetrated on themselves.

As with Enterprise, the most consistent theme in Discovery was missed opportunity on the part of the writing staff. They actually gave themselves the perfect opportunity to do away with any curmudgeonly carping from the likes of myself at the end of season 0.5, when they described the in-all-ways absurd “spore drive” as tapping into a true multiverse. I would have been fine with the idea that the decision to simply co-opt the Star Trek brand, rather than be part of it, was actually explainable in context. In that alternate history, the writers would have had Stamets (the TV character, not the New Age “mycologist” whose goofball theories underlie the “spore drive” and after whom the former was named) provide an explanation amounting to proof that there never was a “prime” universe and that the characters and incidents of Discovery were as distinct from those in any preceding series as the “mirror” universe was from that of the original series. But no, even that use of their own immediately-dropped plot device seemed to be beyond the ability of the clowns hired by the “CBS Originals” executives to bring home. Instead, their most consistent device for attempting to engage their audience was to end every single episode with an overtly melodramatic cliff-hanger worthy of ’40s-vintage Saturday-matinee serials.

In the end, it remains true that the only real Trek property since the end of Voyager’s run is The Orville. As with Galaxy Quest, The Orville demonstrates that well-executed satire can be a more sincere form of flattery than a poorly executed imitation.

Adventures in Home Automation (Part 10 of an Open-Ended Series)

Node-RED Redux

As it turns out, IFTTT integrates with Adafruit IO, which incorporates a cloud based MQTT broker. Now that Comcast has decided to screw over its Stringify customers by shutting down the service, I was more motivated to see how close I could come to reproducing the now-defunct approach to presence detection automation described in part 3 using Node-RED in place of Stringify. The answer turns out to be “tantalizingly.”

The bottom line is that, in addition to the enormous level of effort and technical sophistication required by this approach, it remains mostly a proof-of-concept due to missing features, bugs and reliability issues in the many open-source components and cloud-based services involved. While the end-to-end functionality works much of the time, there are occasions when the delicate sequence of Internet messages between phone, Life360, IFTTT, Adafruit IO, Node-RED, Arlo and Wemo fails to fully propagate in a timely fashion, if at all. That is when the many bugs and unfinished features in Node-RED and its user-contributed components do not prevent one from pursuing the most sensible approach from the beginning.

[Note that I preceded this proof-of-concept using Node-RED by exploring the use of openHAB’s built-in presence detection and rules engine, which would have eliminated many links in the extremely fragile chain referred to above. Unfortunately, openHAB is in an even worse state than Node-RED, such that neither its presence detection nor automation features work at all in the version released at the time of this writing.]

Here is what this version of the “absence detected” sequence looks like:

Not shown is the corresponding “presence detected” sequence, but you can imagine that it looks very similar. In fact, the only difference is that the triggering event from Life360 would be “first family member arrives” with a corresponding IFTTT applet forwarding a message to the same Adafruit IO feed except with the payload home instead of away.

This assumes that you have:

  • A reciprocal willingness to continuously share your detailed location information with everyone whose presence you want to detect and a working Life360 “circle” that includes all of them

  • A working IFTTT account and a comfort level with creating applets, including ones based on the built-in “webhook” service as well as services to control devices not directly supported by Node-RED (Arlo cameras, in my case)

  • A working Adafruit IO account and have successfully created a feed by which to transmit arrival and departure notifications

  • A working Philips Hue hub with some lights to control

  • A working Wemo outlet connected to its cloud service

  • A working Arlo security camera connected to its cloud service

  • A working Node-RED instance running on a machine that is always on, always connected to the Internet and on the same Local Area Network (LAN) as your Hue hub

  • The ability and willingness to plumb all the various, cloud accounts together so that IFTTT can communicate with Life360 and Adafruit IO, Node-RED can communicate with Wemo and so on

No, really, I’ll wait… all done? Good job!

If any of the preceding eludes you, be assured that it isn’t your fault. Rather, the point of the preceding litany is to illustrate the degree to which the “Internet of Things” is not yet ready for real-world applications accessible to the typical home user.

On to the details…

Handle MQTT Message from Adafruit IO

This IFTTT applet:

forwards Life360 absence detection events to an Adafruit IO feed:

The Mode Handler Node-RED flow (mode-handler-flow.json) subscribes to messages on the Adafruit IO feed’s MQTT broker:

It uses two sub-flows to handle away and home messages, respectively:

The Node-RED flow relies on optional and user-contributed nodes to integrate directly with Philips Hue and Wemo. It calls back to an IFTTT webhook to control an Arlo camera, since Netgear only shares its API with a small number of “partner” companies, including IFTTT, Google and Amazon, but refuses to make its API available directly to customers for implementations such as this.
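The routing performed by the validate switch node in the Mode Handler flow can be sketched in plain JavaScript. This is a hedged approximation of the node's configured rules, not its actual implementation:

```javascript
// Approximate the "validate" switch node: route an incoming MQTT
// message to the Away sub-flow, the Home sub-flow, or the error path,
// based on its payload.
function routeMode(msg) {
  switch (msg.payload) {
    case "away":
      return "Away";                          // output 1: invoke the Away sub-flow
    case "home":
      return "Home";                          // output 2: invoke the Home sub-flow
    default:
      return "invalid mode: " + msg.payload;  // output 3: format an error for the debug log
  }
}
```

The three inject nodes in the flow exist precisely to exercise each of these paths (away, home and the deliberately invalid fubar) without waiting for a real Life360 event.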

Adventures in Home Automation (Part 9 of an Open-Ended Series)

openHAB: Things and Items and Channels, oh my!

This is not a tutorial on setting up or using openHAB. If anything, it is a cautionary tale about what you are about to subject yourself to if you start down the path of trying to create a “smart home” circa 2019.

In Adventures in Home Automation (Part 5 of an Open-Ended Series) I touched at a high level on why, part way through 2019, home automation is not yet ready for home users. Here I dive into the details of one such system, as an example of why I make that claim.

The reason I focus on openHAB is simple: despite its near uselessness for real-world home automation purposes it is actually the best of the options with which I have experimented to date.

  • Both it and its cloud service are open source and free (unlike Home Assistant, for example, whose core contributors have launched a commercial cloud service)

  • It runs on inexpensive, ubiquitous hardware platforms (including the Raspberry Pi)

  • Despite its pre-beta quality and completeness, it is actually the most mature such product that is still under active development

With all that going for it, why don’t I love it?

First, there is no point even in attempting to set up openHAB or anything like it unless you are prepared to dive into the details of hardware configuration and system administration, software installation, configuration and scripting that could land you a DevOps gig at any Silicon Valley tech company.

Further, just to understand the basic getting-started documentation you must have a firm grasp of software engineering principles to make sense of openHAB’s jargon based on its Binding, Thing, Item and Channel software design model. (If you just asked yourself, “what on earth did this clown mean by a nonsensical phrase like ‘software design model?’” then you are like most normal human beings and openHAB is probably not for you.)

Still with me? Ok, here we go!

First off, openHAB does not have a UI; it has many of them. This is because none of them actually does everything you would need to use this product effectively. They are all web browser based and are accessed either by connecting to the web server built into your openHAB instance itself or through its associated cloud service. Note, however, that just installing any of these various UI’s and connecting your local openHAB instance to the cloud is a non-trivial undertaking for anyone who is not already very familiar with the intricacies of open source software projects and the system engineering expertise their authors invariably assume on the part of their customers.

Again, I make no pretense of a tutorial here. If you want to try any of this for yourself, you should start at https://www.openhab.org/docs/ and good luck to you!

Assuming you make it that far, the default and by far most useful option is the “Paper UI.” Among its virtues is that it is installed by default. It is the only one that comes anywhere close to a “one stop shop” for both configuring openHAB itself and making use of its features for controlling home automation devices in your “smart home.”

Here, for example, is a snapshot of what the “control” view looks like on my system:

For anything to show up in that screen, however, you must go on an amazing journey of discovery both for yourself in learning to use openHAB and in teaching your openHAB instance to see and control your devices.

To start with, openHAB directly knows very little about how to talk to any actual home automation gear. It relies on “bindings” that are installed separately to control specific makes and models of devices. I have installed bindings for two types of devices: Wemo and Z-Wave.

Installing a binding for a given type of device makes it possible to “discover” devices of that type actually installed in your system. Some bindings support automated discovery through the “Paper UI” while others require that you manually configure things. What is going on under the covers is that the “Paper UI” is modifying a collection of obscurely named JSON files hidden away in obscure locations on your host machine’s hard disk. (And, again, if you don’t already know what a “JSON file” is, then maybe you should consider a nice, off-the-shelf universal remote control from Radio Shack and be done with it. To be clear: that is not a criticism of you, but of openHAB and its ilk.) The trouble is, the “Paper UI” has not yet baked in even half of the features to be configured using these JSON files, so you will find yourself going directly into them using a text editor from time to time. Not for the faint of heart!

What you will have “discovered” automatically, manually through the “Paper UI” or by directly editing JSON files is a collection of Things.

Conceptually, a Thing is a particular piece of home automation gear like a Hue bulb, a Wemo outlet, a Z-Wave switch etc. Some types of Things correspond to configuration “objects” for which there is no physical device. For example, a particular set of MQTT topics you want openHAB to send and receive is represented as an “MQTT Thing.”

In other words, it really, really helps in understanding even the basic installation guide if you are already comfortable with the very “programmery” notion of “objects” in the sense of “object oriented programming,” where an “object” can represent anything and often represents a concept that has no physical manifestation.

All that said, a Thing in openHAB is entirely useless on its own. What actually matters from the point of view of having openHAB display and control the state of Things are Items.

“Er, um, what’s the difference between a Thing and an Item,” I hear you ask?

To a standard speaker of English, of course, there is no difference. But since we aren’t speaking English here but openHAB, the difference is both profound and annoying. An Item is, roughly speaking, the state of a particular parameter or control of a given Thing.

Consider a “smart” light bulb. The physical bulb is represented in openHAB as a Thing. Whether the bulb is currently on or off is an Item. The brightness level, color temperature, RGB or HSV color value etc. are all additional, separate Items. When you tell openHAB to turn on a given bulb, you are really telling it to send a command to a particular Item. When you ask openHAB what color a bulb is set to, you are inquiring about the state of a different Item.
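As a toy illustration, not openHAB's actual data model or API, the relationship can be pictured like this (all names and states here are made up for the example):

```javascript
// Toy model of the Thing / Item distinction: one Thing (the physical
// bulb) owns several independently addressable Items.
const bulbThing = {
  label: "Living Room Bulb",      // the physical device: a Thing
  items: {
    power:      { state: "ON" },  // on/off is one Item
    brightness: { state: 80 },    // brightness is a separate Item
    colorTemp:  { state: 2700 },  // color temperature: yet another Item
  },
};

// "Turning on the bulb" really means commanding one particular Item,
// leaving the Thing's other Items untouched.
function sendCommand(thing, itemName, command) {
  thing.items[itemName].state = command;
}

sendCommand(bulbThing, "power", "OFF");
```

The point of the exercise is that every command and every state query in openHAB addresses an Item, never the Thing as a whole.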

Ok, now that you have wrapped your head around Bindings, Things and Items, you’re good to go, right? If only! There is another concept, even less easy for a lay person to grasp, that you must understand in order to navigate even the most basic features of openHAB: Channels.

A Channel is the “glue” connecting a Thing to an Item. What you see in the “control” section of the UI are actually sets of Channels, grouped according to the Things to which they belong, displaying the behavior corresponding to their associated Items. What could be simpler? (Answer: almost anything!)

The set of possible channels for a given Thing is determined by its type. A Wemo outlet such as that named “Laser Up Light” in my configuration, for example, supports two channels: “Switch” and “Power.” “Switch” is a control channel that not only reports whether or not the outlet is currently turned on but can also be used to change that state by sending it control commands with the values, you guessed it, “OFF” or “ON.” The “Power” channel is read-only and reports the amount of power the outlet is currently drawing. A Hue bulb, on the other hand, does not have a “Power” channel but does have channels corresponding to things like brightness and color.

Some channel types are quite generic. Power outlets, smart bulbs and many other types of physical devices support “Switch” channels. But some Things aren’t designed ever to be turned off, so they do not have a “Switch” channel. And, of course, Things like MQTT topics that do not correspond to actual devices have their own very unique types of channels.

An advantage of the “Paper UI” is that it can render standard channel types in conventional ways and automatically supply control widgets where appropriate. For example, any “Switch” channel will automatically have a little toggle widget for turning the device on and off through the UI.
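The same Items behind those widgets can also be driven programmatically: openHAB exposes a REST API in which posting a plain-text command to /rest/items/<name> is equivalent to flipping the toggle in the UI. A hedged sketch follows; the host and the Item name “Laser_Up_Light_Switch” are hypothetical and would need to match your own configuration.

```javascript
// Build (but do not send) a command request for openHAB's REST API.
// Posting a plain-text body such as "ON" or "OFF" to
// /rest/items/<itemName> commands the Item behind a Switch channel.
function buildItemCommand(baseUrl, itemName, command) {
  return {
    url: `${baseUrl}/rest/items/${encodeURIComponent(itemName)}`,
    method: "POST",
    headers: { "Content-Type": "text/plain" },
    body: command, // e.g. "ON" or "OFF" for a Switch channel
  };
}

const req = buildItemCommand("http://openhab.local:8080", "Laser_Up_Light_Switch", "OFF");
// On Node 18+ this could then be sent with: fetch(req.url, req)
```

Separating the request construction from the send makes the shape of the API call easy to inspect without a live openHAB instance on the network.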

But that is about the limit of the “Paper UI’s” utility. It is so far from being ready for the average home user that simply deleting a Thing, Item or Channel after it has been added through the UI requires editing JSON files. (There are “delete” buttons in the UI associated with all these actions, but they do not work correctly in the version current at the time of this writing.) Seeing and controlling non-standard channel types requires even more heroic efforts. And that is assuming the average home user could be expected to go through the courses in systems and software engineering necessary just to reach the point of needing to edit their configuration in these ways. Based on anecdotal evidence gathered by the author, there is little chance of that.

And that only gets us as far as using openHAB as a remote control that presents a web browser based UI for manually turning gear off and on and the like. Real home automation implies the ability actually to automate. At the time of this writing the “rules engine” add-in for openHAB is self-described as still being a “beta” feature and my own experiments with it failed to delight. See part 10 of this series for a description of using Node-RED as an automation engine that integrates both with openHAB and other services like IFTTT.

Adventures in Home Automation (Part 8 of an Open-Ended Series)

Stringify’s users are the latest victims of corporate short-sightedness and greed. Its demise adds to the already long list of reasons to despise the people who run Comcast.

Part 3 of this series focused on the only viable implementation I’ve found for even the most trivial “smart home” scenarios. That implementation requires the combination of IFTTT and Stringify.

Now that Comcast has killed Stringify, I am aware of no practical DIY “smart home” solution currently available.

The email from Comcast informing Stringify’s users that they were abandoning us included recommendations for some alternate products.

  • IFTTT is insufficient for reasons already discussed in other posts.

  • Yonomi is just another wholly inadequate partial solution.

  • WebCore still describes itself as being at an “alpha” level of implementation years after its inception, and Samsung’s misbehavior, similar to Comcast’s, has ruined SmartThings’ ability to integrate with third-party products and services.