“Asynchrony” is a scary word. It means taking events as they come, managing somehow to avoid being overtaken by them.
Event-driven asynchrony is the foundation of serverless computing, which, as a programming framework, is tailor-made for the internet of things. When you consider event scenarios in an IoT context, the chief driver is the never-ending stream of sensor inputs that—depending on their timing, sequencing, frequency, and values—can swing the runtime behavior of the system in every possible direction.
When you layer event-driven microservices interactions over these sensor-driven complexities, it’s clear that today’s IoT environments are a potential rat’s nest of asynchronous craziness just waiting to happen.
Bringing resiliency to distributed IoT microservices requires a high-level programmatic abstraction for keeping their fundamentally asynchronous substrate under control. That’s how I’m interpreting the recent InfoWorld article about an open-source programming language called P that Microsoft has introduced for programming asynchronous applications in embedded systems, AI applications, and cloud services.
One concept that jumped out at me was the notion of a “heisenbug,” which the article defines as “timing-related bugs that often disappear during an investigation of them.” The term “heisenbug” stems from an analogy to physics’ Heisenberg Uncertainty Principle, under which the attempt to observe a system inevitably alters its state.
Where computing environments are concerned, heisenbugs are equivalent to probe effects, in which attaching a test probe—or simply sending an asynchronous test ping—to a system changes its behavior. What that implies is that the very act of trying to isolate, analyze, and debug some systemic glitches will alter the underlying systemic behavior of interest—perhaps causing the bugs in question not to recur. One of the chief causes of heisenbugs is the race condition, in which a system behaves erratically when asynchronous input events don’t take place in the specific order expected by that system’s controlling program.
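To make the idea concrete, here is a minimal sketch of a race condition in Python (a hypothetical illustration, not code from the article): two threads increment a shared counter. Without a lock, the read-modify-write sequence is not atomic, so increments can be lost—and the very instrumentation you add to observe the failure can change the timing enough that the bug vanishes, which is exactly the heisenbug effect.

```python
import threading

N = 100_000  # increments per thread

def run(use_lock: bool) -> int:
    """Run two threads incrementing a shared counter; return the final count."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(N):
            if use_lock:
                with lock:
                    counter += 1  # serialized: no updates are lost
            else:
                counter += 1  # unsynchronized read-modify-write: updates may be lost

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(True))   # always 200000: the lock serializes the updates
print(run(False))  # may be less than 200000 if the threads interleave mid-update
```

Note that the unlocked run will often *appear* correct, because whether updates are lost depends on how the scheduler happens to interleave the threads—precisely why such bugs disappear under investigation.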
What programmer can possibly predict—much less write code sequences to deal with—every possible scenario of event-driven asynchronous interactions in the IoT and other fundamentally asynchronous computing environments? That’s why P, or an equivalent safe asynchronous event-driven programming language, is essential and should be a required tool in every IoT or microservices developer’s toolkit.
Such languages let developers specify the precise sequence of execution steps to be taken by compiled code, regardless of the order in which the underlying runtime environments process asynchronous event messages. Programmers build application logic using abstraction layers that generate asynchronous event-handling code as collections of interacting state machines. What these state machines do is defer the handling of specific events until their proper place in sequence.
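The deferral idea can be sketched in a few lines of Python. This is a toy state machine of my own construction (it does not use P or its syntax): a hypothetical device must receive an “init” event before it can handle “read” events, so a “read” that arrives early is buffered and replayed once the machine reaches a state that can accept it.

```python
from collections import deque

class DeviceMachine:
    """Toy state machine that defers events it cannot yet handle,
    replaying them once a state transition makes them valid."""

    def __init__(self):
        self.state = "uninitialized"
        self.deferred = deque()  # events waiting for a valid state
        self.log = []            # order in which events were actually handled

    def handle(self, event):
        if not self._dispatch(event):
            self.deferred.append(event)  # can't handle it yet: defer
        self._drain()  # a transition may have unblocked deferred events

    def _dispatch(self, event):
        """Try to handle the event in the current state; return success."""
        if self.state == "uninitialized" and event == "init":
            self.state = "ready"
            self.log.append("init")
            return True
        if self.state == "ready" and event == "read":
            self.log.append("read")
            return True
        return False

    def _drain(self):
        """Keep retrying deferred events until no further progress is made."""
        progressed = True
        while progressed and self.deferred:
            progressed = False
            for _ in range(len(self.deferred)):
                ev = self.deferred.popleft()
                if self._dispatch(ev):
                    progressed = True
                else:
                    self.deferred.append(ev)  # still not handleable: keep waiting

m = DeviceMachine()
for ev in ["read", "init", "read"]:  # "read" arrives before "init"
    m.handle(ev)
print(m.log)  # prints ['init', 'read', 'read']: the early "read" was deferred
```

The point of the abstraction is that the programmer declares which events each state accepts, and the runtime—not hand-rolled bookkeeping like the deque above—guarantees that out-of-order arrivals are held until their proper place in the sequence.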
It’s hard to see how distributed IoT-based microservices, such as those built on the IOTA distributed ledger, can be hardened without this sort of safe event-driven programming language. And it’s unclear how IoT’s RESTful APIs can offer predictable real-time response and programming efficiency without the ability to enforce distributed state transitions among distributed services interoperating over a loosely coupled event-driven fabric.
Of course, there will always be bugs galore in a complex distributed system such as the IoT. However, it will be easier to isolate bugs in the IoT’s underlying runtime platforms—such as the smart sensors that power the industrial internet—if developers can leverage a programming environment that eliminates the glitches that may otherwise spring from IoT’s asynchronous integration fabric.
This story, “How to write event-driven IoT microservices that don’t break,” was originally published by InfoWorld.