Developing reliable sensors for fog computing

Article by: Patrick Mannion

Trends towards a more distributed approach to data processing and storage demand smarter sensors and new wireless sensor network architectures.

As the Internet of Things (IoT) evolves, decentralised, distributed-intelligence concepts such as “fog computing” are taking hold to address the need for lower latencies, improved security, lower power consumption, and higher reliability.

While the terminology is new, the basic premise of fog computing is classic decentralisation: some processing and storage functions are better performed locally than by sending data all the way from the sensor to the cloud and back again to an actuator. This cuts latency and reduces the amount of data that needs to travel back and forth. Lower latency improves the user experience in consumer applications, but in industrial applications it can shorten response times for critical system functions, saving money or even lives.

This distributed approach improves security by reducing the amount of data that needs to be transmitted from the edge to the cloud, which also reduces power consumption and data network loading to enhance overall quality of service (QoS). Fog computing also strives to enable local resource pooling to make the most of what’s available at a given location, and adds data analytics, one of the fundamental elements of the IoT, to the mix.
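To make the data-reduction idea concrete, here is a minimal sketch in C of the kind of decision loop a fog (edge) node might run: react locally when a reading is urgent, aggregate the rest, and forward only summaries to the cloud. The `sensor_read()`, `actuate_locally()`, and `send_to_cloud()` hooks, along with the window size and alarm limit, are hypothetical placeholders rather than any specific platform's API.

```c
#include <stdint.h>

/* Hypothetical platform hooks -- stand-ins for real sensor,
 * actuator, and uplink drivers on a fog/edge node. */
extern float sensor_read(void);                   /* one sample, e.g. degrees C        */
extern void  actuate_locally(float value);        /* low-latency local control action  */
extern void  send_to_cloud(float avg, float max); /* batched summary over the uplink   */

#define WINDOW_SIZE  64      /* samples aggregated before one uplink message */
#define ALARM_LIMIT  85.0f   /* act immediately if a sample exceeds this     */

void edge_node_loop(void)
{
    float sum = 0.0f, max = -1e9f;
    uint16_t n = 0;

    for (;;) {
        float sample = sensor_read();

        /* Local, low-latency path: react without a cloud round trip. */
        if (sample > ALARM_LIMIT)
            actuate_locally(sample);

        /* Aggregate locally; send only a summary upstream. */
        sum += sample;
        if (sample > max)
            max = sample;

        if (++n == WINDOW_SIZE) {
            send_to_cloud(sum / WINDOW_SIZE, max);  /* one message per 64 samples */
            sum = 0.0f;
            max = -1e9f;
            n = 0;
        }
    }
}
```

The point of the sketch is simply that most raw samples never leave the node; only exceptions and periodic summaries consume uplink bandwidth and cloud round trips.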

The nuances of fog computing, in terms of the network architectures and protocols required to fully exploit its potential, have prompted groups such as the OpenFog Consortium to form and define how it should best be done.

Members of the consortium to date include Cisco, Intel, ARM, Dell, Microsoft, Toshiba, RTI, and Princeton University, and it is eager to harmonise with other groups including the Industrial Internet Consortium (IIC), ETSI-MEC (Mobile Edge Computing), Open Connectivity Foundation (OCF), and OpenNFV.

As fog computing rolls in, the onus is on designers to figure out how much intelligence should sit at each node of the system for optimal performance. This implies that sensors will need to become more intelligent, with some level of built-in processing, storage, and communications capability. That shift has been coming for some time, but it now seems to be reaching a tipping point, becoming a necessary option from sensor providers, albeit with the usual cost, space, power, and footprint trade-offs.

While MEMS sensors have been a boon to designers with regard to small size and functional integration, ongoing integration to meet the smart-sensor needs of fog computing naturally raises the question of reliability. To date, the integration of digital functions on MEMS sensors has enabled bi-directional communication, self-test, and the implementation of compensation algorithms.

__Figure 1:__ *Increasing levels of digital integration, from basic analogue signal conditioning (A) through to on-board MCUs (B), local memory, and ADCs (C), have helped make MEMS sensors more capable of implementing self-test and active compensation routines, but real-time reliability monitoring remains elusive.*
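To illustrate the kind of compensation routine this integration makes possible, the following is a minimal sketch in C of first-order offset, gain, and temperature compensation applied to a raw ADC reading inside a smart sensor. The calibration constants and the die-temperature input are hypothetical illustrations rather than values from any particular part; real devices store coefficients determined at production test.

```c
#include <stdint.h>

/* Hypothetical calibration constants, normally written to on-chip
 * non-volatile memory during production test. */
#define CAL_OFFSET_COUNTS  12       /* zero-input offset, in ADC counts        */
#define CAL_GAIN           1.02f    /* gain correction factor                  */
#define CAL_TEMPCO         -0.004f  /* drift per degree C, in ADC counts       */
#define CAL_TEMP_REF_C     25.0f    /* temperature at which calibration ran    */

/* Convert a raw 10-bit ADC reading into a compensated value, using the
 * die-temperature reading that an integrated sensor can also provide. */
float compensate_reading(uint16_t raw_counts, float die_temp_c)
{
    float corrected = (float)raw_counts - CAL_OFFSET_COUNTS;   /* offset  */
    corrected *= CAL_GAIN;                                     /* gain    */
    corrected -= CAL_TEMPCO * (die_temp_c - CAL_TEMP_REF_C);   /* tempco  */
    return corrected;
}
```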

Such features are critical if MEMS sensors are to be trusted long term for monitoring electrical energy distribution, medical system functions, and industrial system status and processes. These applications are critical enough that researchers at the Universidad Veracruzana (Xalapa, Mexico) have looked into alternatives to reliability assurance methodologies that depend on generic failure rates for reliability prediction. As the researchers point out, such methods lack realism in predicting reliability across varied operational environments, from the Arctic to the tropics.

As we tumble headfirst toward fog computing with ubiquitous smart sensors, ensuring the reliability of the data coming from those sensors becomes increasingly important. At the same time, the rollout of fog computing principles means that the infrastructure for better communication between nodes is being put in place. Together, these two factors make the university's development of a real-time sensor failure analysis methodology all the more interesting and applicable to the new sensing and networking paradigm.

In the proposed design, the team used a low-power 8-bit PIC18F4550 MCU, a 10-bit analogue-to-digital converter (ADC), a Texas Instruments INA333 instrumentation amplifier, and an HC-05 Bluetooth module to monitor sensor health (mean time between failures, or MTBF) and communicate it to a smartphone. Failures could be something as simple as a dropped communications link.
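A rough sketch in C of that signal chain is shown below: sample the amplified sensor signal with the 10-bit ADC and push a reading out over a UART, which is how an HC-05 module is typically driven (it forwards serial data transparently to a paired phone). The `adc_read_10bit()` and `uart_putc()` functions are hypothetical hardware-abstraction hooks, not the actual firmware from the Universidad Veracruzana team.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hardware-abstraction hooks for the MCU's peripherals. */
extern uint16_t adc_read_10bit(uint8_t channel);  /* 0..1023 from the INA333 output */
extern void     uart_putc(char c);                /* UART wired to the HC-05 module */

/* Send a text line over the UART; the HC-05 relays it over Bluetooth
 * to the paired smartphone. */
static void bt_send_line(const char *s)
{
    while (*s)
        uart_putc(*s++);
    uart_putc('\r');
    uart_putc('\n');
}

void report_sensor_sample(void)
{
    char line[32];
    uint16_t counts = adc_read_10bit(0);           /* channel 0: amplified sensor */

    snprintf(line, sizeof line, "S0=%u", (unsigned)counts);  /* key=value framing */
    bt_send_line(line);
}
```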

The key here is that the MTBF for each sensor is stored locally in non-volatile memory, and as the sensor ages, its reliability is continually recalculated and updated.
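One way to picture that bookkeeping is the sketch below: a running MTBF estimate kept in non-volatile memory and updated every time operating hours accumulate or a failure (for example, a dropped communications link) is logged. The `nvm_read_u32()`/`nvm_write_u32()` calls are hypothetical stand-ins for whatever EEPROM or flash routines the target MCU provides, and the estimator shown, operating hours divided by observed failures, is a simplification rather than the team's exact method.

```c
#include <stdint.h>

/* Hypothetical non-volatile storage hooks (EEPROM/flash on the MCU). */
extern uint32_t nvm_read_u32(uint16_t addr);
extern void     nvm_write_u32(uint16_t addr, uint32_t value);

#define ADDR_OP_HOURS  0x00   /* accumulated operating hours */
#define ADDR_FAILURES  0x04   /* observed failure count      */

/* Call once per hour of operation (e.g. from a timer tick). */
void mtbf_log_operating_hour(void)
{
    nvm_write_u32(ADDR_OP_HOURS, nvm_read_u32(ADDR_OP_HOURS) + 1);
}

/* Call whenever a failure is detected, e.g. a dropped comms link. */
void mtbf_log_failure(void)
{
    nvm_write_u32(ADDR_FAILURES, nvm_read_u32(ADDR_FAILURES) + 1);
}

/* Current MTBF estimate in hours: operating time / failures observed.
 * Returns 0 if no failure has been seen yet (estimate undefined). */
uint32_t mtbf_hours(void)
{
    uint32_t failures = nvm_read_u32(ADDR_FAILURES);
    return failures ? nvm_read_u32(ADDR_OP_HOURS) / failures : 0;
}
```

Because the counters survive power cycles, the reliability figure reported over Bluetooth reflects the sensor's whole service life, not just the current session.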

Adding more smarts to sensors is a good start, but as we become more reliant on those sensors, improved awareness of sensor (and system) status gives us the opportunity to ensure that the data feeding our fog computing is itself reliable.
