In the first part of IoT Explained - How Does an IoT System Actually Work?, I explained that there are four major components that are involved in any given IoT system. Those components are Sensors/Devices, Connectivity, Data Processing, and User Interface.
Here’s a quick recap of how they work together:
An IoT system consists of sensors/devices that “talk” to the cloud through some kind of connectivity. Once the data gets to the cloud, software processes it and may decide to perform an action, such as sending an alert or automatically adjusting the sensors/devices, without any need for the user.
But if the user input is needed or if the user simply wants to check in on the system, a user interface allows them to do so. Any adjustments or actions that the user makes are then sent in the opposite direction through the system: from the user interface, to the cloud, and back to the sensors/devices to make some kind of change.
The Internet of Things is made up of connected devices, i.e. anything that has the capacity to transfer data over a network. So by definition, an IoT system needs some kind of connectivity, especially if it uses the cloud.
However, there are certain cases where the data processing or the interaction with the sensor/device through the user interface can take place without any data first being transferred over an external network.
One reason is latency. Latency refers to how long it takes for a packet of data to get from the start point to the end point. Although latency doesn’t matter in the vast majority of cases, for some IoT applications it is critical.
Imagine you’re in a self-driving car and suddenly somebody loses control of their car in front of you. Would you want to wait for the self-driving car to send data to the cloud, have that data processed, then have instructions for what to do sent back to the car? No! Those milliseconds could mean life or death.
Even if you’re the one driving the car, you want the user interface (i.e. the steering wheel) directly hooked up to the device (i.e. the car) rather than waiting for your input to be transmitted externally, processed, and then sent back.
Another reason is that sending lots of data can get really expensive. Some IoT applications collect a ton of data, but only a small fraction of it is actually important. Local algorithms can restrict what gets sent, thus lowering costs.
A good example is a security camera. Streaming video takes a lot of data, but the vast majority of the footage might be of an empty hallway.
Rather than send data over a network for it to be processed in the cloud, an alternative approach is to process the data on a gateway (what’s a gateway?) or on the sensor/device itself. This is called either fog computing or edge computing (because you’re bringing the cloud “closer to the ground” and the computing is taking place at the edges of the IoT system rather than the center).
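To make this concrete, here is a minimal sketch of the gateway pattern in Python. It assumes a hypothetical temperature sensor and a made-up “normal” range; the point is only that the decision about what is worth the network cost happens locally, before anything is sent to the cloud.

```python
# Hypothetical edge-gateway filter: forward only "interesting" sensor readings.
# The normal range and the readings themselves are illustrative assumptions,
# not from any real device.

NORMAL_RANGE = (18.0, 26.0)  # acceptable temperature band in degrees C


def should_forward(reading: float) -> bool:
    """Decide locally whether a reading is worth sending over the network."""
    low, high = NORMAL_RANGE
    return not (low <= reading <= high)


def process_batch(readings):
    """Return only the readings the gateway would send upstream to the cloud."""
    return [r for r in readings if should_forward(r)]
```

With readings like `[20.0, 30.5, 22.1]`, only the out-of-range `30.5` would be forwarded; the rest never touch the network.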
For the security camera, it could use machine vision to “watch” for anything abnormal and only then send that footage to the cloud.
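A real camera would run a proper machine-vision model, but the filtering idea can be sketched with simple frame differencing. In this illustrative Python snippet, frames are simplified to 2-D lists of grayscale pixel values, and the (made-up) threshold decides which frames changed enough from the previous one to be worth uploading.

```python
# Illustrative edge filter for a security camera: compare consecutive frames
# and flag the ones that differ enough to upload. Frames here are plain 2-D
# lists of grayscale values; a real system would use an image library and a
# trained model, but the "filter locally, upload rarely" logic is the same.

def frame_diff(prev, curr):
    """Mean absolute pixel difference between two same-sized frames."""
    total = count = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += abs(p - c)
            count += 1
    return total / count


def frames_to_upload(frames, threshold=10.0):
    """Return indices of frames that changed enough from the previous frame."""
    return [
        i for i in range(1, len(frames))
        if frame_diff(frames[i - 1], frames[i]) > threshold
    ]
```

Hours of an empty hallway produce near-zero differences and stay on the device; only the frames where something moved are sent to the cloud.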
For the self-driving car, the data processing all takes place in the onboard computer which allows for faster decision-making.
Every IoT system combines the four components I discussed in Part 1: Sensors/Devices, Connectivity, Data Processing, and User Interface. However, as you’ve seen in this IoT Explained Part 2, a specific IoT system can combine these components in different ways. It all comes down to the specific situation that needs to be addressed.
Ultimately, IoT systems are meant to improve our everyday experiences and improve our efficiency in whatever way possible. And now you know how an IoT system actually works!
What type of use case are you building for? Whichever it is, we look forward to learning more about your needs.
Our team of experts is here to help!