Asset Tracking with Google Cloud Platform

High-value assets can often be misplaced or stolen. We review how Leverege uses GCP to create an asset tracking solution using IoT devices.

Michael Quinn

Asset tracking is a common use case for IoT solutions. When a company has high-value assets that can be misplaced or stolen, it only makes sense to attach relatively low-cost IoT devices to them to track their every move. In this article, we'll review a hypothetical IoT problem and how we at Leverege would use GCP to solve it.

The Problem

Imagine a fictitious bicycle rental company called Pedal Power in the picturesque beach community of Ocean City, USA. In the past, Gary (the owner of Pedal Power) has asked his customers to leave a driver's license with him at his boardwalk rental hut to ensure that they will return with his (expensive) bicycles. Most people return their bikes on time and pay the rental fee without incident, but the few times that Gary has been burned by renters who never returned have really put a dent in his bottom line. In addition, the town of Ocean City has decided to create a bike-free zone at the end of the boardwalk and will fine Gary any time one of his customers is caught in the no-bike zone. Gary warns his customers about this new law, but some still ride into the no-bike zone, and by the time he receives a fine in the mail, those customers are long gone.

The Solution

Hardware

Fed up with the status quo, Gary comes to Leverege for help. In consultation with Leverege, Gary considers several models of GPS-enabled tracking devices for his bikes. Based on ease of installation and network availability, he decides to outfit all of his bikes with a rechargeable, battery-powered tracker that uses cellular backhaul.

Ingestion

The first step in getting Gary's tracker data into GCP is ingestion. Leverege writes an ingestion server that runs on GCP's Kubernetes Engine, a scalable and cost-effective computing platform that lets Gary pay for only the computing power he needs today while leaving room to handle an extremely high volume of device messages if his business goes regional or national one day.

The ingestion service simply listens for device messages over a standard HTTP REST interface and ensures that only whitelisted devices have their data processed. Device messages are then unpacked and published to a default topic for processing using Google Cloud Pub/Sub. Pub/Sub is a messaging service that can handle extremely high volumes of messages and is built to be fault tolerant. If part of the cloud service that Leverege has created for processing and storing messages is temporarily unavailable, messages remain queued and won't be lost. Pub/Sub also allows multiple services to respond to events published to a single topic, which is extremely important when it comes to message routing.
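As a rough sketch, the whitelist-and-unpack step might look like the following. The device IDs and field names here are hypothetical, and the real service would publish the unpacked message to Pub/Sub rather than return it:

```python
import json
from typing import Optional

# Hypothetical whitelist of device IDs allowed to submit data.
WHITELIST = {"bike-001", "bike-002", "bike-003"}

def ingest(raw_body: bytes) -> Optional[dict]:
    """Validate and unpack one device message from the HTTP body.

    Returns the unpacked payload ready for the default Pub/Sub topic,
    or None if the device is not whitelisted.
    """
    message = json.loads(raw_body)
    if message.get("deviceId") not in WHITELIST:
        return None  # drop messages from unknown devices
    # Keep only the fields the downstream routes care about.
    return {
        "deviceId": message["deviceId"],
        "lat": message["lat"],
        "lon": message["lon"],
        "ts": message["ts"],
    }
```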

Message Routing

Each device type in an IoT system may have separate data-routing needs. Imagine a system with separately reporting temperature and pressure sensors monitoring some industrial process. We may want to store the data from both device types, but temperature data may have special routing needs that pressure data does not. Perhaps we need to check each reading from a temperature sensor to ensure that it isn't above a certain threshold and trigger an alert if it is. In that case, we will want to route the temperature data to processes separate from those handling pressure data. For this reason, we create predefined message routes per device type, each consisting of the names of Pub/Sub topics and any options that need to be passed alongside the data. Message routes can run in parallel or serially.
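A minimal sketch of such a route table, with hypothetical device types, topic names, and options:

```python
# Each device type maps to an ordered list of route steps: a Pub/Sub
# topic name plus any options passed alongside the data. Both device
# types store history, but only temperature data gets threshold alerts.
ROUTES = {
    "temperature": [
        {"topic": "store-history", "options": {}},
        {"topic": "threshold-alerts", "options": {"max": 90.0}},
    ],
    "pressure": [
        {"topic": "store-history", "options": {}},
    ],
}

def routes_for(device_type):
    """Look up the predefined message route for a device type."""
    return ROUTES.get(device_type, [])
```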

In the case of Gary's bike rental shop, we currently have only one device type so all data for this system will follow a single route.

Storage

The obvious thing to do at this point is store our data. We want a reliable, fast way to store the most recent reading from each of Gary's devices so that viewing the location of all of his outstanding rentals on a map is a breeze. For this, we choose Google's Firebase Realtime Database, a simple but powerful key-value store that is lightning fast. At any given time, the most recent state of Gary's devices will be stored in Firebase, giving us a live view of his bike locations. Firebase's listening capabilities also give us instant updates the second one of Gary's bikes changes position.
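The "latest state" pattern Firebase gives us can be sketched in a few lines, using an in-memory dictionary as a stand-in for the real database:

```python
# Stand-in for the Firebase latest-state store: one entry per device,
# keyed by device ID, always holding the most recent reading.
latest = {}

def update_state(message):
    """Overwrite a device's entry only if this reading is newer.

    Guards against out-of-order delivery: a stale message (older
    timestamp) never clobbers a fresher position.
    """
    device_id = message["deviceId"]
    current = latest.get(device_id)
    if current is None or message["ts"] > current["ts"]:
        latest[device_id] = message
```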

In addition, we want a long-term historical view of the data from each of Gary's devices so that we have an audit trail of where each bike was at any given time. For this we use Google BigQuery, a SQL-based big data platform. With BigQuery, we can store years' worth of data from Gary's sensors and query it in seconds.
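An audit-trail query against BigQuery might take roughly this shape. The table and column names below are illustrative, not an actual schema:

```python
# A hypothetical parameterized audit-trail query: every recorded
# position for one bike over a time window, oldest first.
AUDIT_QUERY = """
SELECT ts, lat, lon
FROM `pedal_power.bike_positions`
WHERE device_id = @device_id
  AND ts BETWEEN @start AND @end
ORDER BY ts
"""

def audit_params(device_id, start, end):
    """Bundle the named parameters for a parameterized query job.

    Using query parameters (rather than string interpolation) avoids
    SQL injection from device-supplied values.
    """
    return {"device_id": device_id, "start": start, "end": end}
```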

We create two simple data-writing services, deploy them to Kubernetes Engine, and route all of Gary's data to both so that it is written to Firebase and BigQuery as it arrives.

Further Processing

At this point, we've ingested sensor data and stored it. With a little more work on a web app, we have everything in place to view all of Gary's bikes on a map and know exactly where they are at any given time. This is great, but it's early August and Gary is busy renting bikes. He doesn't want to spend all of his time staring at a map hoping that his customers haven't ridden into the no-bike zone or absconded altogether with his equipment.

To solve these problems, we route Gary's data to a third destination: Google Cloud Functions. Cloud Functions is a simple, scalable, functions-as-a-service offering. It allows Gary to pay for only a few function invocations at his current scale while leaving open the possibility of millions of parallel invocations from thousands of devices. A Cloud Function can be triggered by a simple HTTP request or, as in this case, can listen to a Pub/Sub topic.
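A Pub/Sub-triggered Cloud Function written in Python (using the first-generation background-function signature) receives the message payload base64-encoded under event["data"]. A minimal handler sketch, with a hypothetical field name:

```python
import base64
import json

def handle_position(event, context):
    """Entry point for a Pub/Sub-triggered background function.

    Pub/Sub delivers the original message body base64-encoded in
    event["data"]; decode it before acting on the reading.
    """
    message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    # The real function would run geofence and speed checks here;
    # returning the device ID keeps this sketch observable.
    return message["deviceId"]
```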

The engineers at Leverege work with Gary to develop "geofences," areas on a map identified by their latitude and longitude boundaries. They create one geofence around the town's no-bike zone and a second, circular geofence with a 20-mile radius centered on the bike hut. They also write a Cloud Function that checks each device message to see whether the device's position falls inside the no-bike zone or outside the 20-mile perimeter, and immediately sends Gary text and email alerts so he can take timely action. Additionally, Gary has chosen a device that measures and transmits velocity, so he also receives alerts for bikes moving over a certain speed (perhaps because they've been placed inside a vehicle and driven away).
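The geofence and speed checks can be sketched as follows. The coordinates, radius, and speed threshold are all made up for illustration, and the no-bike zone is simplified to a latitude/longitude bounding box:

```python
import math

# Hypothetical bounding box for the town's no-bike zone.
NO_BIKE_ZONE = {"min_lat": 38.320, "max_lat": 38.325,
                "min_lon": -75.085, "max_lon": -75.080}
HUT = (38.336, -75.084)   # hypothetical bike-hut coordinates
MAX_RADIUS_MILES = 20.0
MAX_SPEED_MPH = 25.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def check_message(msg):
    """Return the list of alerts a single device message triggers."""
    alerts = []
    in_zone = (NO_BIKE_ZONE["min_lat"] <= msg["lat"] <= NO_BIKE_ZONE["max_lat"]
               and NO_BIKE_ZONE["min_lon"] <= msg["lon"] <= NO_BIKE_ZONE["max_lon"])
    if in_zone:
        alerts.append("no-bike-zone")
    if haversine_miles(msg["lat"], msg["lon"], *HUT) > MAX_RADIUS_MILES:
        alerts.append("outside-perimeter")
    if msg.get("speed_mph", 0) > MAX_SPEED_MPH:
        alerts.append("over-speed")
    return alerts
```

In production, each alert name would map to the text and email notifications the Cloud Function sends Gary.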

Conclusion

Using Google Cloud Platform, Leverege was able to create a rock-solid, scalable solution to meet Gary's needs. Because the solution runs on GCP, it automatically gets all of Google's latest security and performance updates and has excellent uptime. Gary can now be sure that he will no longer be stuck paying the bill when one of his customers wanders into the no-bike zone, and he can alert the local authorities as soon as he suspects that one of his bikes has gone missing. He has already begun to consider a hardware upgrade that would let him send audio messages to all of his bikes when closing time nears. He's also working with Leverege to develop a machine learning model using Google Cloud AutoML to estimate how much longer a customer will have a bike out based on their riding behavior. This will help Gary efficiently determine how many bikes he needs in his inventory and give estimates to customers who are waiting for a bike.


Michael Quinn

Director, Cloud Engineering

A former HR professional, Michael is a self-taught software engineer who loves applying computer science concepts to unique business problems. Aside from software, he loves running, his dog Emo, and his wife Zoe. When he isn't writing code, you can find him performing improvised comedy.
