Ultra-wideband (UWB) is a wireless technology that can be used for data transmission and positioning. Unlike other common wireless technologies like BLE or Wi-Fi, UWB transmits at low power spread over a wide range of frequencies.
You might be familiar with newer Wi-Fi routers transmitting on the less crowded 5 GHz band. What that actually means is that the FCC has allowed Wi-Fi networks to operate in a newly allocated set of frequencies between 5.17 and 5.835 GHz. Of course, your router uses just one of the many channels which subdivide the 5 GHz band.
By comparison, the allowable FCC frequency range for UWB is 3.1-10.6 GHz. Officially, the FCC and ITU-R define UWB as wireless signal transmission with a bandwidth that exceeds 500 MHz or 20% of the center (arithmetic mean) frequency.
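To make the definition concrete, here's a small Python sketch (my own illustrative helper, not an official tool) that applies the FCC/ITU-R test to a frequency range:

```python
def is_uwb(f_low_hz, f_high_hz):
    """FCC/ITU-R definition: a signal is UWB if its bandwidth
    exceeds 500 MHz or 20% of its center frequency."""
    bandwidth = f_high_hz - f_low_hz
    center = (f_low_hz + f_high_hz) / 2  # arithmetic mean of band edges
    return bandwidth > 500e6 or bandwidth > 0.20 * center

# A channel spanning 3.1-4.1 GHz qualifies; a 20 MHz Wi-Fi channel does not.
print(is_uwb(3.1e9, 4.1e9))    # True
print(is_uwb(5.17e9, 5.19e9))  # False
```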
UWB competes directly with BLE for low-energy, close-range wireless communication. However, because of the science behind wideband versus narrowband technologies, UWB promises far more accurate positioning and higher data rates than BLE. So how does this science work, and why hasn't UWB replaced BLE?
To understand the benefits of UWB, we must first take a step back to understand the basics of signal processing.
When you see that trippy sound wave visualization in your media player of choice, you are looking at a representation of amplitude (or volume) over time--that is, in the time domain.
In the real world, sound (and electromagnetic) waves are usually composites of many waves, each of a different frequency, like notes in a chord. When engineers talk about transforming this wave to the frequency domain, they mean graphing out the distribution of frequencies which make up a wave. For example, a chord of two pure musical tones, A and C, would look like a spike at the A frequency and a spike at the C frequency. The mathematical procedure that translates a wave between these two domains is called the Fourier Transform.
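Here's a small Python sketch of that chord example, using NumPy's FFT (a standard implementation of the Fourier transform). The sample rate and the exact note frequencies are just illustrative choices; the point is that a two-tone wave shows up as two spikes in the frequency domain:

```python
import numpy as np

fs = 4096                # sample rate (Hz), chosen so each FFT bin is 1 Hz
t = np.arange(fs) / fs   # one second of samples
# A chord of two pure tones: A4 (440 Hz) and roughly C5 (523 Hz)
wave = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 523 * t)

spectrum = np.abs(np.fft.rfft(wave))        # magnitude in the frequency domain
freqs = np.fft.rfftfreq(len(wave), d=1/fs)  # frequency of each bin

# The two largest spikes sit exactly at the two note frequencies
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # [440.0, 523.0]
```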
So now that we have some familiarity with time and frequency domains, I’m going to introduce the tradeoffs between time duration and frequency ranges.
If someone were to sing a few notes of a song, you might be able to narrow down the list of possible songs they were singing, but your confidence would still be low. The longer they sing, the more confident you would be in determining which song they are singing. If they were to sing only one note, the list of possible songs would be much larger.
The time and frequency domains work the same way. If we sampled a wave over a short duration of time, we would not be very confident about which frequencies composed that wave, so our distribution of frequencies would be very wide. The Fourier transform is invertible--that is, reversible--so the logic holds in both directions. Therefore, if we have a large range of frequencies (like an ultra-wide band), we can send that waveform as a very sharp spike in time.
Just to recap: A wave condensed in a short interval of time correlates to a wide frequency range. And a wave spread out over a long period of time correlates to a concentrated range of frequencies.
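To make that tradeoff concrete, here's an illustrative NumPy sketch (my own, with arbitrary parameters): it estimates the RMS spectral width of two Gaussian pulses and shows that shrinking a pulse's duration by 10x widens its spectrum by roughly 10x:

```python
import numpy as np

def rms_bandwidth(sigma_t, fs=1e6, n=2**16):
    """Return the RMS spectral width (Hz) of a Gaussian pulse
    whose time-domain standard deviation is sigma_t seconds."""
    t = (np.arange(n) - n / 2) / fs               # time axis centered at zero
    pulse = np.exp(-t**2 / (2 * sigma_t**2))      # Gaussian pulse
    power = np.abs(np.fft.rfft(pulse))**2         # power spectrum
    f = np.fft.rfftfreq(n, d=1/fs)
    # RMS frequency weighted by spectral power
    return np.sqrt(np.sum(f**2 * power) / np.sum(power))

wide = rms_bandwidth(1e-3)     # long, smooth pulse -> narrow spectrum
narrow = rms_bandwidth(1e-4)   # pulse 10x shorter in time -> wide spectrum
print(narrow / wide)           # roughly 10
```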
So, why is it useful to be able to send sharp pulses of waves? Well, let’s go back to one of the main purposes of UWB: to accurately triangulate positions through precise distance measurements.
One of the earliest examples of determining distance with waves is radar--and in fact, dolphins and bats were using echolocation long before radar was invented. If you have any familiarity with animal echolocation, you know it sounds like repeated intervals of chirps or clicks. If your signals are sharp and short, you can more accurately measure the time of flight between the original signal and its echo off an object, and therefore more accurately deduce the object's distance from you.
This is how UWB technology is able to determine an object's position to within a few centimeters. By contrast, Bluetooth uses signal strength as a proxy for distance, which is only accurate to a few meters.
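A minimal sketch of the time-of-flight idea, assuming a simple two-way ranging scheme (real UWB ranging protocols are considerably more involved, with handshakes to cancel clock offsets):

```python
C = 299_792_458  # speed of light in m/s

def distance_from_tof(round_trip_s):
    """Two-way ranging: the signal travels out and back,
    so distance = c * (round-trip time) / 2."""
    return C * round_trip_s / 2

# An object ~5 m away produces a round trip of ~33.4 ns
print(distance_from_tof(33.4e-9))  # about 5 m

# A 2 ns timestamp error shifts the estimate by ~30 cm, which is why
# sharp UWB pulses (and the precise timestamps they allow) give
# centimeter-level accuracy while coarser timing gives meters.
print(distance_from_tof(2e-9))  # about 0.3 m of error per 2 ns
```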
Claude Shannon, the father of information theory, developed a groundbreaking theorem that gives the maximum data rate capacity of a communication channel: C = B log2(1 + S/N). This maximum increases linearly with the bandwidth B but only logarithmically with the signal-to-noise ratio S/N.
Simply put, widening the frequency range raises the maximum data rate faster than improving the signal-to-noise ratio does. This allows UWB systems to achieve high data rates by the very nature of the technology, rather than through complicated algorithms that boost the signal-to-noise ratio. In the next few paragraphs I'll describe an analogy to help you understand the Shannon capacity theorem intuitively.
Let’s say you are a prisoner trying to send a message to a fellow prisoner in the neighboring cell. You cannot see this prisoner or hear their voice. One way to communicate is by knocking on the wall dividing your cells. You could assign different sequences of knocks to different letters and very slowly send a message. If you were able to change the pitch of the knock on the wall, your knock sequences could be shorter, and the number of messages you can send in a given amount of time is higher. This is like increasing the usable bandwidth of a signal.
Now, let’s add some background noise to the scenario. The louder the background noise, the harder you would need to knock to get your message across to your neighbor. If you could not knock any louder, you might have to send your message again to make sure your neighbor receives it. This is how the signal-to-noise component of the Shannon capacity theorem works. The louder your signal is compared to the background noise, the better your data rate, because you wouldn't need to waste as much of your channel correcting errors with duplicate messages.
So now you are having deep conversations about life with your fellow prisoner, but a few other prisoners nearby have caught on and started sending their own messages via knocking. If they used the same pitches as the ones you used in your messages, they would overlap with and interfere with your communication. Unless there's a systematic way of allocating who can use which frequencies, everyone would step all over each other. This is a huge reason why the FCC regulates frequency bands and why UWB benefits from operating in less congested frequencies.
The analogy above explains how the theoretical maximum data rate capacity (or Shannon limit) is a direct function of bandwidth and signal to noise ratio. That is, a higher bandwidth means a higher capacity, and a higher signal to noise ratio also means a higher capacity, though it’s more effective to increase bandwidth. In real-life applications we would never reach this limit, but the higher the max capacity is, the higher our practical data rates are.
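As a rough illustration of why bandwidth dominates, here's the Shannon capacity formula in Python with made-up numbers: a wide UWB-style channel at a very poor signal-to-noise ratio still out-caps a narrow BLE-style channel at a much better one.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical figures, chosen only to show the shape of the tradeoff:
uwb = shannon_capacity_bps(500e6, 0.1)   # wide band, signal well below noise
ble = shannon_capacity_bps(2e6, 100.0)   # narrow band, strong clean signal

print(uwb / 1e6, "Mbps vs", ble / 1e6, "Mbps")
# The wideband channel wins despite a 1000x worse signal-to-noise ratio.
```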
Although UWB is a promising technology in theory, there are several practical reasons that make it less popular than BLE for IoT.
Antennas that can operate across such a wide band tend to be complicated and expensive. Multiply that by the number of beacons you need to cover an indoor area, and the cost can become prohibitive for a company's use case.
Because of its relative obscurity, there are also fewer hardware vendors offering UWB beacons and scanners. Additionally, many implementations of UWB systems failed to live up to the technology's promised data rates, leading to the collapse of several UWB startups in the late 2000s.
While the technology still lives on in applications like proximity-sensing car door locks, Apple's 2019 inclusion of UWB chips in its iPhone 11 and iPhone 11 Pro has renewed optimism among UWB enthusiasts. If UWB's failures so far have been engineering problems, there is plenty of untapped potential just waiting for the right company to make a breakthrough innovation.