Why Waze Needs More Development for Security

by Stephen Bryen

 

Waze is a very popular app that provides driving directions free of charge to its users.  Originally developed in Israel, where it was a huge hit, it soon spread globally.  The company was later sold to Google.  In some user segments, especially taxi cabs and services such as Uber, Waze has become ubiquitous.  Its best feature is that it offers alternative routes when you run into traffic congestion or some other obstacle.

 

GPS has come a long way as a driving aid since the early 1990s, when families urgently shipped GPS devices to U.S. military personnel trying to cross desert areas in Kuwait and Saudi Arabia during the first Iraq war.  The Army then had few GPS radios, and driving in white-out conditions carried a lot of risk.  One wrong turn and a convoy could get stranded, or vehicles could get separated.  GPS helped fix that problem and even helped save lives.

 

Early GPS units depended on maps stored on the device.  It was important to update those maps regularly, or you would discover the hard way that roadways had changed.  I remember, during the Big Dig in Boston, being directed by my rental car's GPS to a complete dead end, since the original roadway had been removed.  I looked out, facing the river, and I was lost.

 

Nowadays GPS apps on smartphones are connected to the Internet, and maps are supposedly updated regularly, although many still have significant glitches.  But what Waze revolutionized was crowdsourcing of traffic conditions.  For the first time, fresh, timely information is provided to drivers, and drivers themselves can report roadway problems, advising other drivers instantly.
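Waze's actual data pipeline is proprietary, but the general idea behind crowdsourced traffic sensing can be illustrated with a minimal sketch: phones report their speed on a road segment, the reports are pooled, and a segment whose typical reported speed falls well below free flow is flagged as congested.  The segment names, baseline speeds, and threshold below are invented for illustration only.

```python
# Minimal sketch (not Waze's actual pipeline) of turning crowdsourced speed
# reports into a per-segment congestion estimate.
from collections import defaultdict
from statistics import median

# Each report: (road_segment_id, reported_speed_kmh) sent by a driver's phone.
reports = [
    ("segment-42", 12.0),
    ("segment-42", 15.5),
    ("segment-42", 11.0),
    ("segment-7", 88.0),
]

# Assumed free-flow speeds for each segment (hypothetical numbers).
FREE_FLOW_KMH = {"segment-42": 60.0, "segment-7": 90.0}

def congestion_by_segment(reports):
    """Group reports by segment; flag segments whose median speed is under half of free flow."""
    by_segment = defaultdict(list)
    for segment, speed in reports:
        by_segment[segment].append(speed)
    flags = {}
    for segment, speeds in by_segment.items():
        med = median(speeds)
        flags[segment] = {"median_kmh": med, "congested": med < 0.5 * FREE_FLOW_KMH[segment]}
    return flags

print(congestion_by_segment(reports))
# segment-42 is flagged as congested; segment-7 is not.
```

A median rather than a mean is used here only to keep a single stray report from dominating; as the spoofing experiment described further down shows, that kind of robustness is no defense against coordinated fake reports.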

 

Waze is a great development, but is it safe?  The question of safety has three dimensions: (1) Is Waze reliable for driving?  (2) Does Waze have any particular security vulnerabilities?  And (3) can Waze serve as a platform for self-driving cars?

 

There are questions about Waze as a reliable tool for driving.  While the overall impression of Waze is quite positive, not everyone is celebrating the tool as perfect.  The Burlington Free Press reported a recent case in January involving a Jeep that ended up in a cold, semi-frozen lake because Waze gave the wrong instructions.  When another vehicle using Waze followed the same instructions, it too was led to the lake (but did not drive in, because its drivers were deliberately testing Waze's instructions).  When asked about what happened, Google did not offer any explanation.  But the truth is that Waze, like other GPS products, depends on government-supplied maps.  These are not always right, and this could very well be a case where a mapping error crept into Waze (and probably exists in other GPS products as well, although this remains to be proven).

 

A related problem is that Waze's instructions for avoiding traffic congestion can push a lot of traffic through normally peaceful and quiet neighborhoods.  In Israel, some villages exposed to this kind of traffic have resorted to putting up signs blocking the entrances to their roadways.  Such signs are not exactly legal in Israel (and probably not in other countries either), but the point stands: Waze can generate as many problems as it ostensibly solves, and it can cause trouble for communities that are not prepared to handle large volumes of traffic.  Consider a diversion that passes schools, hospitals, or pedestrian crossings.

 

In short, Waze has some reliability issues and can generate unanticipated social problems.

 

But a bigger issue arises over the question of whether Waze has security vulnerabilities.

 

At the end of March 2014, two Israeli Technion students, Shir Yadid and Meital Ben-Sinai, set out to spoof Waze.  They set up accounts for thousands of fake Waze users and fed false information through an app they developed.  The idea was to generate a fake traffic jam and send Waze users elsewhere.  The trick worked and the students were successful (not earning themselves a place of respect in the Heavenly Drivers' Club of Israel).  The bottom line is that Waze is commercial software built on crowdsourced data, and it can be unreliable or be made unreliable.
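The students' actual tooling has not been published, so the sketch below is only a toy illustration of the underlying weakness they exploited: if report aggregation essentially trusts volume, a swarm of scripted accounts can outvote real drivers.  The speeds, threshold, and account counts are invented.

```python
# Toy illustration (hypothetical numbers) of a Sybil-style attack on crowdsourced
# traffic data: a few genuine reports are outvoted by scripted "stuck in traffic"
# reports, flipping the congestion flag for a segment that is actually flowing freely.
from statistics import median

FREE_FLOW_KMH = 60.0  # assumed free-flow speed for the segment

def looks_congested(speed_reports_kmh):
    """Flag the segment as congested if the median reported speed is under half of free flow."""
    return median(speed_reports_kmh) < 0.5 * FREE_FLOW_KMH

genuine = [55.0, 58.0, 61.0]      # real drivers moving at normal speed
fake = [3.0] * 50                 # fifty fake accounts all reporting a crawl

print(looks_congested(genuine))          # False: traffic is actually fine
print(looks_congested(genuine + fake))   # True: the fake reports dominate the median
```

Possible defenses, such as reputation scoring or plausibility checks on reported positions, raise the bar for this kind of attack but do not eliminate it.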

 

Photo (l to r): Shir Yadid and Meital Ben-Sinai, who developed the program for an academic project, with Nimrod Partush, a doctoral student who came up with the idea while he was stuck in traffic with his adviser, Prof. Eran Yahav (far right).

This raises integrity questions in situations where Waze might be used for emergency services, for evacuations, or for military purposes.  While these uses are far from the top of the list in terms of Waze's normal customer base, security is a consideration.  Smart and persistent hackers can cause trouble.

 

A serious case in point is what happened in Israel when an Israeli army driver and a squad commander entered the Qalandia Palestinian camp by mistake while using Waze. People in the camp attacked the Israeli military vehicle with firearms and Molotov cocktails. The two soldiers, for unknown reasons, split up. Israeli security forces were sent in, there was serious violence, and of the ten-member rescue squad made up of five soldiers and five members of the Border Police, one was moderately wounded. One armed Palestinian was killed. The first of the two missing soldiers was picked up almost immediately; the other was found after an hour-long search.
 
The two soldiers used Waze for directions and for traffic updates. It is an open question whether they were explicitly authorized to do so, but it is clear they were not explicitly forbidden from doing so. Waze has features that can warn about certain "no go" areas, and in addition Waze can be configured so that no route through such an area is offered at all. According to news reports and Waze personnel, this special feature was not enabled on the phone the soldiers used.
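Waze's internal implementation of this feature is not public, but conceptually it amounts to route planning that refuses to use road segments inside an excluded zone.  The toy road graph, place names, and travel times below are invented; the sketch simply shows a shortest-path search that skips nodes marked as no-go, so the "fastest" route through a dangerous area is never offered.

```python
# Hedged sketch of a "no go" routing filter: shortest-path search over a tiny,
# invented road graph, with an optional set of nodes the route must never enter.
import heapq

# Road graph: node -> list of (neighbor, travel time in minutes).
ROADS = {
    "base":        [("junction", 5), ("camp_edge", 3)],
    "junction":    [("destination", 12)],
    "camp_edge":   [("inside_camp", 2)],
    "inside_camp": [("destination", 4)],
    "destination": [],
}

NO_GO_NODES = {"inside_camp"}  # zone the router must never enter

def shortest_route(start, goal, avoid=frozenset()):
    """Dijkstra's shortest path, skipping any node in the avoid set."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in ROADS[node]:
            if nxt not in avoid and nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return None  # no permissible route exists

print(shortest_route("base", "destination"))                     # fastest route cuts through the camp
print(shortest_route("base", "destination", avoid=NO_GO_NODES))  # slower route that respects the no-go zone
```

The lesson of the incident is not that such a filter is hard to build; it is that safety depends on the filter actually being enabled and on the underlying map data being right.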
 
A related source of complaint is that Waze often shows sensitive sites such as police stations and security checkpoints.  At least in Israel this is a no-no, and Israel has requested that such sites be removed from Waze.  But with crowdsourcing this is not so easy to achieve.

 

One cannot rule out that Waze can not only be spoofed using fake users, but that a smart adversary or hacker could also broadcast fake maps and mislead drivers, creating traffic foul-ups that could, for example, serve terrorists.  The idea is to make it impossible for emergency vehicles or additional police to get through because a traffic mess has been artificially created.  In a sense this has already happened with airborne drones, as the US learned when the Iranians got control of one of America's most sophisticated drones, the stealthy RQ-170, landed it on their territory, and subsequently exploited it.

 

All of this leads to the question of self-driving cars.  Like any other vehicle on the road, a self-driving car has to navigate through myriad hazards.  No one wants to get into a self-driving taxi or bus and be stuck for hours in traffic.  A passenger may have no option and could actually be locked in the vehicle for his or her own safety (another unanswered issue with self-driving vehicles, actually).  So Waze, or a Waze-like product, would make sense for redirecting a self-driving platform around an accident ahead or some other congestion, such as a fire or a police emergency.  That is fine if Waze cannot practically be spoofed.  But that is not the case today.

 

Waze is an exciting and useful product, but it makes sense for Google to think seriously about making it more secure and less error-prone, especially as Google is strongly engaged in self-driving vehicle development.  Similarly, the military should be wary of the product (as should emergency responders) if they begin to see hacking and intrusions happening.