Will we ever have cars that truly drive themselves?


      Rarely does a technology emerge that experts claim can change, quite literally, everything. The smartphone was one. Self-driving cars are being heralded as the next.

      Reaching level five in autonomous technology, the point at which vehicles can operate without any human intervention at all, will not just transform the morning commute. Because they can travel constantly, self-driving cars will have no need for parking spaces, meaning that huge swaths of land currently occupied by street parking or parkades can be put to other uses: think public parks, gardens, or real estate. Almost no one will own their own car, and the few who do will use it to earn money, renting out their vehicle while they’re not using it. Commuters will be able to work during their journeys rather than steer, and traffic jams will be a distant memory as far fewer cars occupy the roads.

      The technology has come a long way since Waymo, a company operating under Google’s parent business, Alphabet, first started its autonomous-vehicle project in 2009. Having recently hit the milestone of 10 million miles driven on public roads in America, the company has racked up far more experience than its chief competitors: Tesla, Uber, and nearly every major automaker. Last month it became the first to offer a commercial ride service, though the experience still requires a Waymo employee in the driver’s seat to take over the controls if necessary.

      And it is still necessary. Autonomous vehicle technology might be good, but it’s not good enough. Two high-profile crashes last year, one involving a self-driving Uber Volvo XC90 and the other a Tesla Model X, resulted in fatalities. While some ambiguity exists around the driver’s culpability in the Tesla collision, in which the car drove itself into a concrete median while in Autopilot mode, the Uber crash is more clear cut. The vehicle struck and killed a woman named Elaine Herzberg, who was pushing a bicycle across the road in Tempe, Arizona. While it was initially suggested that the car had not recognized Herzberg as a hazard, it was later determined that it had; but because the software was identifying too many on-road threats, its automatic braking had been disabled to avoid stop-start driving.

      The question of when, if ever, autonomous vehicles will truly drive themselves comes down to the technology they use to operate. The primary way the cars understand the road is by “seeing” important cues through cameras and interpreting them with artificial intelligence, a process called computer vision. The machine first converts the images it sees into shades of colour and assigns each shade a number. Then it identifies shades that are similar and uses them to distinguish the foreground from the background, a technique that finds the edges of objects. Next, it determines where the corners of the object are, and notes any differences in texture. Finally, it compares the image against the huge database of images it has seen before and makes a guess as to what the object is.
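
      To make that description a little more concrete, here is a simplified, illustrative sketch of the same steps written in Python with the open-source OpenCV library. The file name, thresholds, and parameters are placeholders chosen for this example, not code from any carmaker’s system.

# Illustrative only: a toy version of the computer-vision steps described above.
import cv2

image = cv2.imread("road.jpg")  # placeholder path: any photo of a road will do
if image is None:
    raise SystemExit("road.jpg not found; point cv2.imread at any road photo")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # reduce the colours to numbered shades of grey
gray = cv2.GaussianBlur(gray, (5, 5), 0)         # smooth out noise before looking for edges

# Find edges (lane lines, curbs, the outline of a road sign) and corners.
edges = cv2.Canny(gray, threshold1=50, threshold2=150)
corners = cv2.cornerHarris(gray.astype("float32"), blockSize=2, ksize=3, k=0.04)

print("edge pixels:", int((edges > 0).sum()))
print("strong corners:", int((corners > 0.01 * corners.max()).sum()))

# In a real system, the regions bounded by those edges and corners would then be
# compared, usually by a neural network, against millions of labelled examples
# to guess what each object is.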

      In short, the most vital component of self-driving tech is seeing edges and corners: the lines on a road, a curb, or the shape of a road sign. And while those might be easy to spot in the sunny climes of Arizona and California, where the majority of testing has occurred, it’s much harder in, say, winter in Toronto, when snow can obscure a curb, or in the heavy rain of New Orleans, where flooding can make road markings difficult to see. That’s not to mention the limitations of LIDAR, the car’s second main sensor, which fires out lasers to build a 3D map of its surroundings and doesn’t work as efficiently in very hot or cold weather.
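
      For a rough sense of what that 3D map is made of, the sketch below, invented for this article rather than drawn from any manufacturer’s software, converts a handful of made-up laser returns, each a distance plus two angles, into points in space using Python and NumPy.

# Illustrative only: turning lidar returns (distance, horizontal angle, vertical
# angle) into x, y, z points that together form a 3D picture of the surroundings.
import numpy as np

def lidar_to_points(distances, azimuths, elevations):
    """Convert raw returns (metres, radians) into Cartesian coordinates."""
    x = distances * np.cos(elevations) * np.cos(azimuths)
    y = distances * np.cos(elevations) * np.sin(azimuths)
    z = distances * np.sin(elevations)
    return np.column_stack((x, y, z))

# Three made-up returns: straight ahead, slightly to the left, and angled down
# toward the road surface. Degraded returns in bad weather are one reason the
# sensor is less reliable outside mild conditions.
points = lidar_to_points(
    np.array([10.0, 12.5, 4.0]),   # distances in metres
    np.array([0.0, 0.2, -0.1]),    # horizontal angles in radians
    np.array([0.0, 0.05, -0.3]),   # vertical angles in radians
)
print(points)  # one (x, y, z) row per laser pulse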

      And then there’s the question of whether a car can ever be smart enough to replicate human decisions. Built on algorithms, self-driving vehicles must follow a predefined set of rules to operate, and when anything deviates from those rules, the car can get confused. The trouble is that human drivers, cyclists, and pedestrians all have a tendency to go rogue.
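
      A toy example, again invented purely for illustration, shows why that rigidity is a problem: a rule-based controller only knows what to do in the situations its designers anticipated.

# Illustrative only: a rule table covers the situations someone thought to write
# down, and everything else falls through to an awkward default.
RULES = {
    "red_light": "stop",
    "pedestrian_in_crosswalk": "yield",
    "cyclist_in_lane": "slow down and pass wide",
}

def decide(situation: str) -> str:
    # Anything outside the predefined rules leaves the car unsure what to do.
    return RULES.get(situation, "uncertain: slow down and alert the safety driver")

print(decide("red_light"))                        # -> stop
print(decide("pedestrian_jaywalking_mid_block"))  # -> uncertain: slow down and alert the safety driver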

      The incredibly difficult task of creating an algorithm that can deal with absolutely any eventuality (and do so better than a human) has led some engineers to try to move the goalposts. Instead of building a technology that can account for any hazard, they want to regulate pedestrians instead. Arguing that it’s not the cars that are at fault but the humans crossing the road, certain investors in the technology, such as machine-learning researcher Andrew Ng, say that it will be viable in the real world only if people use crosswalks correctly. In other words, self-driving cars will work properly on the proviso that humans behave like machines.

      It’s unsurprising, therefore, that self-driving efforts have begun to stall. Waymo’s commercial ride service in sunny Chandler, Arizona, is available to only 400 riders and still depends on a human safety driver; beyond that, Tesla has shelved its plans to drive a vehicle across the whole of the U.S., and Uber took a nine-month hiatus before resuming public road testing last month.

      Across the industry, companies are pushing on with gathering more data in the hopes that a greater information base will iron out the self-driving kinks. Despite that, there’s no proof that machine learning will ever get a car to the level of precision needed to safely, predictably, and consistently drive itself.

      Kate Wilson is the Technology Editor at the Georgia Straight. Follow her on Twitter @KateWilsonSays
