Oxford is a tricky city to navigate – especially if you're a robot, Jamie Condliffe discovers.
By Jamie Condliffe
'Oxford is a very challenging place to drive,' admits Dr Will Maddern. 'Even for human drivers.'
He should know: he leads the team developing RobotCar, the University's very own self-driving vehicle. While Google’s autonomous cars in California may steal much of the media limelight, similar vehicles can be found navigating the streets of Oxford on a daily basis too. They’ve been created by the Mobile Robotics Group, a research team led by Professor Paul Newman that specialises in building all kinds of autonomous vehicles – from those used in warehouses to some designed to work on the surface of other planets.
Above: RobotCar, the modified Nissan LEAF that the Mobile Robotics Group is teaching to drive
While the engineering challenges facing any autonomous vehicle are large, cars perhaps have it the worst – especially in Oxford. 'The narrow streets are occupied by huge numbers of pedestrians and cyclists as well as cars, buses and delivery trucks, making it difficult to predict where the next dynamic obstacle will come from,' Maddern explains. 'Almost every time we drive around town something new and unexpected occurs, such as a road closure due to roadworks or a large bus reversing down a one-way street.'
Humans, of course, are well equipped for the task. We have a host of sensors to monitor what’s happening, in the form of eyes and ears; a lightning-fast processing system, in the shape of the brain; and, though it can sometimes fail us, a large and easily accessible database of our most-travelled routes, otherwise known as our memories. Those can all be replicated using computers and digital sensors — but it’s not straightforward.
The University’s RobotCar, a heavily modified Nissan LEAF, is certainly packed with the technology to try. There are two laser rangefinders – referred to as LIDAR sensors because they’re a light-based version of the better-known radar system – at the front and back of the car, along with a three-lens camera at the front and three single-lens cameras with fish-eye lenses dotted around the rest of the vehicle. The data from those sensors is quickly processed by a computer aboard the car to create a rich and accurate 3D representation of the world around it.
Notable by its absence is GPS. But if you’ve ever tried to use your car’s satnav in a tunnel or forest you’ll know that GPS needs a clear view of the sky to work properly, something an entirely autonomous car cannot count on. Nor, for that matter, does GPS offer the precision that self-driving cars demand: it’s accurate on the scale of metres, which simply isn’t sufficient to traverse a city safely. There are actually GPS sensors aboard RobotCar, but they’re only used to benchmark its performance after a journey.
Above: Combined LIDAR and stereo camera point-cloud map of Jericho Street, Oxford
Instead, the car uses an array of algorithms, developed by researchers at the Mobile Robotics Group, to work out where it is compared to a set of prior maps. These maps are created by driving the vehicle around the city much like a normal car, but with all the sensors capturing data as it goes. Over many trips, an incredibly accurate depiction of a city’s streets can be built up. Then, when the car is driving itself, it can compare the images that are being generated in real time to those stored in its memory.
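Matching live sensor data against a stored map is, at heart, a geometric alignment problem. One classic approach, iterative closest point (ICP), can be sketched in miniature as follows. This is purely illustrative (my own toy example, not RobotCar's code): it handles a single translation-only alignment step in 2D, whereas real systems iterate, estimate rotation, and reject outliers.

```python
# Toy sketch of map-based localisation: pair each point in a "live" 2D scan
# with its nearest point in a prior map, then average the displacements to
# recover the vehicle's offset. (Illustrative numbers, not real LIDAR data.)

def nearest(point, cloud):
    """Return the map point closest to `point` (brute-force search)."""
    return min(cloud, key=lambda m: (m[0] - point[0]) ** 2 + (m[1] - point[1]) ** 2)

def estimate_offset(live_scan, prior_map):
    """Average the displacement from each live point to its nearest map point;
    for a pure translation this recovers the vehicle's displacement."""
    dxs = [nearest(p, prior_map)[0] - p[0] for p in live_scan]
    dys = [nearest(p, prior_map)[1] - p[1] for p in live_scan]
    return sum(dxs) / len(dxs), sum(dys) / len(dys)

# Prior map: landmark positions recorded on an earlier mapping drive.
prior_map = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0), (7.0, 3.0)]
# Live scan: the same landmarks seen from a vehicle that has moved
# 0.5 m east and 0.3 m south of where the map was recorded.
live_scan = [(x - 0.5, y + 0.3) for (x, y) in prior_map]

dx, dy = estimate_offset(live_scan, prior_map)
print(round(dx, 2), round(dy, 2))  # recovers the (0.5, -0.3) displacement
```

Because the alignment is computed against the map rather than a satellite fix, the accuracy is limited only by the sensors and the quality of the prior map, not by the metre-scale error of GPS.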
That sounds like an incredibly demanding problem to have computers crunch through on the timescales required to drive around a city, but Maddern and his team have found ways to speed things up. 'Instead of trying to use every pixel from every image from a camera, [we can] extract ‘interest points’, such as corners, edges and other features from the image,' he explains. Each of those interest points has a unique appearance, and it’s those that are identified and compared between images.
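The idea of matching interest points by appearance can be sketched in a few lines. In the hypothetical example below, each point carries a small descriptor vector summarising its local appearance (real systems use detected corners and edges with much higher-dimensional descriptors), and a match is accepted only when the best candidate is clearly better than the second best – the standard "ratio test" used in feature matching.

```python
# Hypothetical sketch of matching interest points between a live camera frame
# and stored map imagery. Descriptor values are made up for illustration.

def dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_points(frame_a, frame_b, ratio=0.7):
    """frame_a/frame_b: lists of (name, descriptor). A point matches only if
    its best candidate beats the runner-up by the given ratio."""
    matches = []
    for name_a, desc_a in frame_a:
        ranked = sorted(frame_b, key=lambda p: dist(desc_a, p[1]))
        best, second = ranked[0], ranked[1]
        if dist(desc_a, best[1]) < ratio * dist(desc_a, second[1]):
            matches.append((name_a, best[0]))
    return matches

# Two corners seen in the live frame, three candidates stored in the map.
live = [("corner1", (0.9, 0.1, 0.3)), ("corner2", (0.2, 0.8, 0.5))]
stored = [("mapA", (0.88, 0.12, 0.31)), ("mapB", (0.21, 0.79, 0.52)),
          ("mapC", (0.5, 0.5, 0.5))]

print(match_points(live, stored))  # [('corner1', 'mapA'), ('corner2', 'mapB')]
```

Comparing a few hundred such points per frame is vastly cheaper than comparing every pixel, which is what makes the matching fast enough to run while the car is moving.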
Above: Combined LIDAR and stereo camera point-cloud map of Beaumont Street, Oxford
For the most part it works well, providing the car with information to identify where it is and plan its next move. Incidentally, this is the easy part: the electronic accelerators which allow cruise control to work and the motorised power steering systems that have given rise to automatic parking render the mechanics of driving itself trivial.
Maddern does admit, however, that his car’s image recognition systems can struggle in low light and adverse weather conditions, where image quality worsens. But the best way to improve is much the same as for any learner driver: through practice. 'Twice a week we drive RobotCar on a 10 kilometre loop around central Oxford, including sections on Broad Street, North Oxford, Jericho and Park End,' he explains.
'The goal is to collect sensor data that captures the city under a wide range of weather, lighting and traffic conditions. From this data we can construct a map consisting of many ‘experiences’ of each part of town under different conditions, which we can use to learn which parts of Oxford are stable and reliable for navigation, and which parts are dynamic and change over time.'
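A toy model gives a feel for how such an 'experience' map might work; this is my own sketch of the idea, not the group's implementation. Each stretch of road stores the features seen there under several past conditions, and the car localises against whichever stored experience best matches its current view.

```python
# Toy 'experience map': each road segment keeps feature sets recorded under
# different conditions; the vehicle picks the best-matching one at runtime.
# Segment and feature names are invented for illustration.

def similarity(seen, experience):
    """Overlap between two feature sets (Jaccard index, 0 to 1)."""
    return len(seen & experience) / len(seen | experience)

def best_experience(seen, experiences):
    """Return the name of the stored experience most similar to `seen`."""
    return max(experiences, key=lambda name: similarity(seen, experiences[name]))

# Features recorded on one road segment under three different conditions.
broad_street = {
    "sunny": {"bodleian_corner", "railings", "tree_shadow", "lamppost"},
    "night": {"bodleian_corner", "lamppost", "lit_window"},
    "rainy": {"railings", "lamppost", "puddle_reflection"},
}

# Tonight's view shares most of its features with the 'night' experience.
now = {"bodleian_corner", "lamppost", "lit_window", "bus"}
print(best_experience(now, broad_street))  # night
```

Features that recur across every experience of a segment mark it as stable and reliable for navigation; features that appear in only one experience flag the parts of town that change over time.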
Above: Concept image of the LUTZ Pathfinder personalised taxi service in Milton Keynes
Those dynamic changes, of course, throw up their own unique challenge – often in the form of other road users, such as cars, pedestrians and cyclists. Indeed, such is the unpredictability of other road users that they currently represent the most important problem for those developing autonomous cars. Humans have sharply tuned instincts, so they know immediately what to do when a child steps off the kerb and into the road; robotic cars need to learn.
Google, for its part, has approached this problem by dealing with specific challenges one by one: what a cyclist looks like as they change lanes, say, or the way a school bus stops and starts along its route. Others are taking a more holistic view, using artificial intelligence systems to have software teach itself what to do in a more general sense. 'The ideal approach to perception would be a system that does not require any manual labelling or supervision at all,' explains Maddern. '[Something that] can simply take a recorded video along with the actions of the driver and learn how to drive a car the way a human learns.'
For now, though, such human-like perception is one of the biggest barriers to the commercial success of self-driving cars – and that means any hopes you may have of owning a self-driving car will remain on hold, at least for a little while. While cars will undoubtedly become more autonomous in the coming years, those improvements will likely be in the form of increasingly intelligent cruise control systems for motorway driving. Beyond that, Maddern says there is 'much debate in the research community as to whether or not incremental steps towards autonomy will ever lead to fully autonomous vehicles that never require intervention from the driver.'
All’s not lost, though. Much of the insight generated by Oxford’s attempts to create a self-driving car is also being channelled into other forms of transport. In fact, researchers from Maddern’s lab work directly on the LUTZ project, a scheme based in Milton Keynes where a series of two-seater electric-powered pods are being trialled as an autonomous taxi service. 'These systems give us the opportunity to rethink personal vehicle ownership in the future,' muses Maddern.
'Would you still need to own your own car when you could simply call up an autonomous pod on your smartphone, and never have to worry about parking, fuel, maintenance, registration or licences?'
All images supplied by Mobile Robotics Group