New technology gives autonomous vehicles ‘X-ray’ vision, helping them track pedestrians, cyclists and other vehicles that might otherwise be obscured.

Experts in Australia are now attempting to commercialize the technology, known as cooperative or collaborative perception (CP).

It involves the installation of roadside information-sharing units (‘ITS stations’) equipped with sensors such as cameras and lidar.

For example, at a busy intersection, vehicles could use these units to share what they ‘see’ with other vehicles.

This gives each vehicle an X-ray-style vision: it can spot pedestrians hidden behind a bus, or a fast-moving van around a corner that is about to run a red light.

Example of a CP scenario at an intersection. The car on the left would be able to alert the other car of what's happening - that a pedestrian is crossing the road

HOW DOES IT WORK 

The emerging technology for smart cars is called cooperative or collaborative perception (CP).

It involves roadside information-sharing units (‘ITS stations’) equipped with additional sensors such as cameras and lidar.

These units allow vehicles to share what they ‘see’ with other vehicles using vehicle-to-X (V2X) communication.

This gives autonomous vehicles access to viewpoints beyond their own.

Being connected to the same system significantly extends each vehicle's perception, allowing connected vehicles to see things they otherwise couldn't.
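As a rough illustration of the idea (a hypothetical sketch, not the actual Cohda/iMOVE implementation), a connected vehicle could merge detections broadcast by a roadside ITS station with its own, keeping only the objects it cannot already see; all names here are invented for the example:

```python
# Hypothetical sketch of cooperative perception: a vehicle merges its own
# detections with those broadcast by a roadside ITS station over V2X.
# Detections are (object_type, x, y) tuples in a shared map frame.

def merge_detections(local, remote, min_separation=1.0):
    """Combine local and remote detections, dropping remote ones that
    duplicate an object the vehicle can already see."""
    merged = list(local)
    for obj_type, x, y in remote:
        duplicate = any(
            t == obj_type
            and abs(x - lx) < min_separation
            and abs(y - ly) < min_separation
            for t, lx, ly in merged
        )
        if not duplicate:
            # Object hidden from the vehicle's own sensors - add it.
            merged.append((obj_type, x, y))
    return merged

local = [("car", 12.0, 3.0)]
remote = [("car", 12.3, 3.1), ("pedestrian", 25.0, -4.0)]  # pedestrian behind a bus
print(merge_detections(local, remote))
```

The pedestrian, invisible to the vehicle's own sensors, ends up in its world model, while the roadside unit's duplicate sighting of the car it already sees is discarded.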

The technology was developed by engineers and scientists who said it could be beneficial to all vehicles, not just those with a connection to the system. 

Autonomous vehicles are powered by artificial intelligence (AI) trained to recognize pedestrians so the car can stop and avoid collisions.

But they cannot be widely adopted until they are safer than human drivers.

It is therefore crucial that they learn to respond to different situations as capably as humans before they can be fully integrated.

The Australian project is being carried out by iMOVE, a government-funded research center, with support from transport software firm Cohda Wireless and the University of Sydney.

After three years of research, they have released their final report. 

The technology’s applications are now being commercialised by Cohda, following the R&D work, which involved trials on public roads in Sydney.

Professor Eduardo Nebot, from the University of Sydney’s Australian Centre for Field Robotics, said that “This is a game-changer for both human-operated and autonomous vehicles” and that it will significantly improve the safety and efficiency of road transport.

‘CP enables smart cars to break the practical and physical limitations of onboard perception sensors’

In one test, a vehicle equipped with the tech was able to track a pedestrian who was visually obstructed by a building.

This image shows the view of the autonomous vehicle equipped with the technology. To the right is a fast-moving van obscured by a building, about to go through a red light. The X-ray style vision lets the vehicle detect the van and put on the brakes to avoid a collision

Right is a pedestrian 'about to make an error of judgement' by walking into the road. CP would allow a vehicle to brake in time to prevent a collision

Professor Nebot said this happened seconds before the vehicle’s own local perception sensors could have detected the pedestrian around the corner, giving the driver extra time to react to the safety hazard.

In a real-life setting, CP would allow a moving vehicle to know that a pedestrian is about to walk out in front of traffic – perhaps because they’re too busy looking at their phone – and brake in time to stop a collision.
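To see why those extra seconds of warning matter, here is a back-of-envelope check (an illustrative sketch, not from the iMOVE report) comparing the distance a vehicle needs to stop against the distance to the hazard; the reaction time and deceleration figures are assumed typical values:

```python
def can_stop_in_time(speed_ms, distance_m, reaction_s=1.0, decel_ms2=7.0):
    """Return True if a vehicle travelling at speed_ms (metres/second)
    can stop within distance_m, given a reaction time and a typical
    hard-braking deceleration (both assumed values for illustration)."""
    reaction_distance = speed_ms * reaction_s          # distance covered before braking starts
    braking_distance = speed_ms ** 2 / (2 * decel_ms2)  # v^2 / (2a)
    return reaction_distance + braking_distance <= distance_m

# At 50 km/h (~13.9 m/s), stopping takes roughly 14 m of reaction
# plus 14 m of braking - so early warning decides the outcome.
print(can_stop_in_time(13.9, 30.0))  # True: warned early enough
print(can_stop_in_time(13.9, 20.0))  # False: pedestrian seen too late
```

The point of CP is precisely to push the detection moment earlier, so the available distance lands on the safe side of this inequality.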

In this sense, the X-ray vision is an example showing how an autonomous car would be able to surpass the capabilities of a regular car driven by a human. 

However, autonomous vehicle technology is still learning how to master many of the basics – including recognising dark-skinned faces in the dark. 

Professor Paul Alexander is the chief technical officer at Cohda Wireless. He stated that the new technology has the potential to increase safety in situations with both autonomous and human-operated cars.

Safety continues to be a major challenge for autonomous vehicles, which have undergone multiple trials globally. Some self-driving cars have been involved in human fatalities.

FIVE LEVELS OF AUTONOMOUS DRIVING

Level 1 – A small amount of control is handled by the system, such as adaptive braking if a car gets too close.

Level 2 – The system can control the speed and direction of the car allowing the driver to take their hands off temporarily, but they have to monitor the road at all times and be ready to take over.

Level 3 – The driver does not have to monitor the system at all times in specific cases, such as on highways, but must be ready to resume control if the system requests it.

Level 4 – The system can cope with all situations automatically within its defined use case, but it may not be able to cope with all weather or road conditions. The system will depend on high-definition mapping.

Level 5 – Full automation. The system can handle all weather, traffic, and lighting conditions. It can go anywhere and at any hour, in any condition.

Note: Level 0 is often used to describe vehicles that are fully controlled by a human driver.
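The levels above boil down to one practical question: must the human still watch the road? A minimal sketch (my own summary of the list, not an official SAE definition) expresses that as a lookup table:

```python
# Sketch: the driving-automation levels above as a lookup table,
# answering whether the human driver must monitor the road.
DRIVER_MUST_MONITOR = {
    0: True,   # human fully controls the vehicle
    1: True,   # system assists, e.g. adaptive braking
    2: True,   # hands off temporarily, eyes on the road
    3: False,  # no constant monitoring, but must resume on request
    4: False,  # automatic within a defined use case
    5: False,  # full automation, all conditions
}

def driver_must_monitor(level):
    """Whether a human must monitor the road at the given automation level."""
    return DRIVER_MUST_MONITOR[level]

print(driver_must_monitor(2))  # True
print(driver_must_monitor(3))  # False
```

This is why the UK's ALKS approval, discussed below as Level 3, is a meaningful threshold: it is the first level at which the driver's constant attention is no longer required.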

‘CP enables smart vehicles to break the physical and practical limitations of onboard perception sensors, and embrace enhanced perception quality and robustness,’ he stated.

‘This could allow for a lower per-vehicle cost to facilitate large-scale deployment of CAV [connected and automated vehicle] technology.’

2021 was previously touted as the year fully automated vehicles would roll out on UK roads – but the technology is still in the trial phase.

Last year, Oxbotica, an Oxford-based autonomous vehicle software firm, launched a test fleet of six self-driving Ford Mondeos in the city.

The vehicles were each fitted with a dozen cameras, three Lidar sensors and two radar sensors, giving the fleet ‘level 4’ – the ability to handle almost all situations itself. 

In the UK, new lane-keeping technology approved under United Nations regulations was implemented in January.

This effectively means that vehicles can now be fitted with an Automated Lane Keeping System (ALKS), which keeps the vehicle in its lane and controls its movements for extended periods of time without the driver being required to do anything.

The driver must be able and ready to take over driving control when the vehicle prompts. However, it would allow drivers the freedom to cruise on the motorway while texting or watching a film.

Potentially, car manufacturers would need to install shaking seats to alert drivers if they had to take over the vehicle. 

The ALKS system is classified by the UN as Level 3 automation – the third of five steps towards fully-autonomous vehicles.

A fleet of six self-driving Ford Mondeos navigated the streets of Oxford in all hours and all weathers to test the abilities of driverless cars as part of a trial in 2020

Safety remains a major concern for autonomous cars, which have been subjected to numerous trials worldwide.

Several self-driving cars have been involved in nasty accidents – in March 2018, for example, an autonomous Uber vehicle killed a female pedestrian crossing the street in Tempe, Arizona in the US.

According to reports, the Uber safety driver was watching videos on her smartphone at the time.

SELF-DRIVING CARS ‘SEE’ USING LIDAR, CAMERAS AND RADAR

Self-driving cars often combine normal two-dimensional cameras with depth-sensing LiDAR (light detection and ranging) units to see the world around them.

Others use visible light cameras to capture images of streets and roads. 

They are provided with a wealth of information from vast databases containing hundreds of thousands of clips, which are processed using artificial intelligence to accurately identify signs, people and hazards.

Waymo uses LiDAR (light detection & ranging) scanning, where one or more lasers emit short pulses that bounce back when they hit obstacles.
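The ranging principle behind those bouncing pulses is simple enough to sketch: the pulse travels to the obstacle and back, so the distance is half of the speed of light times the round-trip time. A minimal illustration (not any vendor's actual code):

```python
# Sketch of the LiDAR ranging principle: a laser pulse travels to an
# obstacle and back, so range = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_pulse(round_trip_s):
    """Distance to the obstacle, from the pulse's round-trip time in seconds."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse returning after 200 nanoseconds came from roughly 30 m away.
print(round(range_from_pulse(200e-9), 1))  # 30.0
```

Because light covers about 30 cm per nanosecond, the unit needs nanosecond-scale timing to resolve obstacles to within a few centimetres.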

These sensors act as the ‘eyes of the car’ and constantly scan the surrounding area looking for information.

The units provide depth information but cannot detect small, distant objects without the help of a normal camera connected to them in real time.

Apple unveiled details about its driverless car system in November 2017. It uses lasers to detect cyclists and pedestrians from a distance.

Apple researchers claimed that they were able to spot pedestrians and cyclists using only LiDAR data.

They also stated that their method outperformed other approaches to detecting three-dimensional objects that use only LiDAR.

Other self-driving vehicles rely on a combination of cameras, sensors and lasers.

Volvo’s self-driving cars, which rely on 28 cameras, sensors, and lasers, are an example.

A network of computers processes the data, creating a real-time map of the environment’s moving and stationary objects.

Twelve ultrasonic sensors are placed around the vehicle to identify objects and enable autonomous driving at low speeds.

A wave radar and camera installed on the windscreen read traffic signs and the road’s curvature, and can also detect objects such as other road users.

Four radars are located behind the rear and front bumpers to locate objects.

Two long-range radars are mounted on the bumper to detect fast-moving cars approaching from far behind. This is helpful on motorways.

There are four cameras: two on the wing mirrors, one on the grille and one on the rear bumper. These cameras monitor objects in close proximity to the vehicle and watch the lane markings.