Monday, October 1, 2018

Aeva Takes Autonomous Driving Sensors to a New Level

Aeva is a new startup founded in the backyard of Google's Mountain View, California headquarters. Co-founders Soroush Salehian and Mina Rezk, both of whom previously worked on Apple's secretive self-driving car project, known as Titan, have announced their first product.

The product is a sensing box that car manufacturers or third parties such as Uber can attach to their vehicles to capture the data that autonomous driving systems need to operate more safely. The founders say the Aeva box is “part of the autonomous stack” and can easily be dropped into vehicles.

Soroush Salehian, co-founder of Aeva, discussed his company's Aeva box with CNBC:

Aeva is for automobile manufacturers or companies that are developing ride-share autonomous taxis. We are providing this as part of the autonomous stack so that these customers can integrate it, just like a drop-in placement, into their vehicles as they develop their mass-production vehicles.

The beauty of this system is that we directly measure the velocity map. We do not take multiple frames to infer each measurement.

When you see these kinds of sensors around a vehicle, you typically see them as different boxes. You might see one box that is the sensor head, or just the sensing system, and another that is the compute system. They're usually connected together, and typically these boxes are hidden away in the trunk or somewhere like that.

One of the unique things about our technology and the work that we've done over the past couple of years is that we have been able to integrate this system into a single compact package that still provides all the capabilities of these different sensors and outputs in one box.
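Salehian's point that the system “directly measures the velocity map” rather than taking multiple frames to infer each measurement is the key technical distinction here. Below is a minimal sketch in Python with NumPy; the function names, sign convention, and numbers are illustrative assumptions rather than Aeva's published interface, and the only purpose is to contrast inferring velocity from two successive depth frames with reading a velocity value the sensor reports with each measurement.

```python
import numpy as np

def velocity_from_depth_frames(depth_t0, depth_t1, dt):
    """Infer radial velocity by differencing two depth frames.

    Needs two frames and a known time step, and assumes each pixel in
    frame t1 still sees the same object point as in frame t0 -- an
    assumption that breaks down for fast or laterally moving objects.
    """
    return (depth_t1 - depth_t0) / dt  # metres per second, per pixel


def velocity_from_single_frame(frame):
    """With a sensor that reports velocity directly, per-pixel radial
    velocity is simply another channel of a single frame."""
    return frame["velocity"]


# Hypothetical numbers: an object 20 m away closing at 5 m/s.
depth_t0 = np.array([20.0, 20.0])
depth_t1 = np.array([19.5, 19.5])  # captured 0.1 s later
print(velocity_from_depth_frames(depth_t0, depth_t1, dt=0.1))  # [-5. -5.]

single_frame = {"velocity": np.array([-5.0, -5.0])}
print(velocity_from_single_frame(single_frame))                # [-5. -5.]
```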

Mina Rezk, Aeva’s other co-founder, describes how the system captures data:

We have our depth map, in which the color represents the depth of the objects at every pixel. Then we have our reflectivity map, in which every pixel represents the reflectivity of each object. We also have our own unique velocity map, in which every pixel represents the velocity of these objects.

There is a lot of motion that can easily be picked up, whether something is moving away or coming towards us. It is much harder to pick that up in a depth map, because you have to compare multiple frames to understand what is going on. However, in the velocity map, within one single frame, we can easily tell what is moving away and what is coming towards us.
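Rezk's description suggests each frame carries three aligned per-pixel channels: depth, reflectivity, and velocity. The sketch below, again with hypothetical field names and an assumed negative-means-approaching sign convention that Aeva has not specified, shows why the velocity channel lets a single frame separate what is approaching from what is receding, with no frame-to-frame comparison.

```python
import numpy as np

# One hypothetical frame with three aligned per-pixel maps, as Rezk describes.
# Assumed sign convention: negative velocity means the object is closing in.
frame = {
    "depth":        np.array([[20.0,  5.0], [50.0, 12.0]]),  # metres
    "reflectivity": np.array([[ 0.8,  0.2], [ 0.5,  0.9]]),  # relative units
    "velocity":     np.array([[-5.0,  0.0], [ 3.0, -1.2]]),  # metres per second
}

# A single frame is enough to label motion: no differencing of depth maps.
approaching = frame["velocity"] < 0
receding = frame["velocity"] > 0

print("approaching pixels:", np.argwhere(approaching).tolist())  # [[0, 0], [1, 1]]
print("receding pixels:", np.argwhere(receding).tolist())        # [[1, 0]]
```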




