The Future of Self-Driving Cars

Project No. 2

The unfortunate truth today is that over 1.35 million people die in traffic accidents every year. To put that number into context, that's nearly 3,700 every day, or more than 2 people every minute. To visualize it, below is the location distribution of car accidents in the US over the past few years. It covers essentially all of the populous territory, with the blanks in the middle representing mountain regions. The issue isn't isolated to any one location; it appears anywhere with enough humans and, consequently, cars. This map only covers the US, but I would expect a similar distribution in other countries.

us-accidents-map.png

LEADING CAUSES

The leading causes of accidents include:

  • Distracted driving - 4x more likely with mobile phone usage

  • Speeding - 1% increase in mean speed produces a 4% increase in crash risk

  • Driving under the influence - risk level depends on the drug, but there is a 4x increase with amphetamines

What do all of these have in common? These are all human errors.

Over 90% of accidents today are due to driver error. One proposed solution is to hand control to computers, which brings us to the advent of self-driving cars. The current state of the technology is not perfect; however, the cameras and sensors equipped in modern vehicles enable better monitoring and awareness of the road, preventing common mistakes and leading to fewer accidents.

 

FUNDAMENTAL TASKS

To understand this complex technology from the ground up, we start with the extreme fundamentals and break the topic into 4 smaller tasks:

  • Object Detection

  • Object Prioritization

  • Distance to Object

  • Orientation of Object
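These four tasks fit together as a per-frame pipeline. The skeleton below is only an illustration of that flow - the class, field names, and the stand-in "keep only cars" rule are all my own assumptions, not part of an actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

@dataclass
class TrackedObject:
    label: str                           # Step 1: object detection
    box: Box
    relevant: bool = False               # Step 2: object prioritization
    distance_m: Optional[float] = None   # Step 3: distance to object
    yaw_deg: Optional[float] = None      # Step 4: orientation of object

def pipeline(detections: List[TrackedObject]) -> List[TrackedObject]:
    """Run the four tasks in order; each step is covered in its own section."""
    for obj in detections:
        obj.relevant = obj.label == "car"  # stand-in for the real lane test
    return [obj for obj in detections if obj.relevant]
```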

 

STEP 1: OBJECT DETECTION

The first step is to detect the objects. An easy way to do this today is with YOLO (You Only Look Once), a state-of-the-art real-time object detection system. Since it runs on pre-trained weights, we can identify many common objects - humans, cars, traffic lights - as seen in the first image in the slideshow below. Because these detections come with labels, we can adjust the backend code to limit the output to just cars (next image).
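That filtering step can be sketched as simple post-processing of the detector's output. Here I assume each detection arrives as a (label, confidence, box) tuple, which is a simplification of what a real YOLO wrapper returns:

```python
# Hypothetical detector output: (label, confidence, (x1, y1, x2, y2)).
# Only the filtering idea is the point here, not the data format.
def keep_cars(detections, min_conf=0.5):
    """Drop every detection that is not a reasonably confident 'car'."""
    return [d for d in detections if d[0] == "car" and d[1] >= min_conf]

detections = [
    ("car", 0.92, (34, 80, 210, 190)),
    ("person", 0.88, (300, 60, 340, 180)),
    ("car", 0.31, (400, 90, 450, 130)),   # confidence too low
]
print(keep_cars(detections))  # only the first detection survives
```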

STEP 2: OBJECT PRIORITIZATION

 

The next step is to detect a path in order to assess which objects matter. Given the camera coordinates and a sample shot, you can identify the points of interest. The third image shows how we prioritize cars within the green lane. This lane is outlined by taking a sample frame where the car is running straight on a flat surface and marking the two closest points of the lane to the car plus the point on the horizon.
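Those three marked points - two near lane corners and one horizon point - form a triangle in image space, so deciding whether a detected car sits in the lane reduces to a point-in-triangle test. A minimal sketch (the coordinates below are made-up values for a 640x480 frame):

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; its sign says which side of OA point B is on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_lane(p, left_near, right_near, horizon):
    """True if image point p lies inside the triangle formed by the two
    near lane points and the vanishing point on the horizon."""
    d1 = cross(left_near, right_near, p)
    d2 = cross(right_near, horizon, p)
    d3 = cross(horizon, left_near, p)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)   # all on one side -> inside

# Example: lane corners at the bottom of the frame, horizon mid-frame.
left_near, right_near, horizon = (200, 480), (440, 480), (320, 240)
print(in_lane((320, 400), left_near, right_near, horizon))  # True: inside the lane
print(in_lane((100, 400), left_near, right_near, horizon))  # False: off to the left
```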

birds-eye-view-intersection_edited.jpg

STEP 3: DISTANCE TO OBJECT

 

Distance to an object also matters when deciding which cars to prioritize. With a known focal length and a known object width, you can calculate the object's distance from the camera. The images in the following slideshow show an automatic distance output based on an input of the two diagonal points bounding the object detected in the earlier steps.
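The underlying relation is the pinhole-camera model: distance = (real width x focal length) / pixel width. A minimal sketch, where the 1.8 m car width and the 700 px focal length are assumed values rather than measured ones:

```python
def distance_to_object(real_width_m, focal_px, box):
    """Pinhole-camera estimate: D = (W * F) / P, where P is the object's
    width in pixels taken from its bounding box."""
    (x1, y1), (x2, y2) = box          # the two diagonal corners from detection
    pixel_width = abs(x2 - x1)
    return real_width_m * focal_px / pixel_width

# Assumptions: ~1.8 m average car width; focal length of 700 px, which
# would come from calibration (photograph an object of known width W at a
# known distance D and solve F = P * D / W).
d = distance_to_object(1.8, 700.0, ((300, 200), (426, 280)))
print(round(d, 1))  # 10.0 -> car roughly 10 m ahead
```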

STEP 4: ORIENTATION OF OBJECT

 

Lastly, back to the original task of determining object orientation. Since we live in a 3-dimensional world, there are three axes of rotation - yaw, pitch and roll. For an object such as a car, the only one we care about is yaw, the rotation around the vertical axis indicating its degree of left or right turn. Given labeled images where each car's angle is known, it is possible to train a supervised model. Another possible solution is to detect objects with a 3D bounding box: the current method detects objects with a 2D box, but orientation could be inferred from a 3D one. Yet another approach is to detect the headlights or taillights within each identified car and use them as reference points to determine orientation from the 2D box. However, these are all supervised methods that require thousands of hours of manual labeling effort. In the long term we need to focus on developing unsupervised methodologies as well.
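The taillight idea can be sketched as a rough geometric heuristic: seen dead-on from behind, a car's taillights span nearly the full bounding box; as the car turns, their apparent spacing shrinks by roughly cos(yaw). Everything here - the geometry and the numbers - is an illustrative assumption, not a validated model:

```python
import math

def rough_yaw_deg(box_width_px, light_gap_px):
    """Crude yaw estimate from the taillight gap relative to the 2D box.
    Illustrative only -- a real system would train a supervised regressor
    on labeled angles instead."""
    ratio = min(1.0, light_gap_px / box_width_px)
    return math.degrees(math.acos(ratio))

print(round(rough_yaw_deg(120, 120)))  # 0: seen straight-on from behind
print(round(rough_yaw_deg(120, 60)))   # 60: turned well off-axis
```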
