Ripped from the pages of an old science-fiction book, self-driving vehicles will soon be in our driveways instead of just our imaginations. It is very hard to estimate what the impact of self-driving cars on our daily lives will be, but here are a few of the most exciting possibilities:
- We will be able to use our time more efficiently. As a driver, you spend a lot of time watching traffic and steering the vehicle. If the car drives you instead, you can spend that time on something else: working, reading, learning, gaming, etc. Business people could start to use the car for trips where they use the plane today. Imagine a salesperson who needs to meet a customer 600 miles away the next morning. They could leave home after dinner in their self-driving car and sleep while it drives. Hotels could even start to offer rooms without beds, but with showers and breakfast, for travelers who slept in their cars while being driven to their meetings.
- Self-driving cars can reduce the average number of cars needed per household. If a family has a self-driving car, it can drive one of the parents to work in the morning, return home empty, and take the kids to school afterwards. If the other parent is not working, the car can then drive them to go shopping or to work out. After school, the car will pick the kids up again, take them to their after-school activities, and then go pick up the parent at work.
- Self-driving cars can improve the mobility of elderly and disabled people significantly. They would no longer need someone to drive them; a self-driving car could do the job instead. Imagine the advantages for someone who is legally blind, and the opportunities that open up if their car can transport them automatically.
- Self-driving cars can reduce parking lot sizes and free up urban space. If cars can park themselves, the space allotted per car can be much smaller (by some estimates, upwards of 33% of city land is devoted to parking). We no longer need to be in the vehicle to park it, so there is no need to leave room for opening and closing doors. Another bonus of self-parking cars is the sheer ease of parking. Say you go to the cinema on a Saturday afternoon: you can have the car drop you off in front of the movie theater and let it find a parking space by itself. After the movie, you call your car with your mobile phone and it comes to pick you up again.
- Self-driving cars will reduce the number of traffic accidents significantly, perhaps to nearly zero. It is estimated that 90% of all accidents are caused by human error.
Many new cars already implement basic self-driving functionality. The Advanced Driver Assistance Systems (ADAS) being rolled out in higher-end automobiles provide the basic building blocks of self-driving cars. Measured against the SAE levels of autonomous driving, some cars offer Level 3 already. The industry target is to deliver Level 4 and 5 self-driving cars on the roads by 2020; in fact, the Japanese government has challenged its car makers to put self-driving taxis on the roads of Tokyo for the 2020 Olympic Games. Some analysts, however, project adoption a bit farther out, estimating that 25% of new cars produced in 2035 will be self-driving and that self-driving cars may account for 50% of the cars on the world's roads by 2055. ADAS features include:
- Adaptive Cruise Control: Maintains a safe distance from other cars through automatic adjustments of the vehicle’s cruise control system
- Automotive Night Vision: Increases a driver’s vision at night or in poor weather using a thermographic camera
- Traffic Sign and Pedestrian Recognition: Enables a vehicle to recognize traffic signs (e.g. speed limit, school zone, crosswalk) as well as pedestrians and cyclists
- Lane Departure Sensor: Warns a driver when the vehicle begins to move out of its lane without signaling
- Parking Assistance: Assists drivers with parking the vehicle
- Backup Cameras: Aids backing up and alleviates the rear blind spot
- Collision Avoidance: Alerts drivers to potential collisions, helping to reduce the severity of accidents
- Automatic Electronic Braking: Automatically varies the force applied to a vehicle’s wheels based on road conditions, speed, loading, etc.
- Smart Headlights: Automatically tailors headlamp range, helping to ensure maximum visibility without impacting other drivers
Any autonomous vehicle system can be broken into four main functional elements: Sense, Perceive, Plan, and Control. The hardware and software complexity of these functional elements varies depending on the level of autonomy the system provides.
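To make that division of labor concrete, here is a minimal sketch of how the four elements might be wired together in a processing loop. It is written in Python, and every class and method name is hypothetical, invented for illustration rather than taken from any real vehicle stack:

```python
# Minimal sketch of the Sense -> Perceive -> Plan -> Control pipeline.
# All class and method names are hypothetical, for illustration only.

class SensorSuite:
    def read(self):
        """Sense: collect raw frames from LIDAR, cameras, radar, GPS, etc."""
        return {"lidar": ..., "camera": ..., "radar": ..., "gps": ...}

class Perception:
    def process(self, raw):
        """Perceive: turn raw sensor data into a world model of tracked
        objects, the drivable area, and the vehicle's own pose."""
        return {"objects": [], "drivable_area": None, "pose": None}

class Planner:
    def plan(self, world_model):
        """Plan: decide on a short-horizon trajectory and target speed."""
        return {"trajectory": [], "target_speed": 0.0}

class Controller:
    def actuate(self, plan, vehicle_state):
        """Control: convert the planned trajectory into actuator commands."""
        return {"steering": 0.0, "throttle": 0.0, "brake": 0.0}

def drive_loop(sensors, perception, planner, controller, vehicle):
    """Run the four stages in sequence, many times per second."""
    while vehicle.is_running():
        raw = sensors.read()                              # Sense
        world = perception.process(raw)                   # Perceive
        plan = planner.plan(world)                        # Plan
        cmd = controller.actuate(plan, vehicle.state())   # Control
        vehicle.apply(cmd)
```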
A vehicle operating with autonomous features must be able to sense and perceive physical aspects of the driving environment in order to make control decisions. Sensors employed in an automobile include LIDAR, cameras, radar, ultrasonic sensors, and GPS. Sensors for low-level autonomy that are typical of current production vehicles include radar for adaptive cruise control, brake assist, and collision avoidance. Cameras are used for lane departure warning, parking assist, and backup, in addition to stop sign, speed limit, and pedestrian detection. Higher levels of autonomy will require LIDAR to paint a 360-degree, 3D image of the driving environment for object detection and classification. In addition, a greater number of high-definition cameras will be needed to increase the accuracy and range of object detection and recognition. Increasing the number and complexity of the sensor arrays also increases the complexity of the perception algorithms and the compute power needed to execute them.
Perception is the autonomous system's ability to collect data and extract relevant information from the environment. Environmental perception means applying context to the environment: object locations, road sign detection, drivable areas, velocities, and predictions of an object's future state. As an example, LIDAR can be used to create a dynamic 3D map of an environment. Raw point cloud data from the sensor passes through two algorithmic steps: segmentation and classification. Edge-based, attribute-based, region-based, model-based, and graph-based segmentation algorithms group the raw points into multiple homogeneous clusters. These clusters can then be classified as a bike, pedestrian, road sign, building, school bus, etc. Detection algorithms use the automobile's vision system to identify less complex features such as lane markings and the road surface. The other component of perception is localization: for an autonomous system to react safely to its environment, it must know the vehicle's position and orientation. This is again a complex problem, one that typically requires fusing multiple sensors, which may include GPS and inertial navigation hardware.
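As a rough illustration of the segmentation and classification steps, here is a toy Python sketch: a naive region-based (Euclidean) clustering pass followed by a crude classifier that looks only at bounding-box dimensions. Production perception stacks use optimized spatial indices and trained models over far richer features; every threshold below is invented for illustration.

```python
import numpy as np

def segment(points, radius=0.5, min_size=5):
    """Toy region-based segmentation: flood-fill groups of points whose
    neighbors lie within `radius` meters of each other. This is O(n^2);
    real pipelines use KD-trees or voxel grids to find neighbors."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) < radius]
            unvisited.difference_update(near)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_size:      # Discard sparse noise clusters
            clusters.append(points[cluster])
    return clusters

def classify(cluster):
    """Toy classification from bounding-box extents (dx, dy horizontal,
    dz vertical). Thresholds in meters, invented for illustration."""
    dx, dy, dz = cluster.max(axis=0) - cluster.min(axis=0)
    footprint = max(dx, dy)
    if dz > 1.2 and footprint < 1.0:
        return "pedestrian"
    if 3.0 < footprint < 7.0 and dz < 2.5:
        return "car"
    if footprint >= 7.0 or dz >= 2.5:
        return "bus_or_building"
    return "unknown"

# Usage: label every cluster found in an (n, 3) point cloud
# labels = [classify(c) for c in segment(point_cloud)]
```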
The planning subsystem is responsible for compiling information from the perception engine, considering mission, behavior, and motion inputs, and making decisions. The planning framework must be robust enough to handle a wide range of urban driving scenarios. The mission planner typically handles high-level objectives related to the route, e.g. road selection, pickup/dropoff tasks, and schedule. The behavior planner makes real-time decisions to ensure proper interaction with surrounding objects and compliance with the rules of the road; examples of its output are commands to change lanes, overtake a vehicle, or proceed through an intersection. The motion planner is responsible for generating paths and actions that meet local objectives, typically reaching a location while avoiding collisions with obstacles. Multi-dimensional motion planning is computationally demanding.
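As a simplified illustration of motion planning, here is a minimal A* search over a 2D occupancy grid in Python. Real motion planners search far higher-dimensional spaces (position, heading, speed) under the vehicle's kinematic constraints, which is where the computational cost comes from; this sketch captures only the basic idea of finding a collision-free path.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* path planner on a 2D occupancy grid (True = blocked).
    Returns a list of cells from start to goal, or None if no
    collision-free path exists."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came_from = {}                            # cell -> parent; doubles as closed set
    g_cost = {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                 # Already expanded via a cheaper path
            continue
        came_from[cell] = parent
        if cell == goal:                      # Walk parents back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cell))
    return None

# Usage: plan around a wall in a 4x4 grid (False = free, True = blocked)
grid = [[False, False, False, False],
        [False, True,  True,  False],
        [False, True,  True,  False],
        [False, False, False, False]]
print(a_star(grid, start=(0, 0), goal=(3, 3)))
```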
The control block brings everything together to execute the decisions of the autonomous system. It provides the necessary inputs to the hardware (steering, throttle, brakes) that generates the desired motion. One example control structure in an autonomous system is feedback control, where the measured system response is used to actively compensate for deviations from the desired behavior. Another is model predictive control, where a model of the system is used to predict and optimize its behavior over a short time horizon. Systems may employ one of these control methods, or a combination, to achieve their functional goals.
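As a concrete example of the feedback-control structure described above, here is a textbook PID controller in Python, sketched as a speed regulator: the measured response (current speed) is compared against the desired behavior (target speed), and the error drives the throttle/brake command. The gains and sign convention are illustrative assumptions, not values tuned for any real vehicle.

```python
class PIDController:
    """Textbook PID feedback controller: output is a weighted sum of the
    error, its accumulated integral, and its rate of change."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        # Assumed convention: positive output -> throttle, negative -> brake
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: hold 25 m/s with illustrative (untuned) gains, 50 Hz control loop
speed_pid = PIDController(kp=0.5, ki=0.05, kd=0.1)
command = speed_pid.update(setpoint=25.0, measurement=22.0, dt=0.02)
```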
Micron develops and produces memory (components and systems) for the automotive industry. Memory is used to store code and data and to run programs for infotainment systems, dashboard electronics, engine control units, and Advanced Driver Assistance Systems (ADAS). To meet the memory requirements of next-generation automotive systems, Micron is working closely with our partners in the automotive industry to define and develop memory solutions for future self-driving cars.
Things that were only seen in science-fiction movies a decade ago are now becoming part of everyday life. As the level of autonomy in vehicles increases to the point of requiring no human interaction, the demands on hardware and software will require swift innovation to keep pace. Memory will continue to play a huge role in the capabilities and performance of these systems. These are exciting times as we prepare for the next big technology boom.