A Decade of Transformation in Robotics

By customizing and democratizing the use of machines, we bring robots to the forefront. Pervasive integration of robots into the fabric of everyday life could mean that everyone relies on a robot to support their physical tasks, just as we have come to rely on applications for computational tasks. As robots move from our imaginations into our homes, offices, and factory floors, they will become the partners that help us do so much more than we can do alone. Whether in how we move, what we build, where we do it, or even the fundamental building blocks of creation, robotics will enable a world of endless possibility.

Imagine a future where robots are so integrated in the fabric of human life that they become as common as smartphones are today. The field of robotics has the potential to greatly improve the quality of our lives at work, at home, and at play by providing people with support for cognitive tasks and physical tasks. For years, robots have supported human activity in dangerous, dirty, and dull tasks, and have enabled the exploration of unreachable environments, from the deep oceans to deep space. Increasingly, more capable robots will be able to adapt, to learn, and to interact with humans and other machines at cognitive levels.

The rapid progress of technology over the past decade has made computing indispensable. Computing has transformed the way we work, live, and play. The digitization of practically everything, coupled with advances in robotics, promises a future where access to high-tech machines is democratized and customization widespread. Robots are becoming increasingly capable due to their ability to execute more complex computations and interact with the world through increasingly richer sensors and better actuators.

A connected world with many customized robots working alongside people is already creating new jobs, improving the quality of existing jobs, and saving people time so they can focus on what they find interesting, important, and exciting. Today, robots have already become our partners in industrial and domestic settings. They work side by side with people in factories and operating rooms. They mow our lawns, vacuum our floors, and even milk our cows. In a few years, they will touch even more parts of our lives.

Medicines on the shelves of a Consis robotic cabinet. This cabinet can automate up to ninety percent of the products dispensed by pharmacies on a daily basis, occupies only two square meters, and can store as many as 900 boxes.

Commuting to work in your driverless car will let you read, return calls, catch up on your favorite podcasts, and even nap. The robotic car will also serve as your assistant, keeping track of what you need to do, planning your routes to ensure all your chores are done, and checking the latest traffic data to select the least congested roads. Driverless cars will help reduce fatalities from car accidents while autonomous forklifts can help eliminate back injuries caused by lifting heavy objects. Robots may change some existing jobs, but, overall, robots can make great societal contributions. Lawn-care robots and pool-cleaning robots have changed how these tasks are done. Robots can assist humanity with problems big and small.

The digitization of practically everything, coupled with advances in robotics, promises a future where access to high-tech machines is democratized and customization widespread

The objective of robotics is not to replace humans by mechanizing and automating tasks, but, rather, to find new ways that allow robots to collaborate with humans more effectively. Robots are better than humans at tasks such as crunching numbers and moving with precision. Robots can lift much heavier objects. Humans are better than robots at tasks like reasoning, defining abstractions, and generalizing or specializing thanks to our ability to draw on prior experiences. By working together, robots and humans can augment and complement each other’s skills.

A Decade of Progress Enabling Autonomy

The advancements in robotics over the past decade have demonstrated that robotic devices can locomote, manipulate, and interact with people and their environment in unique ways. The locomotion capabilities of robots have been enabled by the wide availability of accurate sensors (for example, laser scanners), high-performance motors, and development of robust algorithms for mapping, localization, motion planning, and waypoint navigation. Many new applications are possible thanks to progress in developing robot bodies (hardware) and robot brains (software).

The capabilities of robots are defined by the tight coupling between their physical bodies and the computation that comprises their brains. For example, a flying robot must have a body capable of flying and algorithms to control flight. Today’s robots can do basic locomotion on the ground, in the air, and in water. They can recognize objects, map new environments, perform pick-and-place operations, learn to improve control, imitate simple human motions, acquire new knowledge, and even act as a coordinated team. For example, the latest soccer robots and algorithms are put into practice at RoboCup, a yearly robot soccer competition.

Recent advances in disk storage, the scale and performance of the Internet, wireless communication, tools supporting design and manufacturing, and the power and efficiency of electronics, coupled with the worldwide growth of data storage, have impacted the development of robots in multiple ways. Hardware costs are going down, electromechanical components are more reliable, tools for making robots are richer, programming environments are more readily available, and robots have access to the world’s knowledge through the cloud. We can begin to imagine the leap from the personal computer to the personal robot, leading to many applications where robots exist pervasively and work side by side with humans.

The objective of robotics is not to replace humans by mechanizing and automating tasks, but, rather, to find new ways that allow robots to collaborate with humans more effectively

Transportation is a great example. It is much easier to move a robot through the world than it is to build a robot that can interact with it. Over the last decade, significant advances in algorithms and hardware have made it possible for us to envision a world in which people and goods are moved in a much safer, more convenient way with optimized fleets of self-driving cars.

In a single year, Americans drive nearly three trillion miles.1 If you average that out at 60 mph, that adds up to almost fifty billion hours spent in the car.2 That number grows dramatically when considering the rest of the globe. But the time spent in our cars is not without challenges. Today, a car crash occurs every five seconds in the United States.3 Globally, road traffic injuries are the eighth leading cause of death, with about 1.24 million lives lost every year.4 In addition to this terrible human cost, these crashes take an enormous economic toll. The National Highway Traffic Safety Administration has calculated the economic cost in the United States at about $277 billion a year.5 Putting a dent in these numbers is an enormous challenge—but one that is very important to tackle. Self-driving vehicles have the potential to eliminate road accidents.
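
The arithmetic behind that estimate is straightforward:

\[ \frac{3 \times 10^{12}\ \text{miles}}{60\ \text{miles per hour}} = 5 \times 10^{10}\ \text{hours} \approx \text{fifty billion hours} \]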

Imagine if cars could learn… learn how we drive… learn how to never be responsible for a collision… learn what we need when we drive. What if they could become trusted partners? Partners that could help us navigate tricky roads, watch our backs when we are tired, even make our time in the car… fun? What if your car could tell you are having a hard day and turn your favorite music on to help you relax, while watching carefully over how you drive? What if your car also knew that you forgot to call your parents yesterday and issued a gentle reminder on the way home? And imagine that it was easy to make that call because you could turn the driving over to the car on a boring stretch of highway.

The Da Vinci surgical robot during a hysterectomy operation

Recognizing this extraordinary potential, most car manufacturers have announced self-driving car projects over the past couple of years. Elon Musk famously predicted we could fall asleep at the wheel within five years; the Google/Waymo car has been in the news a lot for driving several million accident-free miles; Nissan promised self-driving cars by 2020; Mercedes created a prototype autonomous S-Class in 2014; and Toyota announced (September 2015) an ambitious program to develop a car that will never be responsible for a collision, and invested $1 billion to advance artificial intelligence.

There is a lot of activity in this space across a big spectrum of car capabilities. To understand where all the various advances fall, it is useful to look at the National Highway Traffic Safety Administration (NHTSA) classification into six levels of autonomy: Level 0 does not include any support for automation; Level 1 includes tools for additional feedback to the human driver, for example using a rear camera; Level 2 includes localized active control, for example antilock brakes; Level 3 includes support for select autonomy but the human must be ready to take over (as in the Tesla Autopilot); Level 4 includes autonomy in some places some of the time; and Level 5 is autonomy in all environments all the time.

An alternative way to characterize the level of autonomy of a self-driving car is along three axes: (1) the speed of the vehicle; (2) the complexity of the environment in which the vehicle moves; and (3) the complexity of the interactions with moving agents (cars, people, bicyclists, and so on) in that environment. Researchers are pushing the envelope along each of these axes, with the objective of getting closer to Level 5 autonomy.

Over the last decade, significant advances in algorithms and hardware have made it possible for us to envision a world in which people and goods are moved in a much safer, more convenient way with optimized fleets of self-driving cars

Due to algorithmic and hardware advances over the past decade, today’s technology is ready for Level 4 deployments at low speeds in low-complexity environments with low levels of interaction with surrounding pedestrians and other vehicles. This includes autonomy on private roads, such as in retirement communities and campuses, or on public roads that are not very congested.

Level 4 autonomy has been enabled by a decade of advances in the hardware and algorithms available to robots. Most important is the convergence of several key algorithmic developments: map making, meaning the vehicle can use its sensors to create a map; localization, meaning the vehicle can use its sensors to figure out where it is on the map; perception, meaning the vehicle can perceive the moving objects on the road; and planning and decision-making, meaning the vehicle can figure out what to do next based on what it sees now. Reliable hardware, along with driving datasets that enable cars to learn how to drive from humans, has been just as essential. Today, we can run so many simultaneous computations, crunch so much more data, and execute algorithms in real time. These technologies have taken us to a point in time where we can realistically discuss the idea of autonomy on the roads.
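
As a rough illustration of how these pieces fit together, here is a minimal, hypothetical sketch (in Python) of such a pipeline running as a single control loop. The class and method names are placeholders for the components described above, not the interfaces of any real vehicle stack.

# Hypothetical autonomy loop; every component is an illustrative placeholder.
class AutonomousVehicle:
    def __init__(self, prior_map, localizer, perception, planner, controller):
        self.map = prior_map          # map built beforehand from sensor data
        self.localizer = localizer    # estimates the vehicle's pose on the map
        self.perception = perception  # detects moving objects around the car
        self.planner = planner        # decides what to do next
        self.controller = controller  # turns decisions into steering and throttle

    def step(self, sensor_frame, goal):
        pose = self.localizer.estimate(sensor_frame, self.map)   # where am I?
        obstacles = self.perception.detect(sensor_frame)         # what is moving nearby?
        trajectory = self.planner.plan(pose, goal, self.map, obstacles)
        return self.controller.follow(trajectory)                # low-level commands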

However, we do not have Level 5 autonomy yet. Technological challenges toward Level 5 autonomy include: driving in congestion, driving at high speeds, driving in inclement weather (rain, snow), driving among human drivers, driving in areas where there are no high-density maps, and responding to corner cases. The perception system of a vehicle does not have the same quality and effectiveness as the human eye. To be clear, there are some things that machines can do better than people, like estimate accurately how quickly another vehicle is moving. But robots do not share our recognition capabilities. How could they? We spend our whole lives learning how to observe the world and make sense of it. Machines require algorithms to do this, and data—lots and lots and lots of data, annotated to tell them what it all means. To make autonomy possible, we have to develop new algorithms that help them learn from far fewer examples in an unsupervised way, without constant human intervention.

Afghan eXplorer, a semiautonomous mobile robot developed by the Artificial Intelligence Lab of the Massachusetts Institute of Technology (MIT), can carry out reporting activities in dangerous or inaccessible surroundings

There are two philosophies that are driving research and development in autonomous driving: series autonomy and parallel autonomy. Parallel autonomy concerns developing driver-assist technologies where the driver is at the wheel, but the driver-assist system monitors what the driver does and intervenes as needed—in a way that does no harm—for example, to prevent a collision or to correct the steering angle to keep the car on the road. The autonomy capabilities of the car grow incrementally but operate in parallel with the human. The parallel-autonomy approach allows the car to operate anywhere, anytime. Series autonomy pursues the idea that either the human or the car is in charge, but not both. When the car is in autonomous mode, the human does not contribute in any way to the driving. The car’s autonomy capabilities also grow incrementally, but this car can only operate according to the capabilities supported by its autonomy package. The car will gradually operate in increasingly more complex environments.
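
A minimal sketch of the parallel-autonomy idea, assuming the safety system can estimate the collision risk of the driver’s current command (the names and thresholds below are illustrative, not any production system):

# Parallel autonomy: pass the human command through unchanged unless the
# estimated risk is high, and even then intervene only as much as needed.
def blend_commands(driver_cmd, safe_cmd, collision_risk, risk_threshold=0.7):
    """driver_cmd, safe_cmd: (steering, throttle) tuples;
    collision_risk: estimated probability of a collision if driver_cmd is executed."""
    if collision_risk < risk_threshold:
        return driver_cmd  # do no harm: stay out of the driver's way
    # scale the intervention with how far the risk exceeds the threshold
    alpha = min(1.0, (collision_risk - risk_threshold) / (1.0 - risk_threshold))
    steering = (1 - alpha) * driver_cmd[0] + alpha * safe_cmd[0]
    throttle = (1 - alpha) * driver_cmd[1] + alpha * safe_cmd[1]
    return (steering, throttle)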

There are two philosophies driving research and development in autonomous driving: series autonomy and parallel autonomy. The latter concerns developing driver-assist technologies where the driver is at the wheel, but the driver-assist system monitors what the driver does and intervenes as needed

Today’s series-autonomy solutions operate in closed environments (defined by the roads on which the vehicle can drive). The autonomy recipe starts by augmenting the vehicle with drive-by-wire control and sensors such as cameras and laser scanners. The sensors are used to create maps, to detect moving obstacles such as pedestrians and other vehicles, and to localize the vehicle in the world. The autonomous driving solutions are map-based and benefit from a decade of progress in the area of simultaneous localization and mapping (SLAM). The maps are constructed by driving the autonomous vehicle on every possible road segment, collecting features with the sensors. The maps are used for each subsequent autonomous drive: to plan a path from start to goal, to execute the path while avoiding obstacles, and to localize the vehicle as it executes the path.
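
To make the map-based planning step concrete, here is a small, textbook A* path planner over an occupancy-grid map; it illustrates the general technique rather than the planner of any particular vehicle.

# A* search on an occupancy grid (True = blocked). Returns a list of cells
# from start to goal, or None if no path exists.
import heapq
import itertools

def plan_path(grid, start, goal):
    def heuristic(a, b):  # Manhattan distance, admissible on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    tie = itertools.count()  # tie-breaker so the heap never compares cells directly
    frontier = [(heuristic(start, goal), next(tie), start)]
    parent = {start: None}
    cost = {start: 0}
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node == goal:  # walk back through parents to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                new_cost = cost[node] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    parent[nxt] = node
                    heapq.heappush(frontier, (new_cost + heuristic(nxt, goal), next(tie), nxt))
    return None  # goal unreachable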

Most self-driving car companies only test their fleets in major cities where they have developed detailed 3D maps that are meticulously labeled with the exact positions of things like lanes, curbs, and stop signs. These maps include environmental features detected by the sensors of the vehicle. The maps are created using 3D LIDAR systems that rely on light to scan the local space, accumulating millions of data points and extracting the features defining each place.

If we want self-driving cars to be a viable global technology, this reliance on detailed prior maps is a problem. Today’s autonomous vehicles are not able to drive in rural environments where we do not have maps—in other words, on the millions of miles of roads that are unpaved, unlit, or unreliably marked. At MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), we began developing MapLite as a first step toward enabling self-driving cars to navigate on roads that they have never been on before using only GPS and sensors. Our system combines GPS data—like the kind you would find on Google Maps—with data taken from LIDAR sensors. Together, these two elements allow us to autonomously drive a car on multiple unpaved country roads and reliably detect the road more than 100 feet (30 meters) in advance. Other researchers have been working on different map-less approaches with varying degrees of success. Methods that use perception sensors like LIDAR often have to rely heavily on road markings or make broad generalizations about the geometry of road curbs. Meanwhile, vision-based approaches can perform well in ideal conditions, but have issues when there is adverse weather or bad lighting. In terms of “Level 5 autonomy”—that is, autonomy anywhere, any time—we are still some years away, and this is because of both technical and regulatory challenges.
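
The following is a heavily simplified sketch of the general idea of combining coarse GPS guidance with locally sensed road structure. It is illustrative only, with invented names and gains, and is not the MapLite implementation.

import math

# Steer toward the next coarse GPS waypoint, corrected by the locally
# detected road center; the gains fold in unit conversions and are arbitrary.
def steering_command(pose, next_waypoint, road_center_offset, k_wp=0.6, k_road=0.4):
    """pose: (x, y, heading); next_waypoint: (x, y) from a coarse road map;
    road_center_offset: lateral offset of the detected road center, positive
    if the road center lies to the left of the vehicle."""
    desired = math.atan2(next_waypoint[1] - pose[1], next_waypoint[0] - pose[0])
    # wrap the heading error into (-pi, pi]
    heading_error = math.atan2(math.sin(desired - pose[2]), math.cos(desired - pose[2]))
    return k_wp * heading_error + k_road * road_center_offset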

Autonomous vehicles can take many different forms, including golf carts, wheelchairs, scooters, luggage, shopping carts, garbage bins and even boats. These technologies open the door to a vast array of new products and applications

While progress has been significant on the technical side, getting policy to catch up has been an understandably complex and incremental process. Policy makers are still debating the level at which autonomous vehicles should be regulated. What kinds of vehicles should be allowed on the road, and who is allowed to operate them? How should safety be tested, and by whom? How might different liability regimes shape the timely and safe adoption of autonomous vehicles, and what are the trade-offs? What are the implications of a patchwork of state-by-state laws and regulations, and what are the trade-offs in harmonizing these policies? To what extent should policy makers encourage the adoption of autonomous vehicles, for example through smart-road infrastructure, dedicated highway lanes, or manufacturer and consumer incentives? These are complex issues regarding the use of autonomous vehicles on public roads. At the same time, a form of autonomy that is already deployable now is “Level 4 autonomy,” defined as autonomy in some environments some of the time. The technology is here for autonomous vehicles that can drive in fair weather, on private ways, and at lower speeds.

Environments such as retirement communities, campuses, hotel properties, and amusement parks can all benefit from the Level 4 autonomy technologies. Autonomous vehicles can take many different forms, including golf carts, wheelchairs, scooters, luggage, shopping carts, garbage bins, and even boats. These technologies open the door to a vast array of new products and applications, from mobility on demand, to autonomous shopping and transportation of goods, and more efficient mobility in hospitals. Everyone would benefit from transportation becoming a widely available utility, but those benefits will have a particular impact on new drivers, our senior population, and people affected by illness or disability.

The technology that is enabling autonomy for cars can have a very broad societal impact. Imagine residents of a retirement community being transported safely by automated golf carts. In the future, we will be able to automate anything on wheels—not just the vacuum cleaners of today, but also lawn mowers or even garbage cans.

If we want self-driving cars to be a viable global technology, this reliance on detailed prior maps is a problem. Today’s autonomous vehicles are not able to drive in rural environments where we do not have maps

The same technology that will enable this level of automation could even be put to use to help people dealing with disabilities—like the blind—experience the world in ways never before possible. Visual impairment affects approximately 285 million people worldwide, people who could benefit enormously from increased mobility and robotic assistance. This is a segment of the population that technology has often left behind or ignored, but, in this case, technology could make all the difference. Wearable devices that include the sensors used by self-driving cars and run autonomy software could enable visually impaired people to experience the world safely and in ways that are much richer than the walking stick.

Robotics will change the way we transport people and things in the very near future. But, soon after, it will do more than deliver things on time; it will also enable us to produce those things quickly and locally.

Challenges in Robotics

Despite recent and significant strides in the field, and promise for the future, today’s robots are still quite limited in their ability to figure things out, their communication is often brittle, and it takes too much time to make new robots. Broad adoption of robots will require a natural integration of robots in the human world rather than an integration of humans into the machines’ world.

Reasoning

Robots can only perform limited reasoning because their computations are carefully specified. For today’s robots, everything is spelled out with simple instructions and the scope of the robot is entirely contained in its program. Tasks that humans take for granted, for example asking the question “Have I been here before?”, are notoriously difficult for robots. Robots record the features of the places they have visited. These features are extracted from sensors such as cameras or laser scanners. It is hard for a machine to differentiate between features that belong to a scene the robot has already seen and a new scene that happens to contain some of the same objects. In general, the data collected from sensors and actuators is too big and too low level; it needs to be mapped to meaningful abstractions for robots to be able to use the information effectively. Current machine-learning research on big data is addressing how to compress a large dataset to a small number of semantically meaningful data points. Summarization can also be used by robots: for example, robots could summarize their visual history to significantly reduce the number of images required to determine whether “I have been here before.”
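
A toy sketch of this kind of summarization-based place recognition might look as follows; the image descriptors are assumed to be computed elsewhere, and the thresholds are arbitrary.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Keep a small set of representative "keyframe" descriptors and answer
# "have I been here before?" by comparing against that compressed summary.
class PlaceMemory:
    def __init__(self, match_threshold=0.9, novelty_threshold=0.7):
        self.keyframes = []  # compressed visual history
        self.match_threshold = match_threshold
        self.novelty_threshold = novelty_threshold

    def have_i_been_here(self, descriptor):
        best = max((cosine_similarity(descriptor, k) for k in self.keyframes), default=0.0)
        if best < self.novelty_threshold:
            self.keyframes.append(descriptor)  # novel place: extend the summary
        return best >= self.match_threshold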

Additionally, robots cannot cope with unexpected situations. If a robot encounters a case it was not programmed to handle or is outside the scope of its capabilities, it will enter an error state and halt. Often the robot cannot communicate the cause of the error. For example, vacuum-cleaning robots are designed and programmed to move on the floor, but cannot climb stairs.

French Navy searchers and ROVs (remotely operated vehicles) participate in Operation Moon, exploring the wreck of Louis XIV’s flagship of the same name at a depth of ninety meters

Robots need to learn how to adjust their programs, adapting to their surroundings and the interactions they have with people, with their environments and with other machines. Today, everybody with Internet access has the world’s information at their fingertips, including machines. Robots could take advantage of this information to make better decisions. Robots could also record and use their entire history (for example, output of their sensors and actuators), and the experiences of other machines. For example, a robot trained to walk your dog could access the weather report online, and then, based on previous walks, determine the best route to take. Perhaps a short walk if it is hot or raining, or a long walk to a nearby park where other robotic dog walkers are currently located. All of this could be determined without human interaction or intervention.
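
As a toy illustration of this kind of decision-making, consider a hypothetical dog-walking routine that consults an online weather service and the robot’s own log of previous walks (all names and thresholds are invented for illustration):

def choose_walk_route(weather, past_routes, park_has_other_walkers):
    """weather: dict with 'raining' and 'temperature_c';
    past_routes: list of (route_name, dog_enjoyment_score) from earlier walks."""
    if weather["raining"] or weather["temperature_c"] > 30:
        return "short-loop"  # keep the walk brief in bad weather or heat
    if park_has_other_walkers:
        return "park-route"  # social walk near other robotic dog walkers
    if not past_routes:
        return "short-loop"  # no history yet: default to the safe option
    # otherwise pick the route the dog has enjoyed most so far
    return max(past_routes, key=lambda r: r[1])[0]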

Communication

A world with many robots working together requires reliable communication for coordination. Despite advances in wireless communication, there are still impediments in robot-to-robot communication. The problem is that modeling and predicting communication is notoriously hard and any robot control method that relies on current communication models is fraught with noise. The robots need more reliable approaches to communication that guarantee the bandwidth they need, when they need it. To get resilient robot-to-robot communication, a new paradigm is to measure locally the communication quality instead of predicting it with models. Using the idea of measuring communication, we can begin to imagine using flying robots as mobile base-stations that coordinate with each other to provide planet-scale communication coverage. Swarms of flying robots could bring Internet access everywhere in the world.
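
A minimal sketch of the measure-rather-than-model idea, assuming a hypothetical robot API that can sample link quality and reposition itself:

# Measure the link locally instead of predicting it from a model, and move
# only when the measured quality drops below what the task needs.
def maintain_link(robot, neighbor, min_quality=0.6, samples=10):
    measured = sum(robot.measure_link_quality(neighbor) for _ in range(samples)) / samples
    if measured >= min_quality:
        return "hold-position"
    robot.move_toward(neighbor.position, step=1.0)  # close the gap and re-measure later
    return "repositioning"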

Communication between robots and people is also currently limited. While speech technologies have been employed to give robots commands in human language (for example, “move to the door”), the scope and vocabulary of these interactions is shallow. Robots could use the help of humans when they get stuck. It turns out that even a tiny amount of human intervention in the task of a robot completely changes the problem and empowers the machines to do more.

Currently, when robots encounter something unexpected (a case for which they were not programmed), they get stuck. Suppose, instead of just getting stuck, the robot were able to reason about why it is stuck and enlist human help. For example, recent work on using robots to assemble IKEA furniture demonstrates that robots can recognize when a table leg is out of reach and ask humans to hand them the part. After receiving the part, the robots resume the assembly task. These are some of the first steps toward creating symbiotic human-robot teams in which robots and humans can ask each other for help.
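
A sketch of this ask-for-help pattern, written against a hypothetical robot API, might look like this:

# Instead of halting on an unexpected condition, explain what is needed and
# resume once a person has helped.
def fetch_part(robot, part):
    if robot.can_reach(part):
        return robot.pick_up(part)
    robot.say("I cannot reach the " + part.name + ". Could you hand it to me?")
    robot.wait_until(lambda: robot.is_holding(part))
    return part  # assembly can now resume where it left off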

Design and Fabrication

Another great challenge with today’s robots is the length of time it takes to design and fabricate new robots. We need to speed up the creation of robots. Many different types of robots are available today, but each of these robots took many years to produce. The computation, mobility, and manipulation capabilities of robots are tightly coupled to the body of the robot—its hardware system. Since today’s robot bodies are fixed and difficult to extend, the capabilities of each robot are limited by its body. Fabricating new robots—add-on robotic modules, fixtures, or specialized tools to extend capabilities—is not a real option, as the process of design, fabrication, assembly, and programming is long and cumbersome. We need tools that will speed up the design and fabrication of robots. Imagine creating a robot compiler that takes as input the functional specification of the robot (for example, “I want a robot to play chess with me”) and computes a design that meets the specification, a fabrication plan, and a custom programming environment for using the robot. Many tasks big and small could be automated by rapid design and fabrication of many different types of robots using such a robot compiler.

Toward Pervasive Robotics

There are significant gaps between where robots are today and the promise of pervasive integration of robots in everyday life. Some of the gaps concern the creation of robots—how do we design and fabricate new robots quickly and efficiently? Other gaps concern the computation and capabilities of robots to reason, change, and adapt for increasingly more complex tasks in increasingly complex environments. Other gaps pertain to interactions between robots, and between robots and people. Current research directions in robotics push the envelope in each of these directions, aiming for better solutions to making robots, controlling the movement of robots and their manipulation skills, increasing the ability for robots to reason, enabling semantic-level perception through machine vision, and developing more flexible coordination and cooperation between machines and between machines and humans. Meeting these challenges will bring robots closer to the vision of pervasive robotics: the connected world of many people and many robots performing many different tasks.

University student Andrew Marchese demonstrates the movement of a robotic fish during a show at MIT’s Artificial Intelligence Lab in April 2013. The robotic fish simulates the movement of living fish and employs the emerging field of soft robotics

Pervasive, customized robotics is a big challenge, but its scope is not unlike the challenge of pervasive computing, which was formulated about twenty-five years ago. Today we can say that computing is indeed pervasive, it has become a utility, and is available anywhere, anytime. So, what would it take to have pervasive integration of robots in everyday life? Mark Weiser, who was a chief scientist at Xerox PARC and is widely referred to as the father of ubiquitous computing, said of pervasive computing that: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”

For example, electricity was once a novel technology and now it is a part of life. Robotic technologies have the potential to join the personal computer and electricity as pervasive aspects of everyday life. In the near future, robotic technologies will change how we think about many aspects of everyday life.

There are significant gaps between where robots are today and the promise of pervasive integration of robots in everyday life. These gaps concern the creation of robots, their computation and capacity to reason, change, and adapt for increasingly more complex tasks in increasingly complex environments, and their capacity to interact with people

Self-driving car fleets have the potential to turn transportation into a utility, with customized rides available anywhere, anytime. Public transportation could become a two-layer system: a network of large vehicles (for example, trains, buses) providing backbone transportation for many people over long distances, and fleets of transportation pods providing the customized transportation needs of individuals for short hops. Such a transportation network would be connected to the IT infrastructure and to people to provide mobility on demand. The operation of the backbone could include dynamically changing routes to adapt to people’s needs. Real-time and historical transportation data are already used to determine optimal bus routes and the locations of stops at a fine granularity. Mobility on demand may be facilitated by state-of-the-art technologies for self-driving vehicles. Taking a driverless car for a ride could be as easy as using a smartphone. The robot pods would know when people arrive at a station, where the people are who need a ride now, and where the other robot pods are. After driving people to their destination, the robot pods would drive themselves to the next customer, using demand-matching and coordination algorithms to optimize the operations of the fleet and minimize people’s waiting time. Public transportation would be convenient and customized.
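
As a simple illustration, a greedy demand-matching step for such a fleet could look like the following sketch; real fleets would use far more sophisticated optimization and prediction.

# Greedily assign each waiting rider to the nearest free pod to keep waiting
# times short; riders are assumed to be sorted by request time.
def match_pods_to_riders(pods, riders, distance):
    """pods, riders: objects with a .position attribute;
    distance: function returning travel time between two positions."""
    assignments = {}
    free_pods = list(pods)
    for rider in riders:
        if not free_pods:
            break
        best = min(free_pods, key=lambda p: distance(p.position, rider.position))
        assignments[rider] = best
        free_pods.remove(best)
    return assignments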
