In a laboratory on the University of Bridgeport campus, a pair of engineers waits patiently as a whirring 3-D printer guides melted filament into something that looks a lot like the head of a kid’s Transformer action toy. Later, when the printer comes to a halt and the cooling filament solidifies into hard plastic, the two will insert miniature camera lenses into the head’s vacant eye sockets.
The lenses, five megapixels each and 1.8 millimeters in circumference, are about one-tenth the size of the lenses found in a typical digital camera. Nonetheless, they are part of a powerful project involving the fast-evolving fields of machine learning and artificial intelligence (AI).
“It’s a race,” said Adham Baioumy, showing off the head he’s just pulled from the 3-D printer. “We want to build the first fully autonomous humanoid.”
Baioumy is a mechanical engineering major from the University of Washington, but he’s come to Connecticut to develop a next-generation robot with UB engineering research postdoctoral fellow Ahmed El-Sayed, PhD.
Though not yet fully assembled, the robot nonetheless looks as personable and helpful as it is designed to be. As for its name?
“It will either be ‘HERO’ for Humanoid Emergency Response Operator—” says Baioumy.
“—or ‘HORUS,’ for Humanoid Operated Response for Unmanned Search,” finishes El-Sayed.
In plainer terms: HERO/HORUS is destined to save lives and take other sophisticated actions during fires, floods, or other disasters.
In this, the robot is not unique. Emergency responders have deployed drones, robots, and other disaster technology for years. After Hurricane Irene, for example, robots equipped with cameras located victims stranded on rooftops and trapped in flooded homes. The problem? Control systems sometimes fail. “There are robots, but they are remotely controlled over a server,” El-Sayed explains. “But if a server goes down, the robot doesn’t work.”
To bypass such glitches, El-Sayed and Baioumy are hoping to develop autonomous disaster technology that is truly revolutionary. Instead of a robot tricked out with control systems, they want HERO/HORUS to move, respond, and even think without assistance.
“We want it to work entirely on its own,” says El-Sayed, “like a human.”
Consider their design. The robot’s multi-jointed legs bend in eight places. “That gives it a higher degree of freedom, so it can climb stairs and go through difficult terrain without getting stuck. A wheeled vehicle won’t be as nimble,” says Baioumy.
Its Transformer-like head, now fresh off the printer, swivels 180 degrees, left to right. Rotator cuffs and ankles turn a full 360 degrees. Lobster-like pincers grab and hold objects a tad more clumsily than a human hand.
Engineering and building the robot’s hard, moving parts is relatively easy, says Baioumy. Developing its intangible decision-making capabilities, however, involves exploration into the areas of AI and machine learning. Those fields, filling with engineering pioneers hoping to chart new paths, rely upon algorithms to acquire, sift through, and process data before independently predicting outcomes without the assistance of human hard-coding.
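The core idea — a system that learns a rule from data instead of having it hard-coded — can be illustrated with a minimal sketch. The sensor readings, labels, and thresholds below are hypothetical examples for illustration only, not part of the HERO/HORUS codebase:

```python
# A minimal sketch of learning from data: instead of hand-writing a rule,
# a simple perceptron adjusts its own weights from labeled examples.
# All data and variable names here are invented for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear decision rule from labeled examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 if correct, ±1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "sensor" readings: [infrared reflectance, ultrasonic echo strength].
# Label 1 = obstacle ahead, 0 = clear path. Values are invented.
samples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]

w, b = train_perceptron(samples, labels)
print(predict(w, b, [0.85, 0.9]))  # strong readings -> 1 (obstacle)
print(predict(w, b, [0.15, 0.1]))  # weak readings -> 0 (clear)
```

No one tells the program where to draw the line between “obstacle” and “clear”; the boundary emerges from the examples — the same principle, at toy scale, that the article describes.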
“It’s thinking,” sums up El-Sayed, who is programming hundreds of algorithms to run an ARM core processor and a modern neural chip that will act as HERO’s brain core.
El-Sayed cites one possible scenario, a goal, for their work. “Let’s say HERO/HORUS is stuck in a burning room and runs up against glass. Based on data retrieved from sensors, such as the minuscule lenses affixed to its eyes, the processor determines that the glass is a barrier,” he says. “Data then is relayed to the neural processor, which determines that HERO should try to find a way around the barrier or break it. When the window breaks, oxygen is added to the fire. Sensors send data to the processor, which determines the fire has increased in size. But it’s not until this data reaches the neural processor that HERO/HORUS might learn that breaking a window in a fire can be a bad choice.”
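The feedback loop El-Sayed describes — try an action, observe the outcome through sensors, and downgrade actions that made things worse — resembles reinforcement learning in miniature. The sketch below is an illustrative assumption of how such an update could look; the action names, reward values, and learning rate are all hypothetical:

```python
# A minimal sketch of outcome-driven learning, loosely in the spirit of
# the burning-room scenario. Actions, rewards, and values are invented.

action_values = {"break_window": 0.5, "find_way_around": 0.5}

def update(action, reward, lr=0.5):
    """Nudge the action's estimated value toward the observed reward."""
    action_values[action] += lr * (reward - action_values[action])

# Simulated episode: breaking the window feeds oxygen to the fire.
update("break_window", reward=-1.0)
# Going around the barrier avoids the flare-up.
update("find_way_around", reward=1.0)

best = max(action_values, key=action_values.get)
print(best)  # prints "find_way_around"
```

After a single bad outcome, the estimated value of breaking the window drops below that of going around — the toy-scale analogue of the robot “learning that breaking a window in a fire can be a bad choice.”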
It’s a complicated vision, one that may sound like a sci-fi movie, but it is coming in the foreseeable future, says Tarek Sobh, UB’s executive vice president and dean of the College of Engineering, Business and Education.
“Humanoid robots are eventually going to replace manual labor altogether. Many organizations are working toward replacing humans with machines, even within the academic arena,” says Sobh, who is also the founding director of the Interdisciplinary Robotics, Intelligent Sensing, and Control (RISC) Laboratory.
At Boston Dynamics in Massachusetts, for instance, engineers have created SpotMini, an autonomous robotic dog that opens doors. Creating a HERO/HORUS capable of performing not one but numerous independent decisions without any prompting from a human operator is bound to be more complicated.
“It will require giga- if not terabytes of data and hundreds of algorithms. A human can easily determine when to break a window—say, if there’s a need to save a life. For the robot, it will require large amounts of data, training, and design scenarios that will help the machine train on its own. Eventually, several generations of autonomous humanoid robots would be developed based on contextual learning,” El-Sayed says. “This is a long-term project.”
Media contact: Leslie Geary, (203) 576-4625, firstname.lastname@example.org