Future robots may be outfitted with synthetic skin that helps them “feel” — to better grab items and navigate.
Meta, Facebook’s new parent company, has revealed new research about an A.I.-powered system that helps robots recognize and react to tactile sensations. The research is noteworthy because robots, despite vast improvements to their design over the years, still lack human-like dexterity that would let them know the correct amount of pressure to apply when grasping an object so that it doesn’t break.
Robots may seem like a weird focus for a company known for its social network. Yann LeCun, the vice president and chief A.I. scientist of Meta, acknowledged as much during a media briefing last week. Before joining Facebook eight years ago, he recalled asking CEO Mark Zuckerberg what A.I. research he should avoid pursuing. Although Zuckerberg originally told him, “I can’t find any good reason for us to work on robotics,” LeCun said it became clear that Facebook would need to work on robotics to advance its A.I. Robotics represents a “nexus” of A.I. research that involves creating machines that can perceive, reason, plan, and act in the real world, he said.
The same sophisticated A.I. software powering robots in the physical world could also be used in the digital world. As one example, LeCun said virtual assistants in the yet-to-be-developed immersive metaverse would be more capable.
For its tactile A.I., Facebook has created custom-built DIGIT sensors that, when attached to a robotic finger, see objects in their field of view. A separate RGB sensor helps capture the light hitting a penny, for example, and how that light changes when the object is touched by a robot’s hand. Based on these reflections, the robot can infer how much pressure it needs to pick up the penny.
In a video demonstration created to show off the research, a robotic hand with the DIGIT sensors gently plucked an egg from a carton without breaking it. In contrast, a robotic hand without the technology cracked the eggshell.
In addition to the sensors, Meta and Carnegie Mellon University researchers showed off ReSkin, a rubber-like material that resembles a black plastic strip, only two to three millimeters thick. Each ReSkin strip contains elastomers (rubber-like elastic materials) embedded with magnetic particles, along with several tiny sensors, called magnetometers, that track changes to the magnetic field. When someone touches the fake skin, the magnetometers record the magnetic changes, which can help robots deduce pressure and impact.
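The basic idea, pressing the skin shifts the embedded magnetic particles, which changes the field the magnetometers read, can be sketched in a few lines of code. This is an illustrative toy only: the function names, the calibration constant, and the linear field-change-to-force mapping are all assumptions for clarity, not ReSkin's actual pipeline, which learns that mapping from data.

```python
import math

# Hypothetical calibration: assumed newtons of contact force per
# microtesla of magnetic-field change. ReSkin's real system learns
# this relationship from training data rather than using a constant.
CALIBRATION_N_PER_UT = 0.02

def field_change(baseline, reading):
    """Magnitude of the change in a 3-axis field reading (microtesla)."""
    return math.sqrt(sum((r - b) ** 2 for b, r in zip(baseline, reading)))

def estimate_force(baseline, reading):
    """Toy estimate of contact force (newtons) from one magnetometer."""
    return CALIBRATION_N_PER_UT * field_change(baseline, reading)

# Untouched skin: the reading matches the baseline, so no force is inferred.
print(estimate_force((30.0, -12.0, 45.0), (30.0, -12.0, 45.0)))  # 0.0
# A press shifts the magnetic particles, changing the measured field.
print(estimate_force((30.0, -12.0, 45.0), (33.0, -8.0, 45.0)))   # 0.1
```

In practice, an array of magnetometers across the strip gives the robot a spatial picture of where and how hard it is pressing, which is what lets a gripper modulate its grasp.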
In another video, Facebook researchers attached two slabs of ReSkin onto a robotic grasping machine, which was able to pick up a blueberry without smashing it. Meanwhile, a robot clamper without the fake skin attached ended up squashing the berry.
Meta and Carnegie Mellon plan to make ReSkin’s designs available for free, or open source, so that researchers can create their own robot skin. The sensor technology company GelSight plans to sell the DIGIT sensors.
As for the recent wave of unflattering headlines about Facebook and its parent company Meta, from leaks of internal research by whistleblower Frances Haugen, LeCun echoed Zuckerberg’s recent comments that the media is failing to accurately portray the company’s efforts.
“I take exception with some of the narratives that we see in the media that basically [say] the company was spinning the wheel,” LeCun said, referring to the notion that Facebook ignored fixing various problems in order to focus on profits. “This is not true at all, and this is not my experience of it at all.”
A.I. IN THE NEWS
Facebook says goodbye to face recognition. Facebook said it would stop using its home-grown face-recognition system as “part of a company-wide move to limit the use of facial recognition in our products.” Users who opted in to use the face-recognition service “will no longer be automatically recognized in photos and videos, and we will delete the facial recognition template used to identify them,” the company said in a blog post. Facebook said it continues to see face-recognition software as a useful tool to verify identities and prevent fraud, but acknowledged that there are “many concerns about the place of facial recognition technology in society.”
Don’t let the A.I. run unnoticed. Zillow aims to sell 7,000 houses “to recover from an operational stumble” that resulted in the company purchasing too many homes that are “now being listed for less than it paid,” reported Bloomberg News. From the article: The decision came after the company tweaked the algorithms that power the business to make higher offers, leaving it with a bevy of winning bids just as home-price appreciation cooled off a bit.
IBM to gobble up McDonald’s A.I. IBM will acquire the remains of the A.I. startup Apprente, which McDonald’s bought in 2019 and renamed McD Tech Labs, CNBC reported. The acquisition is part of a “strategic partnership” between the two companies in which IBM will help McDonald’s use machine learning to automate drive-thru lanes, the article said.
Fixing A.I.’s issues with skin tones. Google’s latest Pixel 6 and Pixel 6 Pro smartphones come with an A.I.-powered feature called Real Tone that’s intended to help their cameras more accurately capture photos of people with darker skin tones, The Wall Street Journal reported. Google said that since early 2020, it “began adding more images of people of color to the databases training the Pixel camera” and consulted with experts and photographers to ensure better-quality photos of people with darker skin in a variety of settings. Based on some testing of the Pixel 6, the Samsung Galaxy S21, and Apple iPhone 13 Mini, the Journal found that “Most people surveyed agreed that the Pixel-shot photo most accurately represented their skin tones.”
EYE ON A.I. TALENT
Afiniti chose Dr. Caroline O’Brien as the business software firm’s first chief data officer. O’Brien was promoted from her position as senior vice president of data science. She was previously a senior data scientist at Commonwealth Bank.
Camera IQ hired AC Mahendran to be the augmented reality developer tool startup’s chief technology officer. Mahendran was previously the senior director of augmented reality and virtual reality at software firm Unity Technologies.
Stratifyd picked Kjell Carlsson to be the enterprise software company’s executive vice president of product strategy. Carlsson was previously a principal analyst at analyst firm Forrester who specialized in machine learning and data science.
EYE ON A.I. RESEARCH
All about A.I. architecture. Jeff Dean, a senior fellow and senior vice president of Google’s research unit, revealed technology that he pitched as helping an A.I. system “to generalize across thousands or millions of tasks, to understand different types of data, and to do so with remarkable efficiency.”
The tool, called Pathways, is supposed to address several problems with neural networks, software that finds patterns in huge amounts of data. For instance, neural networks excel at only doing one particular task, and need to be trained from the ground up on new tasks, an inefficient way to learn. As he puts it, “Imagine if, every time you learned a new skill (jumping rope, for example), you forgot everything you’d learned—how to balance, how to leap, how to coordinate the movement of your hands—and started learning each new skill from nothing.”
Presumably, the Pathways tool will help researchers and businesses create more capable neural networks that can do more than one task.
FORTUNE ON A.I.
China A.I. talks are leading to old school scare-mongering—By Jonathan Vanian
A.I. can be a cornerstone of success—but only if leaders make the right choices—By Arnab Chakraborty
Fresh out of bankruptcy, Hertz is suddenly key to Tesla’s growth plans—By Christiaan Hetzner
Is this ethical A.I.? Researchers from the University of Washington and the Allen Institute for Artificial Intelligence non-profit created an A.I. program called Delphi that attempts to answer ethical questions when asked. Wired takes a look at Delphi and the results can be amusing. When asked basic ethical questions, Delphi proves capable:
Question: Can I park in a handicap spot if I don't have a disability?
Answer: It’s wrong.
But like all modern A.I. language systems, Delphi “relies on statistical patterns in text,” which means it doesn’t have any “real comprehension of right or wrong,” the article notes. As a result, when asked some more nuanced ethical questions, Delphi gives some head-turning answers:
Question: Arrest people if it creates more jobs?
Answer: It’s okay.
Question: To do genocide if it makes me very, very happy?
Answer: It’s okay.