Weekly analysis at the intersection of artificial intelligence and industry.

November 30, 2021

As an American living in England, it is sometimes hard to get into the Thanksgiving spirit. After all, the fourth Thursday in November is just another working day here in the U.K. Although turkey and cranberries are a staple at British Christmas dinners, they aren’t yet easy to find in the supermarkets a month out from that holiday. And pumpkin pie? Forget about it.

So it was a nice treat this year to find myself, just days before Thanksgiving, in Plymouth, England, the city from which the Pilgrims departed for the New World in September 1620, spending time talking with another American about their historic voyage. Okay, about now, you are starting to think, what does this have to do with A.I.?

Well, the American I was speaking with is Brett Phaneuf, an entrepreneur who, among a bunch of other things, helps run a charity called Promare that is devoted to ocean science and marine archaeology. And among Promare’s current projects—one that has taken up much of its time and $1.2 million of its funds over the past five years—is building an autonomous ship, captained by A.I. software, that will sail from Plymouth, England, to Plymouth, Mass., to commemorate the Pilgrims’ journey. If the ship, which Promare has christened The Mayflower Autonomous Ship (MAS for short), succeeds, it will complete the first fully autonomous trans-Atlantic crossing, a milestone with potentially big implications.

The high-tech, 49-foot aluminum trimaran was supposed to have been ready for the 400th anniversary of the original Mayflower’s sailing in 2020, but the COVID-19 pandemic delayed its completion. A belated crossing attempt this past summer had to be aborted when, after three days at sea, an exhaust pipe for MAS’s diesel generator ruptured, knocking out its primary means of charging its battery. Phaneuf points out that MAS’s A.I. worked fine; it was a purely mechanical failure.


Phaneuf and Promare are planning another attempt for this spring. The ship showcases some pioneering A.I. software for marine navigation that Phaneuf and his technological right-hand, Don Scott, built with help from IBM, as well as some innovative ocean research projects IBM developed. You can read more about MAS and its attempted crossing in my Fortune story here.

From an A.I. perspective, there are some valuable lessons for others developing A.I. systems:


Data quality matters. The Promare team used an off-the-shelf computer-vision algorithm from IBM for some of its image segmentation and object detection, but to get the A.I. to classify those objects it had to develop a bespoke dataset. Scott told me Promare assembled a training set of millions of images of things that float in the sea, from buoys to boats to crab pots, captured in all kinds of weather and lighting conditions.
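Here is a rough, hypothetical sketch of what curating such a dataset can involve. The class names, metadata fields, and coverage check are assumptions for illustration, not Promare's actual pipeline; the idea is simply to record conditions alongside each label so gaps in coverage are easy to spot.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class MarineImage:
    """One labeled training example; this schema is hypothetical."""
    path: str
    label: str     # e.g. "buoy", "sailboat", "crab_pot"
    weather: str   # e.g. "fog", "rain", "clear"
    lighting: str  # e.g. "dawn", "midday", "night"

def coverage_report(manifest):
    """Count examples per (label, weather, lighting) combination so that
    under-represented conditions can be targeted for more data collection."""
    return Counter((img.label, img.weather, img.lighting) for img in manifest)

manifest = [
    MarineImage("img_0001.jpg", "buoy", "fog", "dawn"),
    MarineImage("img_0002.jpg", "crab_pot", "clear", "midday"),
    MarineImage("img_0003.jpg", "sailboat", "rain", "night"),
]
for combo, count in coverage_report(manifest).items():
    print(combo, count)
```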


Think creatively about how to obtain data. One thing that often holds back the use of A.I. in industry is a lack of real-world training data and of training environments from which an A.I. system can safely learn. For Promare, the solution lay in putting video cameras and other sensors on piers and buildings around Plymouth Harbor, both to capture images of what was happening on the water and, later, to train its A.I. captain how to respond to approaching ships without actually putting MAS in any jeopardy.


There’s an advantage in a component-based system. MAS’s A.I. captain is really what Phaneuf calls “a nested stack” of A.I. software: one piece handles computer vision from MAS’s cameras, another analyzes its radar, another navigates using GPS, another optimizes power consumption among all of the various components making power demands on the battery, etc. Sitting on top of all of this is a decision-making engine that ultimately decides what course the boat should set and how to throttle its engine. The advantage here is that this decision-making captain can still function even if one of the underlying systems it uses for perception and location data fails. What’s more, the decisions the A.I. captain makes are explainable: an engineer can go back and audit what the A.I. captain’s inputs were, how they were weighed against one another, and what action the ship ultimately took. It is not a black box, as a single neural network might be.
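A minimal sketch of what such a nested stack might look like in code follows. The module names, fallback behavior, and avoidance rule are illustrative assumptions, not the actual MAS software; the point is that the top-level engine keeps working when an input is missing and logs every decision alongside the inputs behind it.

```python
from datetime import datetime, timezone
from typing import Optional

class VisionModule:
    def hazard_bearing(self) -> Optional[float]:
        """Bearing (degrees) of the nearest visual hazard, or None if the camera feed is down."""
        return 42.0

class RadarModule:
    def hazard_bearing(self) -> Optional[float]:
        return 40.0

class GPSModule:
    def course_to_waypoint(self) -> Optional[float]:
        return 95.0

class CaptainEngine:
    """Top-level decision maker: tolerates the failure of any single perception
    module and records every decision with its inputs for later audit."""

    def __init__(self, vision, radar, gps):
        self.vision, self.radar, self.gps = vision, radar, gps
        self.audit_log = []

    def decide_heading(self) -> float:
        inputs = {
            "vision_hazard": self.vision.hazard_bearing(),
            "radar_hazard": self.radar.hazard_bearing(),
            "waypoint_course": self.gps.course_to_waypoint(),
        }
        heading = inputs["waypoint_course"] if inputs["waypoint_course"] is not None else 0.0
        hazards = [b for b in (inputs["vision_hazard"], inputs["radar_hazard"]) if b is not None]
        # Crude illustrative avoidance: if a hazard lies near the planned course, alter course.
        if hazards and abs(sum(hazards) / len(hazards) - heading) < 60:
            heading += 30.0
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,
            "chosen_heading": heading,
        })
        return heading

captain = CaptainEngine(VisionModule(), RadarModule(), GPSModule())
print(captain.decide_heading())   # still returns a heading even if a module reports None
print(captain.audit_log[-1])      # full record of what the decision was based on
```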


Sometimes hard-coded rules are necessary. One way to get human-level intelligence into a piece of software is to simply code in human knowledge as a set of rules. This was, after all, the main approach to artificial intelligence prior to the rise of neural networks. It turns out that, in some situations, this kind of A.I. is exactly what you want. For instance, how MAS steers when on a course that might result in a collision with another vessel is governed by seafaring “rules of the road.” These rules, known as the COLREGs, are hard-coded into the decision-making software, Scott says, although when there are multiple ways to avoid a collision, an IBM algorithm, adapted from the finance industry, helps find the optimal decision.
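As a toy sketch of how hard-coded rules and an optimizer can divide the work (the rule, candidate maneuvers, and cost function below are simplified stand-ins, not the actual IBM or Promare implementation): the rules eliminate maneuvers that would violate the collision regulations, and a scoring step picks the cheapest legal option.

```python
# Toy collision-avoidance chooser: hard-coded rules filter out illegal maneuvers,
# then an optimization step (here, a simple cost function) picks among the legal ones.
CANDIDATES = [
    {"name": "hold_course", "turn_deg": 0, "extra_distance_nm": 0.0},
    {"name": "turn_starboard", "turn_deg": 30, "extra_distance_nm": 0.8},
    {"name": "turn_port", "turn_deg": -30, "extra_distance_nm": 0.8},
    {"name": "slow_down", "turn_deg": 0, "extra_distance_nm": 0.2},
]

def is_legal(maneuver, situation):
    """Simplified stand-in rule: in a head-on situation the rules of the road expect
    an alteration to starboard, so holding course or turning to port is disallowed."""
    if situation == "head_on":
        return maneuver["turn_deg"] > 0
    return True

def cost(maneuver):
    """Cheaper maneuvers (less added distance, smaller turns) score better."""
    return maneuver["extra_distance_nm"] + 0.01 * abs(maneuver["turn_deg"])

def choose_maneuver(situation):
    legal = [m for m in CANDIDATES if is_legal(m, situation)]
    return min(legal, key=cost)

print(choose_maneuver("head_on")["name"])   # -> turn_starboard
```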

With that, here’s the rest of this week’s A.I. news.


Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com



A.I. IN THE NEWS


U.K. data privacy watchdog considers fining Clearview AI. The U.K. Information Commissioner's Office put out a statement on Monday saying it was considering fining the controversial New York-based facial recognition software company £17 million ($22.6 million) over its alleged harvesting of vast troves of facial images, including those of U.K. citizens, from social media without consent. “I have significant concerns that personal data was processed in a way that nobody in the UK will have expected,” Elizabeth Denham, the Information Commissioner, said in a statement. Clearview's lawyers told tech publication The Register that Denham's "assertions are factually and legally incorrect.”


Contract lawyers, monitored at home by A.I. software, complain about its faults. Law firms are using A.I. facial recognition and other computer-vision software to safeguard sensitive documents and monitor the conduct of contract lawyers, who frequently work from home, especially since the COVID-19 pandemic. The firms have sought ways to remotely verify that these lawyers are working when they say they are and handling confidential client documents securely. But The Washington Post spoke to dozens of contract lawyers across the U.S. who said the A.I. software was faulty, failing to correctly register their faces, especially when the contract attorney was Black, and that this often prevented them from working. Even when the software worked correctly, it was "dehumanizing" to be constantly surveilled, the lawyers told the newspaper.


Dark-skinned people are poorly represented in the image datasets used to train skin cancer identification A.I. That was the conclusion of a study published in Lancet Digital Health earlier this month. Of the more than 38 open-access sources the researchers examined, the vast majority included no information about the ethnicity of the patients from whom the skin lesion images had been taken. Only 1.3% of the images had ethnicity data and only 2.1% had skin-type information. Of the 2,436 images where skin type was listed, just 10 were from individuals with brown skin and only a single image was from someone with dark brown skin. Among the images listing ethnicity, there were none from people of African, Afro-Caribbean, or South Asian background, the study found. This matters because A.I. systems tend to perform far worse on examples that were not well represented in their training data.


Alarm about autonomous weapons is growing. Stuart Russell, a computer scientist who is among the leading thinkers on how to mitigate the risks associated with more powerful A.I., will use a series of prestigious public lectures, broadcast on BBC Radio, to argue against the growing development of autonomous weapons systems, The Financial Times reported. Campaigners who are hoping to secure a United Nations ban on such weapons are accelerating their efforts ahead of a key UN meeting in Geneva next month, at which international regulation of A.I. weapons systems will be debated. There are also growing indications that these weapons are not only being developed, but are being deployed in actual combat. Russell told the newspaper, however, that the U.S., Russia, the U.K., Israel, and Australia all continue to oppose a ban. 


EYE ON A.I. TALENT


Kubient, a New York-based digital advertising software company, has hired Mitchell Berg to be its chief technology officer, the company said in a statement. Berg had been CTO at ad tech company Koddi.


Finitive, the New York-based credit marketplace, has hired Steve Yampolsky as head of engineering, the company said in a statement. He was previously at BNY Mellon. The company also said it was hiring Chris Benjamin as principal software architect. Benjamin was previously at CRE Simple.


EYE ON A.I. RESEARCH


A new foundational computer vision model from Microsoft. A large team of researchers from Microsoft has created a massive new multimodal language-understanding and computer-vision system, which they call Florence, that they believe can be "foundational"—that is, a single system that can underpin a wide variety of complex tasks without much additional training.


While other A.I. research groups have also recently developed these kinds of massive, foundational multimodal language-and-image models (OpenAI has CLIP, Google has ALIGN, and the Beijing Academy of Artificial Intelligence has Wu Dao), Microsoft says in a paper, published on the non-peer-reviewed research repository arxiv.org, that Florence does a few things the others cannot: it can identify individual objects in an image or video, not just understand the overall scene; it can analyze and understand video, not just still images; and it works with captions and three-dimensional context in images, not just pixel-level, two-dimensional image understanding.


The system was trained on 900 million image-text pairs, and the underlying neural network has some 893 million parameters. That sounds like a lot, but it is considerably smaller than many comparable foundational systems, and the smaller model should be easier and less expensive to train. The researchers said Florence achieves top marks on "the majority" of 44 different computer vision benchmark tests.
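For readers unfamiliar with how this class of model is typically built, the common recipe is contrastive image-text pretraining: embed images and their captions in a shared vector space, pushing matched pairs together and mismatched pairs apart. The sketch below shows that generic CLIP/ALIGN-style objective on toy data; it is not Florence's actual architecture or training code.

```python
import numpy as np

# Toy contrastive image-text objective, in miniature. Real foundational models use
# learned encoders over hundreds of millions of pairs; here the "embeddings" are
# random vectors just to show how the loss is computed.
rng = np.random.default_rng(0)
batch, dim = 4, 8
image_emb = rng.normal(size=(batch, dim))
text_emb = image_emb + 0.1 * rng.normal(size=(batch, dim))  # matched captions lie near their images

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(img, txt, temperature=0.07):
    """Symmetric InfoNCE loss: each image should match its own caption and vice versa."""
    logits = normalize(img) @ normalize(txt).T / temperature   # (batch, batch) similarity matrix
    targets = np.arange(batch)
    log_softmax_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_softmax_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    loss_i2t = -log_softmax_rows[targets, targets].mean()  # image -> matching text
    loss_t2i = -log_softmax_cols[targets, targets].mean()  # text -> matching image
    return (loss_i2t + loss_t2i) / 2

print(round(contrastive_loss(image_emb, text_emb), 4))
```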


FORTUNE ON A.I.


Who is Parag Agrawal, Twitter’s new CEO?—by Felicia Hou


An autonomous Mayflower aims to prove A.I.’s captain skills by sailing in the Pilgrims’ wake—by Jeremy Kahn


Rise of the (fast food) robots: How labor shortages are accelerating automation—Commentary by Michael Joseph




BRAIN FOOD


The U.K.'s new algorithmic transparency standard looks great on paper, but the test will be how it works in practice.


This past week the British government debuted a new standard on algorithmic transparency, joining France and the Netherlands among the first countries in the world to do so. The standard will be piloted by a few government departments and then rolled out more broadly across the U.K. public sector in 2022. While the rules apply only to the government, they may serve as a model for future private-sector regulation too. And, because nascent U.S. efforts to regulate A.I. have often taken inspiration from examples in Europe, it is worth looking at what the new standard says.


The standard requires government departments deploying algorithms to make public a good deal of information about each system: what kind of algorithm it is; how it is intended to work (and, critically, what the algorithm is not intended to do); and what data has been used to train or validate it. It asks each department to state whether it has carried out impact assessments covering data protection, the effect of the algorithm itself, the ethics of deploying the system, and its impact on equality. Most importantly, it asks departments to list what they see as the risks associated with the use of that algorithm and what steps have been taken to mitigate those risks.
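To make that concrete, here is a rough sketch of what one of these transparency records might look like as structured data. The field names paraphrase the description above and the example system is entirely hypothetical; the official U.K. template defines its own schema.

```python
# Hypothetical transparency record paraphrasing the fields described above;
# the example system and all of its details are invented for illustration.
algorithmic_transparency_record = {
    "algorithm_type": "supervised classification model",
    "intended_use": "flag benefit claims for additional human review",
    "explicitly_not_intended_for": "making final decisions without a human in the loop",
    "training_and_validation_data": "historical case outcomes, 2015-2020",
    "impact_assessments_completed": {
        "data_protection": True,
        "algorithmic_impact": True,
        "ethics_review": True,
        "equality_impact": True,
    },
    "risks_and_mitigations": [
        {
            "risk": "model performs worse for groups under-represented in the training data",
            "mitigation": "quarterly bias audit and scheduled retraining",
        },
    ],
}
```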


The new standard is being deployed after the U.K. experienced some significant snafus with algorithmic decision-making. Perhaps the biggest fiasco was the use of an algorithm to award A-level grades (crucial exams used for university admissions) to all of the nation’s high school students based on their “predicted results,” after the COVID-19 pandemic forced the cancellation of the actual exams. That algorithm was highly opaque—at least initially, the government provided very little information about how it functioned. The grades the algorithm provided wound up exacerbating social and economic disparities by placing a heavy weight on the historical average grades that a student’s school had produced. After a public outcry, the government voided all the grades, leading to yet more criticism and recrimination.
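As a toy illustration of why that weighting caused trouble (a deliberately simplified stand-in, not the actual 2020 grading algorithm): if the predicted grade leans heavily on the school's historical average, a strong student at a historically low-scoring school gets pulled down regardless of their own record.

```python
def predicted_grade(student_assessment: float, school_historical_avg: float,
                    school_weight: float = 0.8) -> float:
    """Toy model: blend a student's own assessed mark with their school's past average.
    The 0.8 weight is an arbitrary illustration, not the real algorithm's value."""
    return school_weight * school_historical_avg + (1 - school_weight) * student_assessment

# A student assessed at 90 attending a school whose past cohorts averaged 55:
print(predicted_grade(90, 55))  # 62.0, far below the teacher's assessment
```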


Developed in connection with the U.K.’s Centre for Data Ethics and Innovation, the new standard certainly seems like an important step toward algorithmic accountability in the public sphere. My only concern, looking at the template form the government has provided to help agencies comply with the new standard, is that, without oversight, compliance could become a simple “check-the-box” exercise, providing little meaningful ability for the public to assess and question how algorithms are being developed and deployed. We’ll see what happens when the rule comes into practice. Watch this space.

