Why this law firm only works on artificial intelligence

August 17, 2021, 3:11 PM UTC

As businesses continue to adopt artificial intelligence technologies, corporate lawyers and in-house data scientists should prepare to get better acquainted. Lawmakers are increasingly indicating that A.I. regulations are coming, which means that businesses will need to ensure that their machine learning systems aren’t violating laws governing privacy, security, and fairness. 

One upstart law firm specializing in A.I.-related legal matters is betting that companies will increasingly investigate the various ways their machine learning systems could put them in legal hot water. The firm, bnh.ai, is based in Washington, D.C., and pitches itself as a boutique practice that caters to lawyers and technologists alike.

Having a solid understanding of A.I. and its family of technologies, like computer vision and deep learning, is crucial, the firm’s founders believe, because solving complicated legal issues related to A.I. isn’t as simple as patching a software bug. Ensuring that machine learning systems are secure from hackers and don’t discriminate against certain groups of people requires a deep understanding of how the software operates. Businesses need to know what went into the underlying datasets used to train the software, how that software can change over time as it feeds on new data and user behavior, and the various ways hackers can break in—a difficult task considering researchers keep discovering new ways miscreants can tamper with machine learning systems.

One of the problems companies face, however, is that data scientists and lawyers don’t really speak the same language, bnh.ai Managing Partner Andrew Burt explained. 

“The gap is like really, really wide, and it’s really, really deep” between data scientists and lawyers, he said. “Frankly, it’s uncomfortable. Lawyers don’t like being put in positions where it’s extremely hard to understand what’s going on. It can be very intimidating to sit across from a data scientist who just spouts a bunch of statistical terminology and math.”

The same is true for data scientists, who may be intimidated by lawyers speaking their own esoteric jargon, often saying “Latin things,” he said.

That said, Burt believes the “future of technology is dependent on those meetings” between attorneys and data scientists. Lawyers need to understand the technical nitty-gritty of A.I. systems so they can convey the potential legal risks to data scientists realistically and give them a blueprint for troubleshooting those complicated systems. And unlike traditional software, which is a relatively “set it and forget it” product, machine learning software is ever-changing, so companies must monitor it continuously for new risks to their businesses.
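To get a flavor of what that continuous monitoring can look like in practice—what follows is a generic illustration, not bnh.ai’s methodology—a data scientist might start with a simple statistical check for “drift,” flagging when the data a live model sees no longer resembles the data it was trained on. The feature and alert threshold below are invented for the example.

# Minimal drift check: compare a feature's training distribution
# against recent production data. Illustrative only; the data is
# simulated and the alert threshold is arbitrary.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Stand-ins for real data: a training snapshot vs. last week's traffic.
train_income = rng.normal(loc=50_000, scale=15_000, size=10_000)
live_income = rng.normal(loc=58_000, scale=15_000, size=2_000)  # drifted

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# two distributions differ.
stat, p_value = ks_2samp(train_income, live_income)

ALERT_P = 0.01  # arbitrary threshold for this sketch
if p_value < ALERT_P:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); "
          "review the model before trusting its predictions.")
else:
    print("No significant drift detected.")

A real monitoring pipeline would track many features, the model’s outputs, and fairness metrics over time; the point is that the check runs continuously, not just once at launch.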

Burt concedes that the initial meetings between data scientists and lawyers can be “awkward” for reasons including the notion that technologists “don’t want lawyers in their business” and “they don’t want to be thinking about deeply ambiguous problems with no real solutions yet.”

He said his co-founder Patrick Hall once told him that lawyers will feel accomplished if they “sit in a room and talk” about legal issues. Data scientists, on the other hand, will feel like “they just wasted their time” if they attend a meeting where people talk but no one writes code.

Despite the differences between the two professions, they can find common ground, at least in Burt’s experience. They just need a little help getting on the same page. Once there, they can work on issues like figuring out the best ways to segment populations in datasets to adhere to current fairness rules, and determining when to re-train a machine learning model so that it’s powered by the most relevant and appropriate data.
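As a deliberately simplified example of the first task, a lawyer-and-data-scientist pair might start by computing a basic demographic-parity gap—the difference in a model’s positive-decision rate across population segments. The column names and threshold below are hypothetical, and a genuine fairness review would weigh several metrics alongside their legal context.

# Toy fairness check: compare a model's positive-outcome rate across
# population segments (demographic parity). All data is made up.
import pandas as pd

decisions = pd.DataFrame({
    "segment": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],  # the model's decisions
})

rates = decisions.groupby("segment")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # arbitrary review threshold for this sketch
    print("Large gap between segments; flag for joint legal review.")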

Burt believes that when it comes to A.I. and business, “it’s bad practice to wait until something bad happens to think about risk.” The most visionary CEOs will have considered the legal ramifications of A.I. long before they get caught in the cross hairs of regulators.

“Those are really the two threads,” he said, “betting big on A.I. and caring about risk.”

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

Tesla’s Autopilot software gets scrutinized. The U.S. National Highway Traffic Safety Administration is investigating Tesla’s Autopilot technology following 11 accidents in which Tesla cars “crashed into emergency vehicles when coming upon the scene of an earlier crash,” CNN reported. The Tesla drivers involved in the accidents had allegedly activated their vehicles’ Autopilot or “traffic-aware cruise control,” the article said. From the report: The safety agency said its investigation will allow it to “better understand the causes of certain Tesla crashes,” including “the technologies and methods used to monitor, assist, and enforce the driver’s engagement with driving while Autopilot is in use.”

Samsung moves into A.I. chips. Samsung has tapped semiconductor design company Synopsys to use machine learning software to design some of the computer chips used in smartphones and Samsung-branded headsets, according to a report by Wired. Other companies like Google and Nvidia have used similar A.I. techniques to help design computer chips. Using an A.I. subset known as reinforcement learning—in which computers learn by trial and error—companies have been able to more quickly dream up novel chip designs involving the placement and wiring of components on a chip.
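For a rough feel of that trial-and-error idea—the following toy is not how Synopsys’s or Google’s systems actually work—the sketch below uses an epsilon-greedy bandit, among the simplest forms of reinforcement learning, to learn which of a few invented candidate layouts earns the best reward, with reward standing in for shorter wiring.

# Toy trial-and-error (bandit-style) search over candidate layouts.
# Purely illustrative: the layouts and rewards are invented.
import random

random.seed(0)

# Reward = negative wirelength for three hypothetical layouts.
TRUE_REWARD = {"layout_a": -12.0, "layout_b": -9.5, "layout_c": -15.0}

def try_layout(name):
    # Simulated noisy evaluation of one candidate placement.
    return TRUE_REWARD[name] + random.gauss(0, 1.0)

estimates = {name: 0.0 for name in TRUE_REWARD}
counts = {name: 0 for name in TRUE_REWARD}

for step in range(500):
    if random.random() < 0.1:                     # explore
        choice = random.choice(list(TRUE_REWARD))
    else:                                         # exploit best guess
        choice = max(estimates, key=estimates.get)
    reward = try_layout(choice)
    counts[choice] += 1
    # Incrementally average the observed rewards per layout.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("Best layout found:", max(estimates, key=estimates.get))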

Facebook cuts off access. Researchers from the AlgorithmWatch non-profit said that they shut down a project researching Instagram’s opaque newsfeed algorithm after receiving “a thinly veiled threat” from Facebook. Facebook representatives allegedly told AlgorithmWatch members that their research project breached the company’s terms of service by collecting certain kinds of Facebook data, a claim that AlgorithmWatch disputes. Facebook told The Verge that company representatives did meet with AlgorithmWatch “but denied threatening to sue the project.”

Big bucks for A.I. and pharma. XtalPi, a healthcare startup that operates in both the U.S. and China, landed $400 million in funding as it researches how to use deep learning to discover new drug molecules, according to a report by the healthcare news publication Fierce Biotech. The startup’s partners include Pfizer, 3D Medicines, GeneQuantum Healthcare, Huadong Medicine, and Signet Therapeutics, the article noted.

EYE ON A.I. TALENT

Sweetgreen picked Wouleta Ayele to be the restaurant chain’s chief technology officer. Ayele was previously the senior vice president of technology for Starbucks.

Business software firm Interactions hired Anoop Tripathi to be the company’s CTO. Tripathi was previously a senior vice president at Automation Anywhere.

Coinbase landed Andrei Lyskov as a data scientist, according to a LinkedIn posting. Lyskov was previously a data scientist at Apple.

EYE ON A.I. RESEARCH

A.I. to predict the clicks. Google researchers published a paper, accepted to the upcoming ACM Recommender Systems Conference, that explains how deep learning can predict the element—like a web link—that a user is most likely to click on next in a mobile app. The researchers believe the A.I. system could be used to create more intuitive user interfaces for mobile apps that are speedier and more responsive than current apps.

The Google researchers created their deep learning click-prediction system using a dataset “of over 20 million clicks, which form click sequences from more than 4,000 Android users using over 13,000 unique apps on their smartphones.”

From the paper: As a proof of concept, we created a prototype feature named Next Click Overlay that presents the UI element that is most likely to be clicked at the bottom of the screen. This design does not alter the layout of an existing interface, and introduces a small amount of cognitive overload for the user to glance over the predicted item. If the prediction is correct, the user can reach the next click single-handedly.
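Setting aside the specifics of Google’s model, which the paper describes in detail, the general recipe—embed a user’s click history and score every candidate element for the next click—can be sketched in a few lines. The code below is a generic illustration with made-up element IDs, not the paper’s architecture.

# Generic next-click predictor sketch: an embedding plus an LSTM
# that scores which UI element is likely to be clicked next.
import torch
import torch.nn as nn

NUM_ELEMENTS = 100  # hypothetical vocabulary of clickable UI elements

class NextClickModel(nn.Module):
    def __init__(self, num_elements, dim=32):
        super().__init__()
        self.embed = nn.Embedding(num_elements, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_elements)

    def forward(self, click_ids):           # (batch, sequence_length)
        x = self.embed(click_ids)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # logits for the next click

model = NextClickModel(NUM_ELEMENTS)
history = torch.tensor([[3, 17, 42, 8]])    # one made-up click sequence
logits = model(history)
print("Predicted next element:", logits.argmax(dim=-1).item())

Trained on real click sequences, the model’s top-scoring element is what a feature like the paper’s Next Click Overlay would surface.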

A.I. and “smelly” software collide. Researchers from India’s Birla Institute of Technology and Science and Australia’s Curtin University published a paper about using deep learning to spot “code smell,” a term that generally refers to sloppy software code that can lead to poorly performing apps and programs.

From the paper: A code smell is generally detected by inspecting the source code and searching for sections of the code that can be restructured to improve the quality of code. This method is inefficient, especially if developers have to crawl through potentially thousands of lines of code, which can consume a significant amount of time and money to the organization. Based on the internal organization and anatomy of the software, a robust model can be created, which can make this excruciating process a lot simpler.

The paper was accepted to the 35th International Conference on Advanced Information Networking and Applications.
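As a generic illustration rather than the paper’s approach, one simple way to frame smell detection is as a classifier over per-method code metrics—say, lines of code, parameter count, and cyclomatic complexity. Everything below, from the features to the labeling rule, is synthetic.

# Toy "code smell" classifier: a small neural network over invented
# code metrics. The data and labeling rule are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical features per method: [lines of code, parameter count,
# cyclomatic complexity]. Long, complex methods get labeled "smelly".
X = rng.integers(1, 200, size=(500, 3)).astype(float)
y = ((X[:, 0] > 80) & (X[:, 2] > 60)).astype(int)  # toy labeling rule

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500,
                    random_state=0)
clf.fit(X, y)

candidate = np.array([[150.0, 9.0, 95.0]])  # a suspiciously long method
print("Smell probability:", round(clf.predict_proba(candidate)[0, 1], 2))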

FORTUNE ON A.I.

Employees may need to keep up ‘the pretense of working’ as automation spreads, says A.I. expert Kai-Fu Lee—By Nicholas Gordon

Can you predict the future?—By Sheryl Estrada

No lockdown, no problem—Deliveroo delivers a timely surge in sales—By Sophie Mellor

China wants stricter state control over just about everything—and the costs are mounting—By Clay Chandler and Yvonne Lau

BRAIN FOOD

A.I. goes surreal. The New Yorker probes the Twitter account known as @images_ai, created by Sam Burton-King, a twenty-year-old student at Northwestern University. Using the same generative adversarial network (GAN) technology behind so-called deepfake photos, Burton-King has created mesmerizing, surreal computer-generated artwork, including an “Art Deco Buddhist temple.” Regarding that artwork, the article explains that “perhaps it resembles the archetypal Chinese Buddhist temple crossed with a McDonald’s—a fleeting, half-remembered image from a dream frozen into a permanent JPEG on social media.”

From the article: “The art is in discriminating the good from the bad,” Burton-King said—figuring out which words to input, how big to make the images, and when to stop the generative process. But the most compelling aspect of the account might be its ability to enact an artistic fever dream, a kind of magic spell: “You can just type something and have it manifest in front of you,” Burton-King said. “I think that’s the main appeal for everyone.” It doesn’t even require coding fluency; @images_ai published a tutorial for anyone to perform the trick using open source tools online.
