Google Cloud VP addresses common A.I. concerns

November 08, 2021, 00:00 UTC
- Updated November 08, 2021, 22:50 UTC

Dr. Andrew Moore of Google Cloud speaks on how A.I. can be more accessible for all.

Yeah. Andrew, Google has invested a tremendous amount in cutting-edge A.I. for its own business, but also to build out these tools that it can offer to its customers, including its cloud service customers. But there are a lot of companies that are wary of this. They're worried about getting locked into one cloud provider, for instance, and they're also a little bit concerned about tools that they have not trained themselves on their own data. I'm curious: at Google, how are you addressing those concerns? And do you think that's a legitimate worry for these companies to have?

I think it is by far the most common thing now for companies to go with a multi-cloud approach, and speaking both for myself as an academic practitioner and for Google, I think that's incredibly healthy. We don't like to see a system where every part of the stack depends on every other part of the stack, which requires this kind of lock-in. What I have found, and this has been a remarkable thing for me just over the last 18 months, is how difficult it is to ask a customer, or a customer internally, to suddenly produce a petabyte of data to train a machine learning model for that customer. So we at Google, because of this, have been working so hard on the question of how to reduce data requirements for users of A.I. And if you don't mind, I want to give an example of this. Visual inspection is a very important thing in manufacturing; having fewer defects in parts and finished products is a non-zero-sum improvement. But if every time you want to train up an A.I. to spot flaws you have to spend months showing it positive and negative examples, that really reduces the speed of adoption of this kind of thing. So some of my more cantankerous engineers got fed up with this and really pushed hard on the question of: what is a generic notion of a fault? Is there faultiness as a general concept?
And they trained up A.I.s to, across the board, get an idea of whether something is faulty or not without having been trained in a particular discipline. And to my surprise and horror, because they really were cantankerous about it, they were right: we actually can do this kind of thing. So now we are no longer in a position where the first thing we have to do with a customer is persuade them to give us, say, a million examples of a good product and a thousand examples of a bad product. It's now more like 10 of each. And that's been a huge benefit.

Well, that's interesting. We've seen this with computer vision, as you say, and obviously it's happening with language, where you have these sort of foundational models. And Google is using these: it's got BERT, it's got MUM that powers its search results. But as you know, there's been tremendous concern about what happens if there's bias baked into these foundational models. What if, in learning that embedding of what a flaw looks like, there's some bias you're not aware of, and then you use that, with few-shot learning, for your specific use case? What happens if you ingest that bias? This has been controversial within Google itself. What is Google doing to address that concern about bias?

So this is a genuinely hard problem, and I would love to be able to tell you that there's a great Google approach. At the moment, I would say we are extremely conservative, but our approach is very people-intensive. If you take an implementation of an A.I. pipeline, maybe something for loan approvals, or job applications, or eligibility for a health research study, and so forth, these are all extremely dangerous ones, where it's perfectly possible for the training data to inherit some of the biases in the world.
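Moore doesn't describe the internals of the system, so the following is only a minimal sketch of the general idea behind few-shot inspection: a fixed, generically pretrained embedding model maps part images to vectors, and a new customer's classifier is just the nearest class prototype built from roughly 10 labeled examples per class. The 4-d vectors and class names here are illustrative stand-ins for real backbone embeddings.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the embedding vectors of each class into one prototype."""
    labels = np.array(labels)
    return {lbl: embeddings[labels == lbl].mean(axis=0) for lbl in set(labels)}

def classify(embedding, prototypes):
    """Assign the class whose prototype is most cosine-similar to the query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(prototypes, key=lambda lbl: cos(embedding, prototypes[lbl]))

# Toy demo: pretend a pretrained backbone already mapped images to 4-d vectors,
# with "good" parts clustering in one direction and "faulty" parts in another.
rng = np.random.default_rng(0)
good = rng.normal(loc=[1, 0, 0, 0], scale=0.1, size=(10, 4))    # 10 good examples
faulty = rng.normal(loc=[0, 1, 0, 0], scale=0.1, size=(10, 4))  # 10 faulty examples
embs = np.vstack([good, faulty])
labels = ["good"] * 10 + ["faulty"] * 10

protos = build_prototypes(embs, labels)
query = rng.normal(loc=[0, 1, 0, 0], scale=0.1, size=4)  # an unseen faulty part
print(classify(query, protos))
```

The key design choice matching Moore's point is that no per-customer training loop runs at all: the generic embedding does the heavy lifting, and the 10-per-class examples only position the prototypes.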
Each of these cases has to have folks looking not just at the technology piece but at the change management (remember, I said A.I. is change management) to ask: what are the ways in which this could make things even worse, or make things better, for our attempts to have an inclusive world? Of course, we have statistical tests to do various kinds of stratified analyses, to give a warning light if there's an obvious, unexplainable correlation between the protected attributes and an outcome. But that's definitely not enough right now, and I will push back very hard if someone says there's going to be an automated engine for this.
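The stratified "warning light" Moore describes can be sketched very simply. This is not Google's actual tooling, just a minimal demographic-parity-style check: compute the positive-outcome rate per protected-attribute group and flag the pipeline if any two groups differ by more than a threshold. The group labels, toy loan data, and the 0.1 gap threshold are all illustrative assumptions.

```python
import numpy as np

def stratified_rates(outcomes, groups):
    """Positive-outcome (e.g. approval) rate per protected-attribute group."""
    outcomes = np.asarray(outcomes, dtype=float)
    groups = np.asarray(groups)
    return {str(g): float(outcomes[groups == g].mean()) for g in np.unique(groups)}

def warning_light(outcomes, groups, max_gap=0.1):
    """Flag if any two groups' outcome rates differ by more than max_gap."""
    rates = stratified_rates(outcomes, groups)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates

# Toy loan-approval data: group "A" approved 8 of 10, group "B" approved 4 of 10.
outcomes = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
groups = ["A"] * 10 + ["B"] * 10
flagged, rates = warning_light(outcomes, groups)
print(flagged, rates)  # the 0.4 gap between groups exceeds max_gap, so flagged
```

As Moore says, a check like this is only a warning light: a flagged gap may have a legitimate explanation and an unflagged pipeline may still be biased, which is why the human review he describes cannot be automated away.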