[MUSIC PLAYING]

JIA LI: Hi, everyone. I'm Jia. Today I'm talking about how, at Cloud AI, we're inspired by customer needs. AI has the potential to change every industry, from transportation to health care, from retail to education. However, there is a big gap between what's possible with AI and what's within reach for our customers. This gap reveals a lack of awareness on both sides.

On one side, many traditional companies lack an understanding of AI, which makes it hard for them to benefit from it. Machine learning development is an extremely complex cycle: collecting data, designing the model, tuning model parameters, updating the model, evaluating it, and iterating the process. Every single step requires machine learning expertise. Unfortunately, of the world's 21 million developers, only a million or so have a data science background, and only thousands have deep learning or machine learning expertise. On the other side, the technologists driving the development of AI lack an understanding of traditional industries. As a result, many problems that could benefit from AI remain unsolved.

At Google Cloud, we're trying to bring our customers and AI experts together on a shared platform. We help our customers understand AI and make its capabilities more accessible. Through this process, we develop a first-hand understanding of their world and the challenges they face, and by combining this with the AI expertise at Google, we can address their problems in powerful new ways.

By understanding the diverse range of needs in today's enterprise environment, we're delivering the power of AI in forms that meet our customers where they are. We provide AI technology across a wide spectrum of solutions. At one end, for the most advanced machine learning experts, we provide the most powerful machine learning tools for them to build their vision: TPUs, TensorFlow, Kubeflow, Cloud ML Engine, and so on. At the other end are the customers who understand the value of AI but don't have the expertise to put it to use. For them, we offer simple tools that deliver immediate results, like the Machine Learning APIs. And in between, we offer a growing range of solutions that blend ease of use with sophisticated capabilities, such as the Contact Center solution we announced yesterday.

Furthermore, we believe in delivering maximum results for every customer, regardless of their expertise. Consider Cloud TPUs, for example: the custom chips that dramatically accelerate machine learning tasks. With a minimal amount of code, advanced users can run their TensorFlow models on TPUs and experience an immediate boost. But even novices can rely on the performance boost from the latest Google hardware, as APIs like the Translation API can automatically leverage TPUs behind the scenes. No matter your level of scale, we want to ensure you benefit from our most powerful technology.
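To make "a minimal amount of code" concrete, here is a rough sketch of running an ordinary Keras model on a Cloud TPU using today's TensorFlow TPUStrategy API (at the time of this talk the equivalent was TPUEstimator); the TPU name and model are hypothetical, and real training code would add a data pipeline and checkpointing:

```python
import tensorflow as tf

# Attach to a Cloud TPU; "my-tpu" is a hypothetical TPU node name.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# The model itself is ordinary Keras code; only the strategy scope changes.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) then trains on the TPU with no further code changes.
```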

We call this democratizing AI, but there's much more we can do to make AI accessible. To explain where our technology is going next, I'd like to share a bit of the history I've witnessed in my own career in computer vision. Image classification is among the greatest success stories in our field, and in recent years it has seen significant improvement. Now we're thinking about how to extend the success of image recognition to other areas. But we face real challenges in doing so. The first challenge is data: years were spent collecting millions of images to train algorithms to understand what they see. The second is algorithm development: it took hundreds of researchers decades to refine the algorithms behind this single goal. A big part of democratizing AI will be finding better ways to replicate what has made image recognition such a success.

This is the promise of AutoML. AutoML makes it possible to customize our most advanced machine learning models to your specific use case, all without writing any machine learning code. Our first release came earlier this year in the form of Cloud AutoML Vision, which makes it possible to train the Cloud Vision API to recognize entirely new image categories. Keller Williams Realty is using AutoML Vision to bring to market the most advanced home search experience for consumers. By training a custom model to recognize common elements of furnishing and architecture, customers can automatically search home listing photos for their preferred features, like granite countertops, or even more general styles, like modern.

And here at Next, we have introduced two additional AutoML features: AutoML Natural Language and AutoML Translation. With AutoML Natural Language, customers can train custom models to accurately recognize domain-specific content in their text. For example, one of our customers, Hearst Newspapers, is one of the world's largest publishers of magazines and newspapers, and they have always been looking for better ways to manage their content. They are using AutoML Natural Language to build domain-specific text models for their newspaper content, and it has delivered high accuracy for their needs without writing any machine learning code. AutoML Translation helps recognize jargon and terms specific to your domain, and the result is translation that captures the context and nuance your customers expect.
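As a rough illustration of the workflow behind a custom model like Keller Williams', here is a sketch using the v1beta1 AutoML Python client of that era; the project, bucket, and label names are hypothetical, and the AutoML console UI performs the same steps with no code at all:

```python
from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
parent = client.location_path("my-project", "us-central1")  # hypothetical project

# 1. Create a dataset for the custom image labels.
dataset = client.create_dataset(parent, {
    "display_name": "home_features",
    "image_classification_dataset_metadata": {"classification_type": "MULTILABEL"},
})

# 2. Import labeled listing photos from a CSV of (gs://uri, label, ...) rows.
client.import_data(dataset.name, {
    "gcs_source": {"input_uris": ["gs://my-bucket/home_features.csv"]},
})

# 3. Kick off training; this returns a long-running operation.
client.create_model(parent, {
    "display_name": "home_features_v1",
    "dataset_id": dataset.name.split("/")[-1],
    "image_classification_model_metadata": {},
})
```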

Customers all over the world have been finding ways to improve translation with AutoML. For example, Nikkei, a media company based in Japan, is evaluating AutoML for translating international news articles. AutoML meets their customization needs, and they have been very impressed by the accuracy.

AI is still a very nascent space, and we know that each of you faces unique challenges. We are eager to learn about them and help solve them, and that will require keeping a close conversation going with you. Let's talk about the problems in your industry and what the ideal outcomes look like, and let's work together to use the latest AI technology to make that vision a reality. Thank you. Now I'd like to welcome Stefan to the stage to share how we are partnering with SAP to bring AI and enterprise software to the physical world using Google Cloud. Thank you.

STEFAN NUSSER: Thanks, Jia. [APPLAUSE] Good afternoon, everyone. My name is Stefan Nusser. I'm a product manager for Cloud AI in Europe. Here at Google Cloud, one of our primary objectives is democratizing AI: lowering the barriers to entry and making it as easy as possible for customers all over the world to put its remarkable capabilities to use. As AI becomes more accessible, other technologies benefit as well. We see an opportunity here to use the combined power of AI and the cloud to make automation more accessible to industry and to businesses of all sizes.

To understand how, let's take a look at Industrie 4.0 and the state of automation today. There's an exciting transformation under way in the industrial sector: manufacturing operations and the entire supply chain are being digitized. This is known as Industry 4.0. It involves putting sensors on machines and on the environment. Industrial-sector companies want to use that sensor data to drive machine learning and AI, which promises to enable new levels of optimization and learning. The ultimate goal is to make processes that are currently operated manually run far more efficiently.

Automation plays an important part in this process. Historically, automation has been highly customized and built in silos, with high upfront costs and, as a result, vendor lock-in. It's also inflexible: if you can afford automation, you structure your entire environment around it. Today, we're seeing exciting trends in automation. Collaborative robots, like the ones shown on this slide, are equipped with modern sensors and open software layers that are easier to program. These collaborative robots promise to transform how we do automation, especially in less-structured environments where they can work side by side with humans.

So how do we digitize the physical world in less-structured environments, like the warehouse you see on this slide? It's easy to capture sensor readings from big machines, but it's much harder to digitize goods in storage or containers of goods, like bins, pallets, shelves, or boxes, especially when human workers share the same workspace and move these goods and containers around by hand.
We believe we can solve some of these challenges with the help of collaborative robots and by harnessing the power of AI and the cloud. And we're not the only ones who see this opportunity. Let me tell you a little about our partner, SAP. SAP is the world leader in enterprise applications; they manage the business processes of large and medium-sized companies across the world. Think orders, parts, or inventory levels. Their roots are in Europe and in the industrial sector. SAP is working with their customers to gradually transition workloads from on-premises to the cloud. That, combined with their deep industry expertise, makes them a unique and strong GCP partner. Industry 4.0 is a huge opportunity for SAP to optimize not only business processes but to take that optimization closer to the physical world, based on, for example, real-time awareness of goods, pallets, or trolleys in a warehouse. Shorter feedback loops combined with readily available automation will translate into increased efficiency and agility for SAP's customers.

To capture this opportunity, SAP is looking for ways to manage and orchestrate collaborative robots from different vendors and to tap into insights about the physical world for better optimization.

The Cloud AI engineering team in Munich, Germany, is working on an open Cloud Robotics platform that adds critical infrastructure for collaborative robots. It builds on mature GCP technologies and provides foundational capabilities for cloud-enabled automation solutions. Our Cloud Robotics platform securely connects robots with the cloud. It enables distribution of software and other digital assets to the robots using Kubernetes containers. It provides infrastructure for collecting log data, uploading it, monitoring it, and creating dashboards using Stackdriver. It also provides infrastructure for sensor data collection, aggregation, transfer to the cloud, and processing with our data management pipelines: Bigtable, BigQuery, Dataflow, and more. Our objective here is to solve these common infrastructure problems for the entire industry. The Cloud Robotics platform will be open source and built on open APIs. We'll also provide the interfaces customers need to port their data to other platforms, so there's no vendor lock-in. The platform will be available to early adopters at the beginning of next year.

On top of this infrastructure, we will offer cloud services that leverage our AI platform and solve several critical customer pain points for automation with collaborative robots. Our SLAM service uses data from sensors like lidars or depth cameras. It allows robots to create a map, localize themselves relative to that map, and recognize static landmarks. We further analyze that sensor data to discover walls, doors, shelves, tables, chairs, and other essential elements of the robot's workspace. Using that same data, we can also discover known objects and their pose in the workspace, which allows a robot to detect and plan operations on these items.
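The platform itself was not yet public at the time of this talk, but the telemetry pattern it describes (robots streaming sensor readings into managed data services) can be sketched with standard GCP building blocks. Here is a minimal, hypothetical example using Cloud Pub/Sub; the project, topic, and robot names are invented for illustration, and a Dataflow job downstream could aggregate the stream into BigQuery or Bigtable:

```python
import json
import time

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "robot-telemetry")  # hypothetical names

def publish_reading(robot_id: str, sensor: str, value: float) -> None:
    """Publish a single sensor reading as a JSON message."""
    payload = json.dumps({
        "robot_id": robot_id,
        "sensor": sensor,
        "value": value,
        "ts": time.time(),
    }).encode("utf-8")
    # Attributes allow downstream consumers to filter by robot.
    publisher.publish(topic_path, payload, robot_id=robot_id)

publish_reading("bot-7", "depth_camera_range_m", 1.42)
```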
Where are we going with this? There is a broader vision here. Our infrastructure enables reusable assets on the robot and in the cloud. It can support not only the services I just described but also serve as a foundation for third-party software. Our plan is to work with our partners to grow this platform into an ecosystem, using an app store for easy software distribution. SAP as a developer, and their customers, benefit from easier creation and deployment of robotic automation solutions. They'll also be able to tap into cloud-based analytics of sensor data to expand their awareness of the physical world. With this platform, we're enabling developers to build innovative, reusable software and services for whichever use cases or verticals they want to tackle. This will ultimately make automation drastically simpler and reduce the cost of custom integration. It will also allow one-off processes to be automated seamlessly through the orchestration of existing hardware and software components. Collaborative robots and more accessible automation will complement local physical labor and increase productivity where needed. This is how automation and AI can enhance human skills right here in local labor markets. Now, please welcome Rajen Sheth, Director of Product Management, who's seen this principle in action every day. [APPLAUSE]

RAJEN SHETH: Hi, everyone. You've heard a lot about the great technology we have, and what I want to talk about is how customers are approaching it and how you can think about approaching it. As Jia mentioned, there is a spectrum of technologies, and the real trade-off is the flexibility you have versus ease of use. What we're trying to do is give you a variety of options so that you can do many different things, and I want to explain this by talking about how our customers are using it. At one end, you have our Cloud TPUs, which are extremely powerful; many of our services are underpinned by Cloud TPUs, and we're now providing access so that people who do have a lot of machine learning expertise can use them to do some pretty amazing things. Beyond that, you have TensorFlow and Cloud Machine Learning Engine, where you can build your own models. Beyond that, you have Cloud AutoML, which can customize our models in a variety of ways. And then you have our APIs, the building blocks you can bring together that are backed by Google's models, where we bring the best of Google's technology. All of these let you add real value, and one thing to note is that we shouldn't conflate ease of use with value: there are a lot of things you can do by just calling our APIs that give you a ton of value. We'll talk about that a little bit.

So we'll start at the TPU end. A great use case for this is eBay, and it's something they've recently been doing: visual product search across a wide variety of categories and many different types of images. They had 55 million images in their training set and 1 billion product listings to search, so they used Cloud TPUs, which are very good at image recognition. They increased their image recognition accuracy by 10%, which is a huge difference for them and for their business, and they sped up training by 100x by using Cloud TPUs.

Next, TensorFlow. There's a lot you can do with TensorFlow; it gives you a lot of flexibility. A great example of this is Ocado. Ocado has done a variety of things with our products, but in one case they built a TensorFlow model for fraud detection. They're the UK's largest online grocer, with lots of transactions happening all the time, so they need to be able to detect fraud. Using TensorFlow, they achieved a 15x increase in fraud detection precision, and this is just one of many things they could do to help automate their business.
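Ocado has not published the model itself, so what follows is only a generic sketch of what a TensorFlow fraud classifier can look like: a small Keras network over transaction features, with synthetic stand-in data, tracking precision since that is the metric cited above:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for transaction features (amount, hour of day, account age, ...).
X = np.random.rand(10000, 8).astype("float32")          # hypothetical feature matrix
y = (np.random.rand(10000) < 0.02).astype("float32")    # ~2% of labels marked as fraud

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the transaction is fraudulent
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision()])
model.fit(X, y, epochs=3, batch_size=256, verbose=0)
```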

Now looking at AutoML, one of the biggest things is that customers without much machine learning expertise can do really powerful things. A great example of this is Blum, a European manufacturer of furniture fittings. They make hinges of all different types, and they were able to use AutoML to look at these hinges and classify them. They have no in-house ML expertise and had never used GCP, but in five weeks they were able to create a model with 91% accuracy.

Now, we talked a little bit about the building blocks, and this is where Google has brought our best technologies to bear. One of the most powerful is translation. So I'd like to welcome up Adela Quinones from Bloomberg to talk to us about how they have been using the Translation API. [APPLAUSE] So Adela, can you tell us a little bit more about Bloomberg and what Bloomberg does?

ADELA QUINONES: Sure. Bloomberg is a financial media and technology company. We're a news organization; you might have seen Bloomberg Television or Bloomberg Radio, and a lot of people think of us that way. But at our core, we're really a technology company. We have 19,000 employees globally, 5,500 of whom are engineers, so that's a quarter of our company in engineering. And we have a lot of machine learning expertise that we focus on proprietary problems: problems around finance and data. So we're really focused on those use cases from a technology perspective.

RAJEN SHETH: Makes sense. And what was the problem you were facing where you used our products? And how were you solving it before?
ADELA QUINONES: Yeah. Our news product ingests about two million stories a day. We bring in content that we produce ourselves, about 5,000 stories a day, plus content from premium providers like the "New York Times" and the "Washington Post", and also social media, like Twitter. So it's a lot of content coming into our systems, and for our clients, milliseconds matter: a few milliseconds can make the difference between a great investment and a not-so-great investment. We wanted to make that content available to our clients in real time, in the language most relevant to them. News is local. It breaks in local languages, and we wanted to make the content available in the language our clients understood. I'll give you an example. When the municipal bond crisis was happening in Puerto Rico, there were reporters in the courtroom taking notes and publishing stories and tweets in Spanish. Not a lot of our clients speak Spanish, so we needed to make that content available to our customers in English or their native language.

RAJEN SHETH: That makes sense. And because they were in Puerto Rico, you could increase the speed of getting that content directly to your clients.

ADELA QUINONES: Exactly. We wanted to narrow the time between when a piece of news is available in any language and when we make it available to the client in a language they can understand.

RAJEN SHETH: Makes sense. And so how did Google Cloud help you solve this?

ADELA QUINONES: Yeah. In 2015, we looked at what was available from a translation perspective, and we really wanted to find a partner that could scale, because we have a lot of content; that could operate at great speed, because our own requirement is a hundred milliseconds or less from the time the news is published to the time we alert our users; and that could produce translations in lots of different languages, because we're seeing growth all over the world, and we want to be able to support and think about our future.
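For reference, calling the Cloud Translation API from Python takes only a few lines. This sketch uses the v2 client of that era, with a made-up Spanish headline standing in for the Puerto Rico example; Bloomberg's actual pipeline is of course far more involved:

```python
from google.cloud import translate_v2 as translate

client = translate.Client()

# Hypothetical Spanish headline, echoing the Puerto Rico example above.
result = client.translate(
    "Los bonos de Puerto Rico caen tras la audiencia en el tribunal",
    source_language="es",
    target_language="en",
)
print(result["translatedText"])
```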

And Google really checked all the boxes on all three fronts. Another really nice-to-have was the fact that it was so easy to work with the Google team and to connect to the Google API. From the moment we decided we wanted to integrate with Google to the moment it was available in our system, I think it was less than two weeks' time, so it was a pretty remarkable turnaround.

RAJEN SHETH: Makes sense. And you touched on this a little bit, but why Google versus other alternatives?

ADELA QUINONES: Yeah. For one, the quality of Google Translate was really high. We integrated Google Translate in 2015, before neural machine translation was introduced, and at that point we found that the quality in many languages was very good; in some cases, it was good enough for [INAUDIBLE]. Then, when Google introduced neural machine translation, that really changed the game. The quality was just really great, and it allowed us to expand our use of Google Translate beyond on-demand news translation to other types of news translation.

RAJEN SHETH: Makes sense. Great. Well, we really appreciate that, and it's a great story.

ADELA QUINONES: All right. Thank you.

RAJEN SHETH: Thank you. [APPLAUSE] Part of what we're really focusing on is not only helping customers but also, in that vein, doing good with AI. So I'd like to welcome up Fei-Fei Li, our chief scientist, to talk to you about how we're doing good with AI. [APPLAUSE]

FEI-FEI LI: Good afternoon, everyone. The progress of AI has been breathtaking. At Google, we are all-in on building good AI technology and products to deliver to all of you so that it can make a positive impact. I want to close today's session by talking about our shared responsibility for AI and how to make it a benevolent technology. You've already heard so much about our efforts across different aspects of our technology development, our products, and the globe. It's not just an aspiration that we believe a technology like this should bring benevolent impact to our human society; it's something we are putting into action, and I want to share with you some of the actions we're taking.

To start, let's look at an AI for Good example, one that helps all of us protect some of our most precious natural resources. The World Wildlife Fund and the government of Nepal have built a network of cameras to track endangered animals, like tigers and rhinos, in the wild. But these animals are really easy to miss: they don't come out often, and when they do, it's sometimes only for a few seconds. Considering there are dozens of cameras all rolling at once, it's a lot of work for humans to manually monitor all of them and try to detect, track, and spot these animals. That's where machine learning can help. I'm now going to show you two videos that demonstrate how generic object detection and tracking technology can help automate and assist this very difficult and tedious process. You're going to see a tiger walking out of the fog, and as it comes closer to the camera, a machine learning generic object detection algorithm will call out where the tiger is. I feel this tiger looks hungry. OK, this one impressed me, because the first time I looked at it, I didn't even see the rhino coming out of the bushes. Again, it's quite small and in a very cluttered environment, but our generic object detection algorithm was able to spot the rhino even when part of its body was occluded, as it turned to a different angle and became smaller and smaller.
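As a hedged illustration of what a generic detector call looks like, here is the Cloud Vision API's object localization feature applied to a single frame; the filename is hypothetical, and the system described above runs on video rather than stills:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Hypothetical still frame exported from a camera-trap video.
with open("camera_trap_frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Returns generic objects with confidence scores and bounding boxes.
response = client.object_localization(image=image)
for obj in response.localized_object_annotations:
    corners = [(v.x, v.y) for v in obj.bounding_poly.normalized_vertices]
    print(obj.name, round(obj.score, 2), corners)
```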
It just warms my heart that machine learning can do this for our natural resources. I don't know how many of you were at Diane's keynote yesterday; she mentioned that one of her friends was doing wildlife preservation in the oceans. Today, we are bringing Professor Janet Mann onto the stage to share the story of her work, in collaboration with Google Cloud AI, on preserving wildlife in the ocean. Welcome, Professor Janet. [APPLAUSE]

JANET MANN: Oh, I need that. [LAUGHS] Thank you. Thank you, it's a delight to be here. You just heard about identifying species using AutoML; I'm going to talk about identifying individual animals, tracking wildlife using ML. Tracking individuals is a critical part of biology, because natural selection acts on the variation between individuals. It's also important for conservation and management, and it appeals to the public, in terms of their attraction to and interest in individual animals. A poignant and sad example that you are probably all familiar with is Free Willy, the orca also known as Keiko, where people spent over $20 million on what was really a lost cause, trying to reintroduce him to the wild. At the same time, the killer whales off the coast of Washington State and British Columbia have been declining and are on the verge of extinction. So you can use the individual to engage people. This was an example that wasn't very good, but it shows how important the individual is in understanding these animals.

There are lots of species that can be tracked through time: chimpanzees by their faces, zebras by their stripes, cheetahs by the stripes on their tails, manta rays by the spots on their bellies, whale sharks also by their spot patterns, and even the Wyoming toad by the gland markings on its back. These markings are individually distinctive but stable through time. How about individuals that change through time? Elephants are identified by their ears, and the tears in those ears change through time. Sperm whales are identified by their tail flukes (actually, most whale species are identified by their flukes), and those patterns change through time. And Risso's dolphins are identified by the scarring on their dorsal fins.

For over 30 years, I've been studying dolphins in Western Australia, a pretty remote place, and we've been tracking their resident population: over 1,700 individuals, from birth to death. They just live there. This is Bytefluke, and as you can see, her fin has changed. The place where I work is called Shark Bay, and the dolphins do get bitten up by sharks, so their fins change over time. Bytefluke, by the way, had a calf 10 years ago that we named Google. And Google is still alive and well; he's 10 years old, figuring out who his friends are. And here's Zombie, a daughter of Phantom, born in 2011. As you can see, Zombie's fin changed quite dramatically in four years. So we know these animals, and we can track them. But how do you track animals that you don't know?
So I’m going to take you to another part of the world, but first I want to say, we gave tens of thousands of photographs to the Google AutoML team to help us build something that could track animals that we know a lot less about So that was the training data set And so take it to the other side of the world– trhe Potomac Chesapeake Yes, there are dolphins in the Potomac And because it is the Potomac, we’ve named them after presidents, vice presidents, founding fathers, first families So this is Hubert Humphrey, who we’ve seen for three years in the Potomac And to find out where these animals go, because they’re only there for about six months of the year, we want to see where they winter And fortunately, scientists have been collecting dorsal fin photographs from animals all over the Western Atlantic And so this is put together in the Mid-Atlantic, Bottlenose Dolphin Catalog And so our challenge was, could we match the Potomac Chesapeake dolphins to these other locations So I’m going to show you an example, using AutoML And this way, we could get critical problems in their biology So this is George Mason, who was one of the key authors in the Bill of Rights, in case you’re interested But he was photographed in the Potomac in 2015, 2017 This is what it looks like So we ran the image through AutoML, and we tested it against the Mid-Atlantic Bottlenose Dolphin Catalog And as you can see here, the first two that came up were actually George Mason

So Google AutoML has enabled us to identify individual dolphins in seconds. We usually do our matching by eye for animals that we know, but for animals that we don't know, we can use this technology to map them onto animals in other locations and track migrating individuals. That informs us a lot about the populations for conservation purposes. So I want to thank Google AutoML for helping us with this, and I'm going to give the stage back to Fei-Fei. Thank you.

FEI-FEI LI: Thank you, Janet. [APPLAUSE] That is just amazing. It's not the first time I've seen it, but my heart still races looking at that amazing result. We've spent the past two days sharing with you our excitement about how AI can empower industries and make life and work better when it's applied right. But that power does demand responsibility, and it is our collective responsibility to make this a benevolent technology. Earlier this year, Google published AI Principles to guide our company's work and development of AI, to do the things we believe are important and to adhere to our values. So today, I'm going to welcome onstage two colleagues of mine who will share a conversation with us about how we can develop responsible AI at Google, and who will invite you into this dialogue about why responsible AI matters to all of us in the enterprise business. Please welcome Tracy and Shannon. [APPLAUSE]

I'm going to start with Shannon, and I'm going to read her bio word by word, because it is really impressive. Professor Shannon Vallor is a technology ethicist and a McAdam Professor of Philosophy at Santa Clara University, where she has taught since 2003. Shannon has expertise in the ethical implications of AI and robotics. She serves on the board of directors of the nonprofit Foundation for Responsible Robotics, is a member of the IEEE Global Initiative on AI, and has received multiple awards. She's the author of the 2016 book from Oxford University Press, "Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting". And I'm so excited to announce that Shannon has joined our Google Cloud team to work with us on developing responsible AI. Welcome, Shannon.

SHANNON VALLOR: Thank you, Fei-Fei. [APPLAUSE]

FEI-FEI LI: And sitting next to Shannon is Tracy Frey. I have to say, Tracy is one of my favorite colleagues at Google, and I'm not just saying this because we're on stage. Her energy, her sense of moral responsibility, and her expertise in understanding the problems, challenges, and opportunities that AI can bring to our customers are just phenomenal. She is Cloud AI's go-to-market strategist. We don't have a whole lot of time, but we're going to have a conversation here and hope to keep that conversation open for many years to come on this shared journey, because this topic matters. So I'm going to start with Shannon: why does the ethics of AI matter to any company, including enterprise companies?

SHANNON VALLOR: Well, I think you have to recognize that public trust is one of the most important things that any corporation needs to think about and needs to earn, and public trust in technology is on a shaky foundation these days. So lots of companies are having to think about how to cultivate and build up public trust in technology in a way that's sincere and earned.

I think one of the things we have to recognize is that we've been in a reactive mode for a while: we see ethical issues crop up in society as a result of the way certain technologies are developed or deployed, and then we jump in and try to fix them. What we really have to do is step back and get into a more anticipatory mode, where we're seeing the road ahead a little bit and thinking about where the problems will crop up. We do this in all other kinds of engineering and technology development with other kinds of issues, thinking ahead about where you might have stress points or complications. Now we just have to bring the ethical expertise into that process.

FEI-FEI LI: Great. So we turn a reactive understanding into proactive action, embedding ethical and responsible principles into AI and technology. To get a little more concrete, because we want to turn this into action: what are the key ethical principles for an enterprise company that is practicing AI?

SHANNON VALLOR: Well, one of the things you have to think about is what your stated goals and values are going to be. Right now, especially given where public trust and public confidence stand, I think fairness and transparency have to be two of the stated goals of any enterprise, of any organization, that wants to earn the public trust. Then you also have to think about values like safety and security, which we sometimes forget are moral values. Safety and security are about protecting things that matter to people, so thinking of them as values we're committed to, with systems in place to preserve them, is really important. We also have to recognize that the other attributes in play in our decisions, even things we often think of as merely mechanical attributes, like speed, precision, or efficiency, are values too. We have to put them in the same conversation with the decisions we're making about the ethical values we're entrusted with protecting, because often there are very challenging trade-offs between values like speed, efficiency, and precision and other things we care about, like fairness and transparency. So I think you have to understand that there are no ethically neutral decisions when you're talking about technologies that shape people's lives and influence the institutions of society. There's no way to be ethically neutral or to detach yourself from that set of questions; you have to just dive in and try to get it right.

FEI-FEI LI: To embrace it.

SHANNON VALLOR: Absolutely.

FEI-FEI LI: Yeah. So speaking of practicing these ethical AI principles: Tracy, we rolled out our AI Principles earlier this year, and together with many of our colleagues we are working hard to make responsibility part of our process. I would love for you to share with us, and with our partners and enterprise colleagues, what we are doing.

TRACY FREY: Yeah, sure. A lot of it is what Shannon was saying about starting at the beginning: how do we think about it from the earliest stages?
Within Cloud AI, we've taken the responsible use of AI and made it a goal of our development process. Alongside the AI Principles, which I hope everyone's had a chance to read (if not, they're easy to find on our website), we took our own human-centered approach within Cloud AI and asked: what are the core values of this group? These are the people who are going to be creating this technology for use at scale, so what does this group care about most? What do we think is critical? And what came out of that was courage, integrity, compassion, and impact. Now, given how rapidly this market is expanding (just look at the size of this room, or of this conference, and the level of interest we see and how quickly this is being used), and given that many customers need to use their own data as part of any deployed solution, assessing fairness and ethical impact, especially at enterprise scale, becomes really complex. So we've committed to evaluating the work of Cloud AI carefully and in depth, with the goal of aligning it with those principles, from the time development starts all the way through to hearing back from our customers and partners once it's out in the world.

FEI-FEI LI: Can you give us an example of a current Cloud AI product that reflects this?

TRACY FREY: Yeah. Certainly, in terms of where this shows up in our products, we're still at the very beginning of a lot of it.

we’re still at the very beginning of a lot of this But even in AutoML for example, we worked really hard to create an inclusive machine learning guide that’s available now to all of our customers who are accessing AutoML And we worked hard to build it into the product So as you’re building the model, you can then reference that guide And it’s infused in our documentation, as well And then eventually, we’re working hard on tools and testing of our own models, to transparency, and all the kinds of things that we can offer to you, the kinds of tools that we can build to give to our customers around how they can assess their own models and think about what is in their data already and how to think about it from an ethical standpoint FEI-FEI LI: Yeah One thing– I think we all share this experience– AI is a very nascent technology But even with its nascency, the importance of fairness and responsible development and uses is already very much important in our products and on the mind of many people So even at Google, a lot of these are the first steps we’re taking to make responsible AI products So I want to ask, back to Shannon, that you talk about public trust– public trust is not something that should be a goal It should be a consequence of what we do, that we earn that You said that So you heard from Tracy that Google is already starting to take steps on that How do we sustain this? How do we sustain our AI principles? And how do we earn this long-term, especially technology like AI and many others changes rapidly? SHANNON VALOR: Yeah And I think that’s the key is to understand that this is a long game, and not something that has a quick, easy technical fix, or a quick, easy social fix And it’s also important to recognize that technologies evolve, people evolve with them Technologies also influence other technologies, interact in surprising ways sometimes with other technologies It’s a complex web that’s constantly shifting, which means that to manage what’s going on in that web in a responsible way, you have to always be there And you have to always be paying attention And you have to be constantly seeing this as an unending task, something that you’re not looking to finish and tie a bow on, turn your back on, and walk away, but something that you’re expecting that the field of interaction between technology, and society, and people is always going to be shifting And managing that in a trustworthy way is something that will never stop being our responsibility FEI-FEI LI: Yeah That’s great So one last question, Shannon So you’ve been an academician for most of your life and working on books, and theories, and philosophies of technology ethics But now, you’re taking a step into the industry world and putting all your expertise and knowledge into practice Does that excite you? SHANNON VALOR: Yeah FEI-FEI LI: What do you feel about it? 
SHANNON VALLOR: Sure. Well, I've always been more of a practice-oriented philosopher than a theoretician; that's partly why I came around to studying these topics.

FEI-FEI LI: I didn't know philosophers were practice-oriented.

SHANNON VALLOR: We do exist; practice-oriented philosophers do exist. There may be fewer of us than there should be, but we're out there. You know, I've been teaching engineers and computer scientists about these issues for over a decade, including working engineers, showing them how to develop ethical principles and apply them in different professional contexts. Most of my research partners are people involved in engineering and computer science as well, so it's an area I'm very comfortable in. But in general, I also think that if philosophers aren't available to step up when society needs them, then what good are we? So I'm kind of with Plato on that one. Those of you who might have read Plato's "Republic" will remember there's this whole idea that when society needs your help, you have to put the book down, step up, and do the work. So I'm excited to be part of Google's work now.

FEI-FEI LI: Well, I can tell you, we are all excited to be working with you and getting your advice. Like you said, it's a long game, and it's a journey we're going to share together. Our time is up, and I don't think there's any better way to end this session than with the very idea from Shannon that inspired me several years ago, when I was in the audience listening to one of her talks: there are no independent machine values. Machine values are human values. And the technologies we create are eventually going to be the technologies that impact all of us, our communities, our families, and our future generations.

And we really want to invite you into this journey to develop benevolent and responsible AI. Thank you so much. [APPLAUSE] [MUSIC PLAYING]