Ethics — what ethics? For Microsoft, it’s full speed ahead on AI
Microsoft is all in on artificial intelligence (AI) for a very simple reason: it believes its lead in the technology will make it the tech world’s top dog again. Microsoft is far behind Google in search and browsers, but it believes AI can help it eat into Google’s lead, or even make Microsoft dominant.
The stakes couldn’t be higher. Microsoft CFO Amy Hood recently told analysts that a single percentage point of market share in search advertising is worth $2 billion. In the first quarter of 2023, Google had an 85% search market share, compared to 9% for Bing. So Microsoft stands to reap billions if it can edge closer to Google. And that’s just in search. Microsoft sees billions more in AI-driven revenue in the cloud, productivity tools, and beyond.
Competitors aren’t the only thing standing in Microsoft’s way. Governments around the world, ethicists, and many AI researchers believe the fast-evolving technology, if not handled properly, poses a significant danger to society and democracy, and could even imperil the future of humankind.
All of this worries pioneering AI researcher Geoffrey Hinton, the “Godfather of AI,” who recently quit his decade-long job at Google in protest against the unfettered use of the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said after leaving Google.
He's not alone. In March, more than 1,000 AI researchers, tech leaders, and others signed an open letter warning that AI creators are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.” They asked for a pause on development of any AI tool that is more powerful than GPT-4, developed by OpenAI, in which Microsoft has invested $13 billion.
Then, in early April, 19 leaders of the 40-year-old academic society Association for the Advancement of Artificial Intelligence released a letter warning about the dangers AI poses. Among those signing it was Eric Horvitz, chief scientific officer at Microsoft, and a former president of the society.
Microsoft’s response to all this? Either a big shrug or the middle finger, depending on how you feel about the company.
It’s not that Microsoft isn’t aware of the dangers. It’s concerned enough that it has established several internal groups to address them, including the Office of Responsible AI. But those groups seem to be little more than window dressing.
The company once had an ethics and society team embedded directly in its AI organization, a team that had real teeth. By 2020, it included 30 engineers, designers and philosophers, and worked with the AI development group to make sure AI was handled ethically.
That team, according to Platformer, translated the Office of Responsible AI’s vague-sounding pronouncements and made sure they were put into effect in shipping products.
One member of the group explained: “People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies.’ Our job was to show them and to create rules in areas where there were none.”
But as Microsoft accelerated AI development, it slashed the team’s staffing. In October 2022, four months before Microsoft released its Bing AI chatbot, Microsoft cut the team to a skeleton crew of seven. The company made no bones about why it acted: it was rushing to launch its AI chatbot, and wanted nothing, including ethical considerations, to stand in its way.
Platformer got its hands on audio from a meeting in which John Montgomery, Microsoft corporate vice president of AI, told the team why it was being cut.
“The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed,” he’s quoted as saying in the audio. The ethics team, he claimed, stood in the way.
One member of the team responded, “While I understand there are business issues at play…, what this team has always been deeply concerned about is how we impact society and the negative impacts that we've had. And they are significant.”
Montgomery dismissed the complaint: “Can I reconsider? I don't think I will. ‘Cause unfortunately the pressures remain the same. You don't have the view that I have, and probably you can be thankful for that. There's a lot of stuff being ground up into the sausage.”
Even after the cuts to the team, the AI “sausage” apparently wasn’t being ground up fast enough for Microsoft. On March 6, it eliminated the team entirely.
At the same time Microsoft was targeting people inside the company worried about AI, the rest of the world grew increasingly worried, too. In early May, the White House invited AI leaders from Microsoft, Google, OpenAI (the maker of ChatGPT), and the AI startup Anthropic to meet with Vice President Kamala Harris. President Biden briefly joined the meeting and warned, “What you’re doing has enormous potential and enormous danger.” Harris amped that up, declaring, “The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products.”
Microsoft didn’t bother to pretend it cared about the president’s and vice president’s concerns. On the same day as the White House meeting, it made the Bing chatbot available to anyone who wanted to use it, eliminated the safety guardrails it had put around it, and opened it up to developers.
The company’s message was loud and clear: for Microsoft, it’s full speed ahead on AI, despite its potentially dangerous effects. After all, there’s sausage to be made.