As generative AI tools such as OpenAI’s ChatGPT and Google’s Bard continue to evolve at a breakneck pace, raising questions about trustworthiness and even human rights, experts are weighing whether and how the technology can be slowed and made safer.

In March, the nonprofit Future of Life Institute published an open letter calling for a six-month pause in the development of AI systems more powerful than GPT-4, the model behind ChatGPT, the AI chatbot created by Microsoft-backed OpenAI. The letter, now signed by more than 31,000 people, emphasized that powerful AI systems should be developed only once their risks can be managed.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked.

Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined thousands of other signatories in agreeing AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

In May, the nonprofit Center for AI Safety published a similar open letter declaring that AI poses a global extinction risk on par with pandemics and nuclear war. Signatories to that statement included many of the very AI scientists and executives who brought generative AI to the masses.

Jobs are also expected to be replaced by generative AI — lots of jobs. In March, Goldman Sachs released a report estimating generative AI and its ability to automate tasks could affect as many as 300 million jobs globally. And in early May, IBM said it would pause plans to fill about 7,800 positions and estimated that nearly three in 10 back-office jobs could be replaced by AI over a five-year period, according to a Bloomberg report.

While past industrial revolutions automated tasks and replaced workers, those changes also created more jobs than they eliminated. For example, the steam engine needed coal to function — and people to build and maintain it.

Generative AI, however, is not a straightforward parallel to those earlier revolutions. AI can teach itself, and it has already ingested much of the information humans have created. Soon, AI may begin to supplement human knowledge with its own.

Geoff Schaefer, head of Responsible AI, Booz Allen Hamilton

Geoff Schaefer is head of Responsible AI at Booz Allen Hamilton, a US government and military contractor specializing in intelligence. Susannah Shattuck is head of product at Credo AI, an AI governance SaaS vendor.

Computerworld spoke recently with Schaefer and Shattuck about the future of AI and its impact on jobs and society as a whole. The following are excerpts from that interview.

What risks does generative AI pose? Shattuck: “Algorithmic bias. These are systems that are making predictions based on patterns in data that they've been trained on. And as we all know, we live in a biased world. The data we're training these systems on is often biased, and if we're not careful and thoughtful about the ways that we're teaching or training these systems to recognize patterns in data, we can unintentionally teach them or train them to make biased predictions.

“Explainability. A lot of the more complex [large language] models that we can build these days are quite opaque to us. We don't fully understand exactly how they make a prediction. And so, when you're operating in a high-trust or very sensitive decision-making environment, it can be challenging to trust an AI system whose decision-making process you don't fully understand. And that's why we're seeing increasing regulation that's focused on transparency of AI systems.

“I'll give you a very concrete example: If I'm going to be deploying an AI system in a healthcare scenario where I'm going to have that system making certain recommendations to a doctor based on patient data, then explainability is going to be really critical for that doctor to be willing to trust the system.

“The last thing I'll say is that AI risks are continuously evolving as the technology evolves. And [there is an] emerging set of AI risks that we haven't really had to contend with before, such as the risk of hallucinations. These generative AI systems can do a very convincing job of generating information that looks real, but that isn't based in fact at all.”

While we cannot predict all the future risks, what do you believe is most likely coming down the pike? Schaefer: "These systems are not [imbued] with the capability to do all the things that they're now able to do. We didn’t program GPT-4 to write computer programs, but it can do that, particularly when it’s combined with other capabilities like code interpreters and other programs and plugins. That’s exciting and a little daunting. We’re trying to get our hands wrapped around the risk profiles of these systems, which are evolving literally on a daily basis.

“That doesn't mean it's all net risk. There are net benefits as well, including in the safety space. I think [AI safety research company] Anthropic is a really interesting example of that. They are doing some really interesting safety testing work where they ask a model to be less biased, and they found that at a certain size it will literally produce less biased output simply by being asked. So, I think we need to look at how we can leverage some of those emerging capabilities to manage the risk of these systems themselves, as well as the risks that are net new from these emerging capabilities.”

So we're just asking it to be nicer? Schaefer: "Yes, literally."
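To make that point concrete, here is a minimal sketch, in Python, of what prompt-level bias mitigation can look like in practice. The instruction wording, the chat-style message format, and the function names are illustrative assumptions, not Anthropic's published method; the sketch only builds the prompt and does not call any model.

```python
# A toy illustration of prompt-level bias mitigation: prepend a plain-language
# instruction asking the model to avoid biased output before the user's request.
# The instruction wording and the message format below are assumptions for
# illustration, not Anthropic's actual test setup; no model is called here.

DEBIAS_INSTRUCTION = (
    "Please answer without relying on stereotypes about gender, race, age, "
    "nationality, or other protected characteristics."
)

def build_debiased_messages(user_prompt: str) -> list[dict]:
    """Wrap a user request with a fairness instruction in a chat-style message list."""
    return [
        {"role": "system", "content": DEBIAS_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    for message in build_debiased_messages("Write a short bio for a new engineering hire."):
        print(f"{message['role']}: {message['content']}")
```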

These systems are becoming exponentially smarter over short periods of time, and they’re going to evolve at a faster pace. Can we even rein them in at this point? Schaefer: “I’m an AI optimist. Reining it in is, I think, both not possible and not desirable. Coming from an AI ethics standpoint, I think about this a lot. What is ethics? What is the anchor? What is our moral compass for this field of study? And I turn often to the classical philosophers, and they were not principally concerned with right and wrong per se, the way we normally conceive of ethics. They were principally concerned with what it meant to live a good life.... Aristotle termed this eudaimonia, meaning human happiness, human flourishing, some kind of a unique combination of those two things.

“And I think if we apply that…lens to AI systems now, what we would consider to be ethical and responsible would look quite different. So, the AI systems that produce the greatest amount of human flourishing and happiness, I think we should consider responsible and ethical. And I think one principal example of that is [Google] DeepMind’s AlphaFold system. You're probably familiar with this model; it cracked the major challenge in biology of predicting how proteins fold, which stands to transform modern medicine now and into the future. If that leads to major improvements in patient outcomes, that equals human flourishing.

"So, I think we should be focused just as much on how these powerful AI systems can be used to advance science in ways we literally could not before. From improving services that citizens experience on a daily basis, everything from as boring as the postal service to as exciting as what NOAA is doing in the climate change space.

“So, on net, I’m less fearful than I am hopeful.”

Susannah Shattuck, head of product, Credo AI

Shattuck: “I also am an optimist. [But] I think the human element is always a huge source of risk for incredibly powerful technologies. When I think about what is really transformational about generative AI, I think one of the most transformational things is that the interface for having an AI system do something for you is now a universal human interface: text. Whereas before, AI systems were things that you needed to know how to code in order to build and guide and have them do things for you. Now, literally anybody that can type text [or] speak text can interact with a very powerful AI system and have it do something for them, and I think that comes with incredible potential.

“I also am an optimist in many ways, but [that simple interface] also means that the barrier to entry for bad actors is incredibly low. It means that the barrier to entry for just mistaken misuse of these systems is very low. So, I think that makes it all the more important to define guardrails that are going to prevent both intentional and unintentional misuse or abuse of these systems.”

How will generative AI impact jobs? Will this be like previous industrial revolutions that eliminated many jobs through automation but created new occupations and skilled positions? Schaefer: “I take the analysis from folks like Goldman Sachs pretty seriously: [AI] impacting 300 million-plus jobs in some fashion, to some degree. I think that’s right. It’s just a question of what that impact actually looks like, and how we’re able to transition and [upskill]. I think the jury is still out on that. It’s something we need to plan for right now versus assuming this will be like any previous technological transition in that it will create new jobs. I don’t know that that’s guaranteed.

“This is new in that the jobs it's going to impact are of a different socioeconomic type, more broad-based, and have a higher GDP impact, if you will. And frankly, this will move markets, move industries and move entire educational verticals in ways that previous industrial revolutions didn't. And so, I think this is a fundamentally different type of change.”

Shattuck: “My former employer [IBM] is saying they're not going to hire [thousands of] software engineers that they were originally planning to hire. They have made…statements that these AI systems are basically allowing them to get the same kind of output [with fewer software engineers]. And if you've used any of these tools for code generation, I think that is probably the perfect example of the ways in which these systems can augment humans [and can] really drastically change the number of people you need to build software.

“Then, the other example that's currently unfolding right now is the writers' strike in Hollywood. And I know that one of the issues on the table right now, one of the reasons why the writers are striking, is that they're worried that ChatGPT [and other generative AI systems] are going to be used increasingly to replace writers. And so, one of the labor issues on the table right now is a minimum number of writers, you know, human writers, that have to be assigned to work on a show or to work on a movie. And so I think these are very real labor issues that are currently unfolding.

“What regulation ends up getting passed to protect human workers? I do think that we're increasingly going to see a tension between human workers and their rights and the truly incredible productivity gains that we get from these tools.”

Let’s talk provenance. Generative AI systems can simply steal IP and copyrighted works because currently there’s no automated, standardized method to detect what’s AI-generated and what’s created by humans. How do we protect original works of authorship? Shattuck: "We've thought a lot about this at Credo because this is a very top-of-mind risk for our customers, and they're looking for solutions to solve it. I think there are a couple of things we can do. There are a couple of places to intervene in the AI workflow, if you will. One place to intervene is right at the point where the AI system produces an output. If you can effectively check AI systems' outputs against the world of copyrighted material to see whether there is a match, then you can block generative AI outputs that would infringe on somebody else's copyright.

“So, one example would be, if you're using a generative AI system to generate images, and that system generates an image that contains probably the most fought-over copyrighted image in the world, the Mickey Mouse ears, you want to automatically block that output, because you do not want Disney coming for you if you accidentally use that somewhere on your website or in your marketing materials. So being able to block outputs based on detecting that they're infringing on existing copyright is one guardrail that you could put in place, and this is probably easiest to do for code.
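As a rough sketch of the output-side guardrail Shattuck describes, the Python snippet below checks a generated output against a small, hypothetical registry of protected material and blocks it when there is a match. The registry contents and the exact-substring check are assumptions for illustration; a production system would rely on fuzzy text matching, or perceptual hashing for images, rather than literal string comparison.

```python
# A rough sketch of an output-side copyright guardrail: check a generated
# output against a small registry of protected material and block it on a match.
# PROTECTED_SNIPPETS and the exact-substring check are illustrative assumptions;
# a real system would use fuzzy text matching (or perceptual hashing for images)
# and a far larger registry.

PROTECTED_SNIPPETS = [
    "Mickey Mouse",                 # placeholder for a protected trademark/character
    "def proprietary_algorithm(",   # placeholder for known copyrighted source code
]

def is_potentially_infringing(generated_output: str) -> bool:
    """Return True if the output contains any known protected snippet."""
    return any(snippet in generated_output for snippet in PROTECTED_SNIPPETS)

def release_or_block(generated_output: str) -> str:
    """Block outputs that match the registry; pass everything else through."""
    if is_potentially_infringing(generated_output):
        return "[output blocked: possible copyright or trademark match]"
    return generated_output

if __name__ == "__main__":
    print(release_or_block("A cartoon character wearing Mickey Mouse ears"))
    print(release_or_block("A generic cartoon mouse with big round ears"))
```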

“Then there's another level of intervention, which is related to watermarking: how do we help humans make decisions about what generated content to use or not? Being able to reliably understand that an AI system generated a piece of content, through watermarking, is certainly one way to do that. I think in general, providing humans with tools to better evaluate generative AI outputs against a wide variety of different risks is going to be really critical for empowering humans to be able to confidently use generative AI in a bunch of different scenarios.”
