March 13, 2023

ChatGPT Brings Ethical AI Questions to the Forefront


Many companies have taken a relatively slow and methodical approach to working through AI ethics and transparency questions up to this point. Without any national regulations to adhere to, there is no hurry, after all. But the sudden rise of ChatGPT and the surge in interest in AI over the past couple of months has forced companies to accelerate their AI ethics work. According to experts in the field, much work remains.

Chatbots are the most visible application of large language models such as ChatGPT, and many companies have adopted chatbots to serve customers and lighten the load on human representatives. In many cases, the chatbots–or conversational AI interfaces, as the industry prefers to call them–can successfully understand customer questions and respond with appropriate answers. Chatbots have progressed immensely over the past five years.

But AI technology itself has progressed far beyond just chatbots, as the surge in interest in ChatGPT shows. Today, many companies are exploring how they can incorporate generative AI products such as Google’s PaLM, OpenAI’s GPT-3.5 and DALL-E, and Stable Diffusion into various aspects of their businesses.

For example, they’re using AI to transcribe recordings of meetings and provide summaries of what transpired. Sales and marketing professionals are using generative models to draft emails and handle other tasks previously done by humans. And journalists are using them to collect and summarize news.
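As a rough illustration of the summarization use case, the sketch below asks a GPT-3.5-class model to condense a meeting transcript using OpenAI’s Python client. The model name, prompt wording, and sample transcript are illustrative assumptions, not a description of any particular company’s setup.

```python
# Illustrative sketch: summarizing a meeting transcript with a GPT-3.5-class model.
# The prompt, model choice, and transcript below are assumptions for demonstration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def summarize_meeting(transcript: str) -> str:
    """Ask the model for a short, bulleted summary of a meeting transcript."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize meeting transcripts in a few bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


transcript = "Alice: The Q2 launch slips two weeks. Bob: Marketing will adjust the campaign dates."
print(summarize_meeting(transcript))
```

As the experts quoted below note, output like this still needs a human check before it is shared or published.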

Can you believe the AI news? (mstanley/Shutterstock)

AI has great potential for “thinking augmentation” for journalists through tasks like content discovery, document analysis, text summarization, and SEO, said Nicholas Diakopoulos, associate professor in communication studies and computer science at Northwestern University.

“A lot of the excitement is also on the use of GPT for public facing output, so actually doing the writing,” Diakopoulos said during an Associated Press webinar last week. “And this is where it gets a little dicey and gets tension with issues like accuracy, possible copyright, and plagiarism.”

Some of these concerns might be resolved with better prompting, he said, and the technology is bound to improve. “But you’re definitely going to want to have humans in the loop to check content carefully before publication,” Diakopoulos said.

A Long Way to Responsible AI

A majority of Americans don’t trust AI. According to a MITRE study released last month, only 48% of citizens said they think AI is safe and secure, while 78% said they were very or somewhat concerned that AI could be used for malicious intent.

These fears are real, and show that we’re still at the very beginning of the journey toward ethical use of AI, according to Sray Agarwal, a data scientist and principal consultant at Fractal (formerly Fractal Analytics).

“We are not there yet. We are not anywhere [near] responsible AI,” he said. “We have not even implemented it holistically and comprehensively in different industries in relatively simple machine learning models, so talking about implementing it in ChatGPT is a challenging question.”

Fractal is one of the larger and more experienced consulting firms treading the AI waters. The company, which is co-located in Mumbai and New York and employs more than 4,000 workers, has successfully implemented AI, machine learning, and advanced analytics at some of the biggest Fortune 500 firms in the country. Even among its client base, ethical AI is still in its infancy.

Where are you on your ethical AI journey? (hafakot/Shutterstock)

“When it comes to ethical AI and responsible AI, we are very vocal about it with our clients,” Agarwal said. “If they don’t ask for it, we tell them, hey, you may want or need ethical AI or responsible AI. We come out with an assessment of how good, bad, and ugly [their ethical AI practices] are. We have artifacts which we have built which can be easily deployed at clients’ facilities, and we are even ready to do consulting work for them and tweak it if required, as per their needs, their sensitivity, and their requirements.”

Up to this point, most companies appear to be looking to ChatGPT for things like facilitating content generation. It’s not being used for riskier use cases, such as recommending medicine or making decisions about college enrollment, he said.

“I don’t see anybody using it blindly to take a decision,” he said. “They’re not being used right now because there would be a lot of concerns. There will be regulatory concerns. There will be legal concerns.”

Agarwal is bullish on the potential for generative AI to make a big, positive impact on business in the future, despite the concerns around its introduction. Just as the introduction of ATMs decades ago was met with fierce resistance over fears that they would displace jobs, AI will end up having a beneficial impact, he said.

But in its current form, generative AI is not ready for prime time. It needs more guardrails to prevent harm, he said.

“Anything which is technologically advanced which me and you don’t understand as a layman, we need to have guardrails around it,” he said. “We need to have something which will ensure that, hey I don’t understand this, but this is safe.”

A Framework for Ethical AI

While every use case is different, consumers in general have a right to know when AI is being used to make a decision that will have a meaningful impact on them, according to Triveni Gandhi, Dataiku’s responsible AI lead.

“Generally, giving customers an assurance that the AI that’s affecting some decision about them was built in a regulated way with oversight” is important, Gandhi tells Datanami, “just like the way that we trust that banks are doing what they need to do for their internal risk management and frameworks that align with regulations around financial services.”

Do you need an AI Bill of Rights? (kmls/Shutterstock)

While the European Union is marching toward regulation in the form of the EU AI Act, the US currently doesn’t have any AI regulation, beyond the rules that already exist in regulated industries like finance and healthcare. Despite the lack of US regulation, Dataiku, which has offices in Paris and New York, encourages its customers to behave as though the regulations already exist.

“We’re trying to get our customers to start thinking at least in that direction, start getting prepared for something coming down the line,” Gandhi said. “We have standards on everything, so why not also AI?”

ChatGPT is offered within the Dataiku platform, and Gandhi realizes that ChatGPT’s sudden popularity is accelerating the conversation around the need for ethical AI. Dataiku uses an ethical AI framework for the work it does in-house, and it encourages its customers to take a similar approach, whether it’s the new framework released by NIST in January or a framework from another provider.

“I think the NIST framework is a really nice place to start,” she said. “The AI Bill of Rights that [the White House] put out last year was also quite good.”

Many of the potential harms of using a large language model are the same as what companies could potentially face with traditional machine learning on tabular data, but some are different, Gandhi said.

“It can be as simple as ‘Oh, the information you’re giving out is wrong’ to ‘This person on the other end of the screen thinks that it’s talking to a sentient robot and is now suffering an existential crisis,’” she said. “There’s a variety of harms in there. Knowing that, then, how do you go about building to minimize those as much as possible?”

As part of the framework Dataiku uses, it goes through a checklist to ensure that questions of reliability, accountability, fairness, and transparency are covered. “Everything needs to be documented and made very clear that you are interacting with a model, that a model made a decision about some sort of outcome,” Gandhi said. “The same thing with a chatbot. I would suggest very clear language that you are speaking to a chatbot, that this is built off of a language model. We’re not guaranteeing anything here. You should not take this at face value. That transparency aspect is very important.”

Once users document the potential risks of AI, the next step is deciding what indicators to use to alert company stakeholders of the risk, and what specific thresholds should be used to determine if an AI is misbehaving. Mapping the ethical AI values to the business indicators is a very important step on the journey to ethical AI, Gandhi said, and it varies from company to company.
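A minimal sketch of what that mapping can look like in practice appears below: each ethical value is tied to a measurable indicator with an agreed threshold, and anything out of bounds raises an alert. The metric names, thresholds, and alerting mechanism are hypothetical, chosen only to illustrate the idea rather than to reflect any vendor’s framework.

```python
# Illustrative sketch: mapping ethical AI values to measurable indicators with
# alert thresholds. Metric names and limits below are assumptions for demonstration.

# Hypothetical thresholds agreed on by a governance team, keyed by indicator.
THRESHOLDS = {
    "fairness_demographic_parity_gap": 0.10,      # max allowed gap between groups
    "reliability_error_rate": 0.05,               # max tolerated error rate
    "transparency_unexplained_predictions": 0.0,  # every prediction needs an explanation
}


def check_indicators(metrics: dict) -> list:
    """Compare observed metrics against thresholds and return alert messages."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            alerts.append(f"{name}: no measurement reported")
        elif value > limit:
            alerts.append(f"{name}: {value:.3f} exceeds threshold {limit:.3f}")
    return alerts


if __name__ == "__main__":
    observed = {
        "fairness_demographic_parity_gap": 0.14,
        "reliability_error_rate": 0.03,
        "transparency_unexplained_predictions": 0.0,
    }
    for alert in check_indicators(observed):
        print("ALERT:", alert)  # in practice, route this to the ethics review board
```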

Once that mapping is complete, it becomes easier for an MLOps solution such as Dataiku’s to automate other activities in terms of managing the model, detecting data drift, retraining the model, and ultimately retiring it. Another important step is having a well-balanced governance team, or an ethics review board, in place to review AI activities. This team is typically involved with setting the thresholds that are critical to avoiding harm, and initiating action when harm is detected, Gandhi said.
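One common way to automate the drift-detection piece is to compare a feature’s training distribution against recent production data with a statistical test. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the p-value cutoff and the retraining hook are assumptions for illustration, not Dataiku’s implementation.

```python
# Illustrative sketch: flagging data drift by comparing training data to live data.
# The test choice, threshold, and synthetic data are assumptions for demonstration.
import numpy as np
from scipy.stats import ks_2samp


def drift_detected(train_values, live_values, p_threshold=0.01) -> bool:
    """Flag drift when the two samples are unlikely to share a distribution."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training = rng.normal(loc=0.0, scale=1.0, size=5_000)
    production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted distribution
    if drift_detected(training, production):
        print("Drift detected: flag the model for review or retraining")
```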

“I’m optimistic about the approaches that people are starting to take towards understanding AI,” she said. “Five years ago, not many people would have cared about this stuff. But I think the market is shifting and people are seeing that, OK, we need to think about what we are doing with our data and our models and even the choices we make. I think that as a result, builders of AI are definitely more attuned to these questions and they’re actually asking that question more upfront.”

As with any new technology that brings potential benefits such as greater efficiency and profit, companies should also weigh the potential consequences and drawbacks. Having an ethical lapse is a possibility with any form of AI, including generative technologies like ChatGPT.

Related Items:

Conversational AI Poised to Be Major Disrupter

Americans Don’t Trust AI and Want It Regulated: Poll

NIST Puts AI Risk Management on the Map with New Framework

Editor’s note: This article was corrected. Dataiku works with ChatGPT. Triveni Gandhi’s name was misspelled. Datanami regrets the errors.
