Happy Birthday, ChatGPT!
The JFK assassination. The moon landing. 9/11. Rarely does an event so monumental occur that you will forever remember where you were and what you were doing when it happened, but OpenAI’s launch of ChatGPT one year ago is such an event.
Just like Jesus Christ’s birth nearly 2,024 years ago, the launch of ChatGPT on November 30, 2022, changed the course of history, or so it’s being said. Like the birth of the savior that literally reset the Western calendar, people are dividing the world into things that occurred before ChatGPT and everything that has happened since then.
In the pre-ChatGPT world, artificial intelligence (AI) was something that super smart data scientists could work with, but it didn’t really have any tangible impact on the lives of regular people. Deep learning was a thing, but only for companies with IT budgets measured in the tens or hundreds of millions of dollars.
In the post-ChatGPT world, AI is suddenly front and center in every interaction consumers have with computers and phones. From term papers and email pitches to novel chemical compounds and feature films, anything that can be expressed in words and pictures is now subject to AI. Everyone from businessmen to grandmothers is throwing huge mounds of text and pixels at AI models and then asking the model to answer questions about it, to manipulate it, and to derive something new from it.
While the large language models (LLMs) at the heart of ChatGPT were getting better and better in the years leading up to ChatGPT’s immaculate reception, nobody was predicting what it had in store for us. Once the exotic playthings of computer scientists at OpenAI, Google, Meta, and a handful of other companies with the wherewithal to work with these incredibly complicated models, LLMs and generative AI were suddenly made available to billions of people with just a few clicks.
ChatGPT was the spark that lit the AI world on fire. Since consumers became aware of what AI can do, they want more of it, and businesses are scrambling to deliver it. The ready availability of incredibly sophisticated LLMs accessible via simple API calls has had a profound impact on the world.
Driven by the potential savings from handing off to machines the billions of manual tasks that drive businesses every day, such as reading and writing words, tens of thousands of companies ditched their old business models and scrambled to come up with new ones built around GenAI. Tens of millions of jobs and trillions of dollars in new revenue are suddenly up in the air.
McKinsey & Company predicted the productivity boost of GenAI could make employees at some jobs, such as software engineers and customer service representatives, up to 45% more efficient, saving or generating billions of dollars annually for companies that figure it out.
While other AI companies scrambled to launch their own GenAI offerings, OpenAI widened its lead. According to a survey ExtraHop conducted of its own users, OpenAI holds an 80% share of GenAI services, followed by GitHub Copilot at 12%, Google Bard at 7.3%, and Microsoft OpenAI slightly below 1%.
“In the year since OpenAI’s ChatGPT was publicly released, generative AI has taken the world by storm,” said Jamie Moles, Senior Sales Engineer at ExtraHop. “ChatGPT has overwhelmingly taken the lead, showing clear dominance over other services.”
Fairly or unfairly, OpenAI is benefiting from a first-mover advantage. It dominates the market because it dramatically simplified how users can leverage AI to help their business, says Matt Rider, VP of Security Engineering EMEA at Exabeam.
“We no longer need to carefully structure our data,” Rider says. “We can simply chuck a load of information at ChatGPT without much thought and still gain value from the output. Instead of carefully researching a topic on Google for hours, constructing search-engine-friendly queries, and flipping through numerous websites, we now only need to type one question into a generative AI-powered chatbot and it seems to finally understand us.”
Like most revolutions, there are unintended consequences of AI’s big moment. Fears of nuclear-style annihilation, as wild as it sounds, are not out of the question. Geoffrey Hinton, one of the “Godfathers of AI,” gave several interviews earlier this year where he suggested just that. “These things are getting smarter than us,” he said.
Those types of fears apparently led the board of OpenAI to nearly nuke its own company a couple of weeks ago. Disagreements over the path for a potentially groundbreaking new technology, dubbed Q-Star, were apparently behind the ouster of OpenAI CEO Sam Altman before he was eventually hired back last week.
Besides a Terminator-style robot apocalypse, though, there are more immediate risks to businesses, such as the tendency of LLMs to hallucinate and to leak personal data. The LLMs underlying ChatGPT have essentially been trained on the entire Internet, which is full of information that’s both true and factually false. And because they work in a probabilistic manner, you’re not guaranteed to get the same answer every time.
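That probabilistic behavior is easy to see in miniature. The toy sketch below is not how ChatGPT is implemented, but it illustrates the mechanism: LLMs score every possible next token, then sample from those scores using a “temperature” setting. The vocabulary, scores, and function names here are all invented for illustration.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from raw scores using temperature scaling.

    Low temperature sharpens the distribution toward the top token;
    high temperature flattens it, so repeated runs diverge more often.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                                  # subtract max for numeric stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    draw = rng.random()                                 # one random draw decides the token
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if draw < cumulative:
            return i
    return len(probs) - 1

# Invented toy vocabulary and scores for the next word after "The sky is"
vocab = ["blue", "clear", "falling", "green"]
logits = [4.0, 2.5, 0.5, -1.0]

rng = random.Random(0)
low_temp = [vocab[sample_next_token(logits, 0.2, rng)] for _ in range(5)]
high_temp = [vocab[sample_next_token(logits, 1.5, rng)] for _ in range(5)]
```

At a temperature near zero the sampler behaves almost greedily, nearly always picking “blue”; at higher temperatures the unlikely words start to appear, which is why two identical prompts to a chatbot can yield different answers.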
LLMs can leak private data that they were trained on. Users may also inadvertently upload sensitive data to the LLMs. For these and other reasons, JPMorgan Chase, Deutsche Bank, Samsung, and Apple banned their employees from accessing ChatGPT in 2023.
The desire to get a competitive edge with GenAI is leading some companies to take shortcuts. In some cases, that means utilizing a pre-built LLM instead of doing the hard work to train their own.
“Every company on the planet is looking at their difficult technical problems and just slapping on an LLM,” Matei Zaharia, the Databricks CTO and co-founder and the creator of Apache Spark, said earlier this year.
Databricks, like other AI companies, is angling to take share from OpenAI. It launched its own language model, called Dolly, and acquired MosaicML and its LLM “factory” earlier this year. AWS, which launched its GenAI model offering Amazon Bedrock just two months ago, is also playing catch up.
Just about everybody seems to be playing catch up, including Google, which invented the transformer model that most LLMs are based on in the first place. OpenAI has a massive lead at the moment, but it’s unlikely to stay that way as billions flow into the space.
If there’s one thing that’s certain, it’s that there’s a lot of uncertainty in GenAI and what 2024 will hold. Companies are mostly in the proof of concept (POC) phase with their GenAI development, and as they address the security and privacy concerns (not to mention thorny issues like ethics and regulation, and oh, did we mention the massive GPU shortage?), things will get interesting next year.
“A year into the ChatGPT-induced AI revolution, will we soon be surrounded by dramatic GenAI success stories or will we see the fastest collapse into the trough of disillusionment of a technology to date?” asks Kjell Carlsson, head of AI strategy at Domino Data Lab. “Both! AI-savvy enterprises are already augmenting their most valuable employees and, occasionally, automating them, and the trend will gain momentum as clear, repeatable GenAI use cases mature and investments in MLOps and LLMOps bear fruit.”
“Meanwhile most POCs dazzled by the mirage of democratized, outsourced GenAI crash headfirst into the realities of operationalizing production-grade GenAI applications, leading to widespread disillusionment,” Carlsson continued. “It turns out that human intelligence about AI is the most important factor for GenAI success, and ‘Generalized Pre-trained Transformer Models’ are more valuable when they are specialized for specific use cases and verticals.”
It’s been an eventful 12 months since ChatGPT entered our lives. There’s no telling what Year 2 of the AI Era will bring, but it’s setting up to be epic.