When GenAI Hype Exceeds GenAI Reality
The hype surrounding generative AI since ChatGPT launched 11 months ago has been unlike anything the technology industry has seen in a generation. But despite the massive investment of awareness, time, and budget in GenAI projects, few organizations seem to be meeting their business goals with the technology. That’s one reason why experts are predicting a modest pullback in 2024.
G2 was founded over a decade ago to provide analyst groups like Gartner, IDC, and Forrester with some friendly crowd-sourced competition. The group currently tracks more than 150,000 individual software offerings, ranging from enterprise CRM and CAD software to security and HR software.
That broad view of the software market gives G2 a unique perspective. According to G2 Principal Analyst Matthew Miller, that perspective shows the past year has seen a surge in vendors adding GenAI capabilities to products listed on G2, which is not unexpected.
“We’re keeping our eyes peeled to figure out what exactly is the correct number, but currently think around 1,600 products across 200 categories have generative AI features,” Miller says. “If we’re looking at product-adds on G2, [it was] certainly hot and heavy from the end of the year last year, and we’re continuing to get a lot of products added to the categories” in 2023.
While the interest in GenAI is certainly there with both the software selling and software buying communities, the delivery of business benefits so far doesn’t seem to have lived up to the initial billing, Miller says.
“One of the things that we’re seeing is it hasn’t, at least to date, made a big difference,” he says. “I haven’t seen a big impact.”
Outside of a few product categories that seem tailor-made for GenAI, such as photo manipulation products (e.g., Photoshop) and some text summarization and chatbot applications, the promised benefits of GenAI largely have failed to materialize, Miller says. That will likely lead to a vendor pullback in GenAI offerings in 2024, he tells Datanami.
“Since we’re not seeing necessarily a huge jump in the needle for whether or not these things are really helping users meet their requirements, I think a lot [of] the products, a lot of the categories, will probably drop the generative AI features,” Miller says. “Because if they’re not doing much, they’re costing the vendors money. They’re not necessarily overly helpful. They’re gimmicky. We’ll probably see some dropping of these features and then we’ll see the features growing in places where it really is helpful and used.”
The past 11 months have provided a great learning experience for software makers and software users to figure out what works with GenAI and what doesn’t, Miller says. Some of the well-documented problems associated with large language models, such as bias and hallucinations, have made organizations hesitant to expose them to outside users, he says.
“From the perspective of the sellers, I think the perspective very much has been ‘Alright, let’s throw this technology at our software, at our category, and let’s see what sticks,’” he says. “I don’t think the journey will come to a conclusion [next year]. But I think [in] some of those categories [that] were just not so helpful, we’ll see things dissipate. Across those categories, we’ll see generative AI is no longer there, because it’s not useful.”
Having a language interface where you can query the system can be helpful in some situations. But if it’s not well-integrated into the application, it leaves the user spinning their wheels and not any closer to accomplishing their task, Miller says.
“From the perspective of the users, you can now type in some text and get back some wonky data. What’s the need and what’s the point of that?” he says. “From the side of the users, they’re not really getting any good value out of it. It’s not at all helping them meet the requirements that they need for their business.”
AI Caution Is Warranted
Kjell Carlsson, a former Forrester analyst who is now an AI strategist for Domino Data Lab, has been following the development of GenAI technology for years. Since the launch of ChatGPT nearly a year ago, he has watched the technology explode into the public realm like few technologies before it.
“What ChatGPT really brought to the world wasn’t that the model was that much more impressive or anything like that, it’s that this was a killer generative AI application and everybody could use it, everybody could get that wow factor from playing with it themselves,” he says. “As always with these things, it’s often the sociological phenomenon that moves things as much as the technology development.”
While the capability of AI technology has clearly progressed and is doing some amazing things–for example, helping pharmaceutical companies predict possible drug candidates to pursue based on LLMs’ apparent capability to understand chemical equations–organizations still need more experience when it comes to picking GenAI use cases, Carlsson says.
“It’s difficult just to assess that alignment between the business value and the sweet spot of the technology,” Carlsson says. “What should we be using it for? That sense of, well, what can the data support? What can get us in trouble? How could these things misperform? You’re not going to get that as a developer who’s never worked with this before. That’s something that you get from working with data, trying to solve business problems with data, and knowing firsthand the ways in which that can fail.”
Carlsson is helping Domino customers set up virtuous cycles of GenAI development. That often starts with figuring out which projects to chase and which to ignore. In general, he is advising customers to pursue GenAI projects that are internal facing, that utilize pre-existing data, and that augment existing users, and cautioning Domino clients against projects that are external facing, that require new data, and that replace workers.
“I’m advising them against developing those external-facing customer service chatbots,” Carlsson says. “Not because you couldn’t theoretically do it. It’s just as an organization, nobody in the organization is going to be willing to sign on the bottom line that yes, I’m going to accept responsibility when the first customer goes in and gets this thing to say something anti-Semitic, or worse.”
The good news is that some companies are having success with all sorts of AI technology, including GenAI, Carlsson says. The common characteristics that define these organizations don’t revolve so much around technological capability (although that is important). What really moves the needle organizationally is having a data governance structure in place and having leadership that understands what it means to practice responsible AI, he says.
“It’s good business to practice responsible AI,” Carlsson says. “What’s held it back before is that leadership hasn’t understood the risks, hasn’t understood what to do. The folks who have understood that haven’t been empowered to do it, and now we’re seeing more and more leaders who have that mandate. They understand the risk, and they have the knowledge to make up teams and technology to support them.”
Five years ago, the executive leadership needed to succeed at AI largely didn’t exist, but it has developed since then. It’s a small number of firms that have figured it out, but it’s a non-zero number, which makes it worth calling out, he says.
“I wish I could say that, yes, this is the case [with] every company that I speak with,” he says. “It’s a pretty rarified set of companies that have done this, in verticals where they’ve had the data, they’ve been monetizing their data. So still, if you look across the US economy, we’re still talking about a small number of companies. But it’s the important ones.”