November 14, 2018

AI Needs To Be Easier, But How?


Companies today are scrambling to take advantage of the rapid evolution of artificial intelligence technologies, such as deep learning. They’re driven in part by fear of being left behind, and hopes of getting ahead of competitors. While AI is moving quickly, there are still substantial barriers to implementation, which provides incentive for the data science community to make AI simpler.

There is still more talk than action on AI. According to a PwC report issued this week, 53% of firms say they are still planning their AI investments and use cases. Less than 20% say they have at least one use case and a plan, but only 4% say they've successfully implemented the technology. Even worse, only 3% say they've implemented it and are measuring ROI.

Those percentages jibe with the failure rate of big data projects, which Gartner analyst Nick Heudecker recently pegged at 85%. The most successful big data practitioners continue to be the Web giants like Google, Facebook, Twitter, Amazon, and Netflix, which developed many of the core technologies enabling the AI revolution, as well as the Fortune 500 firms that have millions to invest.

Businesses face several major obstacles to succeeding with AI, including a mix of technical, architectural, and personnel-oriented challenges. The general pattern of utilizing machine learning (ML) and deep learning (DL) is fairly well established at this point, but businesses still struggle with the basics, which include:

  • Finding an appropriate business use case where big data analytics or AI can have a meaningful impact;
  • Capturing data pertinent to the project, and storing it in an organized fashion;
  • Exploring the data to discover trends and anomalies that can be exploited in a programmatic manner;
  • Prepping and cleaning the vast training data that will be fed to the ML and DL algorithms;
  • Building and deploying the predictive models powered by ML and DL;
  • Monitoring the results of the models, and tweaking or retraining as needed.
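The steps above can be sketched in a few lines of Python with Scikit-learn, one of the tools the article mentions. This is a minimal illustration only; the synthetic dataset, feature scaling, and logistic-regression model are assumptions chosen for brevity, not recommendations for any particular use case.

```python
# A minimal sketch of the general ML recipe: get data, prep it,
# build a model, and monitor its results.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Capture data pertinent to the project (a synthetic stand-in here).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Prep the training data: hold out a test set for later monitoring.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Build the predictive model: scale features, then fit a classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Monitor the results; in production, drift here would trigger
# tweaking or retraining.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"holdout accuracy: {accuracy:.2f}")
```

In a real deployment, each of these one-liners expands into a project of its own, which is precisely the complexity the article describes.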

Actually getting business value out of this general recipe involves getting many other details right. For starters, businesses must attract the personnel to build such a system, which typically involves a mix of data scientists, analysts, and engineers. Having a chief data officer (CDO) or a chief analytics officer (CAO) can help in building the right employee culture.

Figuring out how to assemble all the pieces in a workable manner continues to be a source of great debate. In terms of architecture, several ongoing tugs-of-war are taking place to win the hearts and minds (and pocketbooks) of AI enthusiasts on the corporate board, who are weighing decisions like:

  • Should you architect your system on premises with the Hadoop ecosystem of projects and products, or should you build on one of the public cloud environments, like AWS, Microsoft Azure, or Google Cloud?
  • What frequency of data and decision-making does your AI project require? Is it a strategic project, where batch updates of data will suffice? Or is this a tactical program that requires the freshest, most real-time data?
  • What’s your religious persuasion when it comes to data science languages? Are you a Python shop that will utilize tools like NumPy and Scikit-learn? Or perhaps you’re heavily into R? Or do you have established relationships with vendors like SAS or MathWorks?
  • How important is open source to you? Are you willing to invest in engineering and scientific experts who have the skills and patience to turn open source technology into a feasible business solution? Or is a shrink-wrapped proprietary solution more your speed?
  • What level of governance, privacy, and security will your AI project require to avoid running afoul of new regulations like GDPR or angering your customers?

There are no easy answers for any of this, of course, and each company must tackle the challenges in due course. Companies are investing billions of dollars to build AI systems in the hope of winning trillions in new business. But each AI project is unique, and there are few (if any) shrink-wrapped solutions available.

However, there is some good news on the horizon. The field of data science is progressing quickly too, and there is a large and vibrant community of developers, users, and vendors who are committed to improving the quality and accessibility of data science tools and technology on a daily basis.

AI technologies like ML and DL hold great promise to radically transform how companies interact with their customers. That promise has given rise to a great deal of hype, some of which is justified, but much of which is not. There will inevitably be many instances where companies stumble and fall with their AI initiatives, some of which we have covered here in Datanami.

In the long run, however, it’s tough to argue that AI technologies like ML and DL won’t be transformational in their impacts, not only on business but on society as a whole. There is simply too much potential benefit for businesses and their customers alike.

We have already achieved some successes with AI. Anybody who has utilized services offered by Web giants is a beneficiary. However, in 10 years, our AI capabilities as a whole will be far beyond what we currently have. Enough progress will have been made in simplifying techniques that basic data science and AI processes will be taken for granted.

In the meantime, businesses that want to benefit from AI capabilities today will have to overcome the complexity of AI on their own. Considering the churn expected in the Fortune 500 over that timeframe and the stiff competition that nimble startups can provide for established brands, today’s companies may not have a choice.

Related Items:

What’s Driving the Cloud Data Warehouse Explosion?

Focus on Business Processes, Not Big Data Technology

Exposing AI’s 1% Problem
