January 24, 2023

The Case for Incrementalism in AI

Jason Smith


Artificial Intelligence (AI) continues to promise businesses greater efficiency and, through its ability to evaluate large and disparate datasets simultaneously, the discovery of new opportunities. In today’s corporate world, a business model that does not include AI as a strategic objective is about as incongruous as a mathematician with arithmophobia.

How a company chooses to apply AI significantly shapes the outcome. The choice comes down to treating AI as a one-off tactic or as a series of tactical improvements within a broader strategy. Organizations that take a purely tactical approach often wait to see whether and how AI delivers results, and without a strategy that ties back to the business, the result is frequently stagnation and delay. Aligning strategy with tactics and impact goals from the outset, by contrast, enables a top-down, long-range view of leveraging AI to address specific, measurable outcomes.

The pressure to incorporate AI into existing business operations and produce transformative results, simply to be considered innovative, is a heavy burden. That burden is intensified because the reality of AI implementation and its production value has been fraught with misaligned expectations among the business, product teams, and data scientists. Coupled with challenges around access to clean, unbiased data, this leads to conflicting reports on ROI and murky success rates. In view of these challenges, consider an incremental approach to AI.

Incrementalism is founded on a key principle: align strategic business goals with the tactical (practical) implementation of AI based on the data that is actually available. Measuring end-user feedback and continually tracking ROI, for the product and for the overall business, is an AI imperative.


Building AI solutions in stages, with resources that evolve over time, whether human feedback or added computational capacity, allows the AI to improve its performance and results. For example, one organization leveraged Natural Language Processing (NLP) to make significant strides in helping its customers understand large volumes of notes and comments from their field teams. Through a tactical lens, the goal was narrow enough to achieve real progress for customers, yet big enough to demonstrate strategic business results that warranted continued investment.
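To make that concrete, the sketch below shows one minimal way such a field-notes pass might look. The article does not name the organization’s models or tools, so the pretrained Hugging Face sentiment pipeline and the sample notes here are purely illustrative stand-ins.

```python
# Illustrative sketch only: the organization's actual models and tooling are
# not described in the article, so an off-the-shelf pipeline stands in here.
from transformers import pipeline

# Hypothetical free-text notes from a field team.
field_notes = [
    "Clinician was enthusiastic about the new dosing data.",
    "Site reported repeated delays getting follow-up materials.",
    "Customer asked for clearer reporting on adverse events.",
]

# Default pretrained sentiment model; a real deployment would likely be
# fine-tuned on the customer's own domain data.
sentiment = pipeline("sentiment-analysis")

for note, result in zip(field_notes, sentiment(field_notes)):
    print(f"{result['label']:<8} ({result['score']:.2f})  {note}")
```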

A successful, incremental AI approach typically includes the following:

  • Customer interviews. Conduct interviews with end users to understand current frustrations and practical needs by user type, including field team members, managers, directors, and other key stakeholders. This enables the organization to prioritize pain points and determine which types of AI solutions will have the largest impact in the shortest time frame.
  • Test & Validate (over and over and over, again). Take your organization’s top candidate models and approaches and gather relevant customer data. A useful byproduct of this process is that your company learns early how to extract and clean the data and build training pipelines. The team can then run the training process on the various models and compare outputs, work with a key customer user group to review those outputs and learn which ones deliver the insights your company hoped for, and finally select the model with the highest impact and continue training and validating (see the sketch after this list).
  • Deploy to a limited user group. Product teams will need to build impactful reporting and visualizations for the AI-powered results within the existing application, then integrate and deploy the solution to a limited user group within the customer’s organization. If those users can give bi-weekly, day-in-the-life feedback on whether the AI is delivering the value they need, and what else they need from it, that information will prove highly valuable.
  • Wider Deployment. Next, continue wider deployment and training. This allows the team to gain first-hand feedback from a broader group of users and feed it into the future product roadmap and new model development.
  • Once wrapped up, move on to the next incremental problem your organization needs to solve.
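As a rough illustration of the test-and-validate step above, the sketch below compares two candidate models on the same labeled sample of customer feedback. The models, labels, and example texts are assumptions for illustration, not the workflow the article describes.

```python
# Minimal model-comparison sketch for the "Test & Validate" step.
# Candidate models, labels, and data are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled sample gathered from a key customer user group.
texts = [
    "field team praised the onboarding flow",
    "reports arrive late and are hard to read",
    "dashboard made the quarterly review much faster",
    "too many manual steps to export the data",
] * 10  # repeated only to give cross-validation enough rows
labels = [1, 0, 1, 0] * 10  # 1 = useful insight, 0 = pain point

candidates = {
    "logistic_regression": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    "naive_bayes": make_pipeline(TfidfVectorizer(), MultinomialNB()),
}

# Compare candidates on the same data, then review the winner with end users.
for name, model in candidates.items():
    scores = cross_val_score(model, texts, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

In practice, the "review outputs with a key customer user group" step matters more than the raw accuracy numbers; the scores only narrow the field before users weigh in.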


By applying incrementalism to a basic understanding of medical affairs communication, integrated with a purpose-built cloud solution, one organization was able to take a data-centric approach. With the insights gained, it realized an average 34% improvement in identifying positive and negative sentiment. By digitizing those reports in a single, centralized location and parsing conversations for trends and sentiment, life sciences organizations can benefit from incremental steps toward better communication.

While AI is poised to solve major problems, many large organizations bite off more than they can chew. You may not cure cancer in five years, but you might build models and create solutions that help patients in smaller ways, and that is still a hugely worthwhile endeavor. Small wins for AI are sustainable; even models that deliver a 3% greater success rate should be celebrated.

For example, if the cancer an organization is working to treat affects 500,000 people, then a 3% improvement meaningfully betters the lives of 15,000 individuals and their families. A 3% gain realistically attained every quarter is vastly more significant still, with the added benefits of reliability and quantifiable data.
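A quick back-of-the-envelope check of that arithmetic, assuming, purely for illustration, that the quarterly gains compound across the same population:

```python
# Back-of-the-envelope arithmetic for the hypothetical above; the population
# size and quarterly cadence come from the article, while the compounding is
# an assumption made only to show how small gains add up.
population = 500_000
quarterly_gain = 0.03

print(f"One-time 3% gain: {int(population * quarterly_gain):,} people")

# Share of the population reached after four compounding quarterly gains.
cumulative_share = 1 - (1 - quarterly_gain) ** 4
print(f"After four quarters: {int(population * cumulative_share):,} people")
```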

Another aspect of tactics versus strategy involves data access, cleaning, and harmonization. In short, the data-centric approach championed by Andrew Ng empowers AI programs to focus on building models with correct and clean data, regardless of volume. As the business of AI continues to grow, it will have to accommodate changing data vendors, data formats, and data volumes.

  • Strategically, organizations have data. But when they circle back to the start of a problem, they face questions about the accessibility, viability, and cleanliness of that data.
  • Tactically, they must determine how to clean the data, including how to access and filter it appropriately (a minimal example follows below).
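As a hedged sketch of that tactical cleaning step, the snippet below harmonizes two hypothetical vendor feeds with different column names and date formats into one shared schema. The vendors, columns, and helper function are invented for illustration.

```python
# Sketch of data harmonization across vendors; the vendor feeds, column
# names, and target schema here are hypothetical.
import pandas as pd

vendor_a = pd.DataFrame({"note_text": ["Great visit "], "visit_dt": ["2023-01-12"]})
vendor_b = pd.DataFrame({"comment": ["Asked for more data"], "date": ["01/15/2023"]})

def harmonize(df: pd.DataFrame, text_col: str, date_col: str) -> pd.DataFrame:
    """Map vendor-specific columns onto one shared schema."""
    out = pd.DataFrame()
    out["text"] = df[text_col].str.strip().str.lower()
    out["date"] = pd.to_datetime(df[date_col])  # each vendor's format parsed here
    return out

combined = pd.concat(
    [harmonize(vendor_a, "note_text", "visit_dt"),
     harmonize(vendor_b, "comment", "date")],
    ignore_index=True,
)
print(combined)
```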

Today and Tomorrow

Incremental steps are key to establishing best practices and ROI. While the promise of AI is great, it won’t be realized overnight. It is therefore vital to focus strategically on meeting goals, applying key learnings, and moving forward. Measuring and quantifying small wins and ROI holds the promise of tapping into AI’s true, and potentially life-altering, benefits.

With more data points from KOLs, patients, and customers, drawn from internal and external sources, these applications can progress from today’s optimization use cases toward predictive modeling. That means expanding from a structure that prioritizes problem solving, data visibility, accessibility, and computational interpretation to one that identifies deficits and provides focused structure.

About the author: Jason Smith joined Within3 in April 2021 through the acquisition of rMark Bio, where he was the co-founder and CEO. Jason began his career at IBM and ATI Research while studying computer science at Harvard University. He was recruited out of school to the West Coast, where he became a serial entrepreneur in the fields of video encryption, high-performance computing, and bioinformatics. As a founding executive, Jason was instrumental in building and selling multiple companies – Cryptocybernetics, Gray Area Technologies, Idea Pattern and xSides. Jason served as the VP of Corporate Development at the Seattle-based venture studio BE Labs, where he led investment in and development of multiple startups in social networking, data analytics, distributed systems, and consumer experiences. Jason has been granted multiple domestic and international patents. He is an active angel investor, startup advisor, and board member.

Related Items:

How Data-Centric AI Bolsters Deep Learning for the Small-Data Masses

Data Sourcing Still a Major Bottleneck for AI, Appen Says

Is Data-First AI the Next Big Thing?
