Improving Risk Evaluation Through Advanced Analytics
The volume of data we generate and copy doubles roughly every two years, and IDC projects it will reach 44 zettabytes (44 trillion gigabytes) by 2020. That creates significant new opportunities for business. Indeed, as The Economist has noted, data has become more valuable than oil.
All of that data must be wrangled from many sources. Ensuring you have the right data, that it is fresh, and that you are using it to produce accurate results is both challenging and complex.
Traditionally, organizations have used people and manual processes to collect, review and report on data. That is no longer a viable strategy. Manual systems simply can't keep up with the growing volume of transactions and data processing required to make timely, informed decisions. Overwhelmed by rising demands and expectations for quality assurance, personnel and processes across industries have reached capacity.
Legacy rules-based analytical models don’t scale to meet today’s requirements for evaluating risk associated with this flood of data. Fortunately, technology innovation is fueling significant improvements in analytics. That’s enabling organizations to address the data onslaught and use it to their advantage.
Overcoming the Limitations of Legacy Rules-Based Models
Early decision models rely on hard-coded rules or very simple linear simulations with a limited number of features or variables. These models use sub-sampling and analyze cases as groups, ignoring important data sources that can impact outcomes. Restricted to making decisions at the group level, they are incapable of examining individual cases.
Legacy models are also limited by the perspective of those creating the rules, making it hard to incorporate unanticipated parameters. As new rules are added for each newly discovered case, it becomes harder to spot gaps and overlaps across hundreds of rulesets. This process yields too many false positives and exceptions, and over time it becomes too complex to maintain.
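To make the limitation concrete, here is a minimal sketch of a legacy rules-based risk check. The transaction fields and thresholds are hypothetical, invented for illustration; the point is that hard-coded, group-level rules inevitably misclassify individual cases.

```python
# A minimal sketch (hypothetical fields and thresholds) of a legacy
# rules-based risk check: every rule is hard-coded by a human.
def rule_based_risk(txn):
    """Flag a transaction using fixed, group-level rules."""
    rules = [
        lambda t: t["amount"] > 10_000,              # large-amount rule
        lambda t: t["country"] not in {"US", "AU"},  # geography rule
        lambda t: t["hour"] < 6,                     # odd-hours rule
    ]
    # A single tripped rule flags the whole transaction.
    return any(rule(txn) for rule in rules)

# A legitimate small overseas purchase trips the geography rule:
# a false positive at the group level, with no way to examine the
# individual case more closely.
txn = {"amount": 120.0, "country": "FR", "hour": 14}
print(rule_based_risk(txn))  # True, even though the transaction is benign
```

Each newly discovered fraud pattern means another lambda in the list, which is exactly how rulesets balloon into the hundreds and begin to overlap.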
Automated decision-making is not new. Computers sift through data, spotting patterns and anomalies a lot faster than humans can. However, the application of artificial intelligence (AI) and machine learning is ushering in the next generation of analytics modeling using intelligent automation.
Improving Accuracy With Next-Generation Technologies
At first glance, intelligent automation approaches might seem similar to existing solutions, but technology innovation brings considerable change to the outcomes. New capabilities make more granular models possible, shifting from group to individual decisions.
Increased transaction volumes and higher data availability provide the inputs required for more accurate results. Greater processing power from cluster computing, distributed data management and cloud storage enables this data to be collected and used. Finally, advanced algorithms in AI and machine learning examine data quickly, update frequently, and learn from patterns and correlations to improve results over time.
Shifting the Roles of Rules
To take full advantage of new capabilities in AI and machine learning, the function of rules must shift. Instead of designing analytical models using rules to make decisions, businesses can develop rules of learning. This approach entails using algorithms to process all cases and examine individual behaviors. This allows organizations to move beyond sampling and segment-based treatments.
Sampling data only allows automated decision-making to identify segments that share high-level characteristics. This over-generalization increases the chances of false positives and negatives. While human intelligence remains a critical part of data analysis, it is limited to previously identified cases. Machine learning techniques, on the other hand, process huge amounts of data in real time to quickly identify newly emerging unknown patterns.
Even in known cases and with supervised learning, it is more efficient to let computer algorithms find relationships and perform causality analysis automatically. Manually applying rules is error-prone, limited in analytical depth, and subject to human bias.
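The contrast with the hand-coded approach can be sketched in a few lines. In this toy supervised-learning example (synthetic labeled data, invented for illustration), the decision boundary is learned from individual labeled cases rather than written by a human, so it adapts as new cases arrive.

```python
# A toy supervised-learning sketch (synthetic data): learn the decision
# threshold from labeled individual cases instead of hand-writing a rule.
def learn_threshold(cases):
    """Pick the amount threshold that minimizes errors on labeled cases."""
    candidates = sorted({amount for amount, _ in cases})
    best_t, best_err = None, float("inf")
    for t in candidates:
        # Count misclassifications if we flag every amount >= t as risky.
        err = sum((amount >= t) != risky for amount, risky in cases)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Labeled (amount, is_risky) examples; the learned cut-off fits the data.
cases = [(100, False), (250, False), (900, True), (1200, True), (300, False)]
print(learn_threshold(cases))  # 900
```

Retraining on fresh cases moves the threshold automatically; in the rules-based world, the same adjustment requires a human to notice the drift and rewrite the rule.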
Deploying Advanced Analytics at Scale
Though many organizations are enthusiastic about adopting intelligent automation for risk assessment, few solutions are widely deployed in live operations. Challenges include a shortage of experienced data scientists, gaps between business and technology expertise, and a maturity curve for deploying prototyping tools at scale in production environments. Most organizations have valuable data but lack the expertise to create the algorithms that would extract insight from it. A shortage of data science experience limits the sophistication of analytical models, and the right technical personnel often lack a deep understanding of the business. For more meaningful modeling, technical and business experts must work together to align with organizational goals.
Indeed, one report suggests America alone is short more than 151,000 data scientists. Another article indicates it will only get worse, as the need for data scientists is poised to increase 28% by 2020. A McKinsey Global Institute study about competing in a data-driven world comments on the data scientist shortage and related challenges.
“The biggest barriers companies face in extracting value from data and analytics are organizational. Many struggle to incorporate data-driven insights into day-to-day business processes,” McKinsey reports. “Another challenge is attracting and retaining the right talent – not only data scientists but business translators who combine data savvy with industry and functional expertise.”
Finally, many advanced analytical models in development today are prototypes. To reap the benefits of AI and machine learning, a model needs to scale in production. Hybrid approaches—building the model offline, generating insights and then manually building rules based on the insights—restrict the model’s capabilities. Transitioning from prototype to production is easier when partnering with third parties that specialize in integrating advanced analytics pipelines into operations.
To keep pace with a rapidly moving, globally connected digital world, analytics approaches that embrace intelligent automation will pave the way to faster, more accurate risk evaluation.
About the Author: Dr. Ali Akbari is the senior artificial intelligence lead at Unisys in Asia-Pacific, specializing in unstructured data and image processing. He has enabled the practical application of artificial intelligence and big data in multiple industries to address their real-life business challenges, including risk management. Unisys data scientists and subject matter experts put advanced analytics into practice, helping organizations in the public and private sectors to improve efficiency, assess risk and enhance customer experiences.