

Amazon Web Services used a big data conference in the backyard of some of its largest government customers to showcase its AI and machine learning tools that are helping to funnel ever-larger volumes of data into its storage and computing infrastructure.
Making a pitch for better data management tools like metadata systems, AWS executives addressing a big data conference in Tysons Corner, Va., said the public cloud giant aims to go beyond democratizing big data to “demystify” AI and machine learning.
The combination of organized data and analytics will accelerate the building and deployment of machine learning models, many of which currently never make it to production. Those that are deployed often require up to 18 months to roll out, noted Ben Snively, a solution architect at AWS (NASDAQ: AMZN).
Open source tools for model development often advance a generation or two in the time it takes many enterprises to develop, train and launch a machine learning model, he added.
Snively asserted that the combination of big data and analytics along with AI and machine learning creates a “flywheel effect” in which organized, accessible data leads to faster insights, better products and—completing the cycle—more data.
(Hence, the cloud vendor forecasts as much as 180 zettabytes of widely varied and fast-moving data by 2025.)
As it seeks to demystify machine automation technologies and move beyond the current technology “hype phase,” AWS executives note that deployment of machine learning models and, eventually, full-blown platforms, remains hard. Among the reasons are “dirty” data that must be cleansed to foster access. The company estimates that 80 percent of data lakes currently lack metadata management systems that help determine data sources, formats and other attributes needed to wrangle big data.
That makes the heavy investments in data lakes “inefficient,” stressed Alan Halamachi, a senior manager for AWS solution architectures. “If data is not in a format where it can be widely consumed and accessible,” he said, machine learning developers will find themselves in “data jail.”
Once big data is wrangled and secured—“Hackers would like nothing more than to engineer a single breach with access to all of it,” Halamachi said—it can be combined with analytics on the inference side to accelerate training of machine learning models, Snively said.
Noting that most machine learning models built by enterprises never make it to production, the AWS engineers pitched several new tools, including the company’s SageMaker machine and deep learning stack introduced in November. Snively described SageMaker as a tool for taking the “muck” out of developing machine learning models, one also designed to free data scientists from IT chores like standing up a server for model development.
The cloud giant is seeing more experimentation among its customers as they seek to connect big data with machine learning development. “Voice [recognition] systems are here to stay,” Snively asserted, and developers are investigating “new ways of interacting with those systems.”
“It’s really about demystifying AI and machine learning” and getting beyond the “magic box” phase, he added.
Recent items:
AWS Takes the ‘Muck’ Out of ML with Sagemaker
How to Make Deep Learning Easy