September 27, 2016

Splice Machine Announces Native PL/SQL Support to Accelerate Migrations From Oracle to Hadoop

NEW YORK, N.Y., Sept. 27 — Splice Machine, provider of the open-source SQL RDBMS powered by Hadoop and Spark, today announced at Strata + Hadoop World in New York that it now supports native PL/SQL on Splice Machine. PL/SQL support dramatically reduces the time and cost for companies to offload their big data workloads from Oracle databases. It is available immediately through the Splice Machine Enterprise Edition. 

Companies are dealing with exponential data growth and a constant demand for new applications that deliver deeper, real-time insights. Meeting these challenges with Oracle's scale-up technologies can be cost-prohibitive, yet companies have been slow to migrate away from Oracle because their mission-critical applications depend on transactional SQL and custom application logic captured in PL/SQL. The Splice Machine RDBMS can now address these issues.

“Despite the proliferation of new open source scale-out solutions, and the exorbitant cost of using scale-up technologies, many enterprises still rely on Oracle to manage their Big Data workloads,” said Roger Bamford, founding father of Oracle’s Real Application Clusters (RAC) and Grid products, and an advisor for Splice Machine. “That’s because before, if you had thousands of PL/SQL stored procedures, it might take six months or more to migrate onto a Java-based platform like Hadoop. Now, with Splice Machine PL/SQL support, companies can streamline migration and extend the lifetime of their applications. It’s the first tool of its kind to support seamlessly moving from Oracle to an affordable, scale-out platform.” 

Splice Machine PL/SQL support has two components:

1. The compiler converts the PL/SQL into a fully type-checked and optimized runtime representation.

2. The interpreter executes the optimized runtime representation with PL/SQL semantics, so you can be sure your application will behave the same on Splice Machine as it does on Oracle. It maintains a procedural context and handles all scoping for variable dereferencing, iteration, and conditional testing, and it dispatches all DDL and DML to the Splice Machine RDBMS for execution.
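To illustrate the kind of procedural logic these components handle, consider a short stored procedure of the sort commonly migrated from Oracle. This is a generic, hypothetical example (the table and procedure names are illustrative, not from the announcement): it combines variable scoping, cursor iteration, conditional testing, and DML statements that the interpreter would dispatch to the underlying RDBMS.

```sql
-- Hypothetical archiving procedure; names are illustrative only.
CREATE OR REPLACE PROCEDURE archive_old_orders (p_cutoff IN DATE) AS
  v_moved NUMBER := 0;  -- variable declaration handled by procedural scoping
BEGIN
  -- Cursor iteration over rows older than the cutoff
  FOR rec IN (SELECT order_id FROM orders WHERE order_date < p_cutoff) LOOP
    -- DML dispatched to the RDBMS for execution
    INSERT INTO orders_archive
      SELECT * FROM orders WHERE order_id = rec.order_id;
    DELETE FROM orders WHERE order_id = rec.order_id;
    v_moved := v_moved + 1;
  END LOOP;
  -- Conditional testing in the procedural context
  IF v_moved > 0 THEN
    COMMIT;
  END IF;
END archive_old_orders;
```

In a migration, a procedure like this would be compiled once into the type-checked runtime representation and then interpreted with PL/SQL semantics, rather than being hand-rewritten in Java.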

“Many Splice Machine customers are developing artificial intelligence applications that deploy machine learning models,” said Monte Zweben, CEO, Splice Machine. “These new intelligent applications learn to advise doctors about drug trials and dangerous hospital events, detect fraud or security breaches, and discover trends and events in IoT applications. Until now, this real-time intelligence was nearly impossible to achieve in legacy PL/SQL applications because of the long time it took to get data out of the PL/SQL engine into an analytical framework via a process commonly called ETL. Now, with this PL/SQL capability, old applications can run their native application logic while simultaneously running new machine learning processes on Splice Machine’s RDBMS powered by Apache Spark and Apache Spark MLlib. We can make old applications ‘intelligent.’”

About Splice Machine

Splice Machine is disrupting the $30 billion traditional database market with its open-source RDBMS, powered by Apache Hadoop and Apache Spark, for mixed operational and analytical workloads. Splice Machine makes it easy to create modern, real-time, scalable applications, or to offload operational and analytical workloads from expensive Oracle, Teradata, and Netezza systems.


Source: Splice Machine

Datanami