Classifier Tuning is BigID’s Solution to Boost Data Classification Accuracy
BigID has made it easier to adjust machine-learning classification models in real time, thanks to a new feature of its data intelligence platform called classifier tuning.
Classifier tuning lets users manually adjust classification models to improve classification accuracy, with no advanced coding skills required.
Businesses are faced with managing an immense amount of data, both structured and unstructured, stored in cloud, local, and hybrid environments. As a key part of data governance, data classification enables the security and governance of multiple data sources by discovering and identifying sensitive data that may be duplicated in more than one place.
“Organizations need to know what data they have across all of these varied sources and types of data for security, privacy, compliance, and governance initiatives – and they need to know what data needs to be protected and what data is good to use,” the company said in a release.
The BigID platform currently offers ML auto-classification of data, since manual classification lacks the scalability needed for a complete view of all data, the company says. ML models can return false positives, however, and training models to increase their accuracy via human interaction could be the solution.
BigID says that until now, there was no way to interact with model results and easily adjust classification models without complex coding. The company asserts that classifier tuning solves this by providing a user-friendly interface over its automated classifiers, allowing users to accept or reject classifier matches for specific data objects.
A company blog post explains that BigID users can preview a sample of the classified results in any structured or unstructured data asset and confirm that the data matches the assigned classifier. The classifier can then be tuned, depending on its accuracy. If the classifier is generating accurate results, it can be validated and stamped with a verification date. If it is mostly accurate with a few false positives, the model can be tuned with phrases to ignore. If the classifier is not generating useful results, users can remove it entirely.
“Classifier tuning allows BigID customers to have more accurate classification with the combination of human intelligence and artificial intelligence,” said Maor Pichadze, senior product manager at BigID. “Data owners can stop wasting time manually correcting inaccurate classification again and again from other systems.”
BigID has recently introduced several new features aimed at solving specific data governance challenges. In April, the company launched BigAI, an AI engine for accelerating data security, governance, and risk management initiatives.
BigAI uses BigID’s own private models and servers, so no data is shared with public models, the company said in a statement. BigAI automatically changes the names of data tables and columns to be more conducive for analysis, generates search-friendly cluster titles for document clustering, and enlists a virtual personal assistant called BigChat to generate answers to complex technical queries.