5 Critical Steps for Identifying the Value in Your Unstructured Information
I have spent many years helping organizations gain control over their unstructured data and information. The days when we looked for a single content management solution to capture and manage our valued unstructured information are long gone.
Organizations are now faced with the challenge of implementing multiple content management solutions that target different tiers within the organization:
- Local business unit applications – these applications manage data and content localized to a single business unit
- Cross business unit applications – these applications manage data and content that is used by multiple business units
- Enterprise applications – these applications manage data and content that is used across the enterprise
Implementing a diverse set of content management solutions requires a consistent approach that reuses a proven methodology. All successful content management implementations address these five basic steps:
- Discover – how can we discover and categorize the valuable data contained in file shares and the many other repositories?
- Organize – how can we organize information so that business users can identify, manage, and control it?
- Govern – what policies are required to make the data available, accessible and reliable?
- Manage – how do we manage the data lifecycle to meet compliance/retention requirements?
- Analyze – how can users access the data and turn it into assets that can be used to make business decisions?
Unstructured data or information is routinely found in many repositories, including email, file shares, Google Drive, Dropbox and SharePoint, to name just a few. The ever-growing number of files has resulted in a problem I call digital hoarding. In many cases, digital hoarding has been caused by information that was mismanaged from the very beginning, while in other cases the availability of cheap storage has quickly led to the out-of-control proliferation of repositories.
Manually reviewing and organizing the vast quantity of information that we’ve stored is often a daunting, if not impossible, task. It is critical in the cleanup process that we identify and separate the files that require management from those that can be either deleted or simply left alone. Utilizing an automated tool that scans and groups your large volume of information is the best approach to this time-consuming task. A good example is such a tool’s ability to differentiate files that contain personally identifiable information (PII) from those related to contracts. Understanding the universe of files and their associated groupings is a critical task when designing and implementing a content management solution.
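To make this concrete, here is a minimal sketch of the kind of scan-and-group pass such a tool performs. The regex patterns, contract keywords, and the /mnt/fileshare path are illustrative assumptions, not the detection logic of any real product:

```python
import re
from pathlib import Path

# Hypothetical patterns for illustration only; a real discovery tool uses
# far more robust detection (ML classifiers, file metadata, checksums, etc.).
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # US SSN-like number
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-like number
]
CONTRACT_KEYWORDS = ("hereinafter", "indemnification", "governing law")

def classify(path: Path) -> str:
    """Assign a file to a coarse grouping based on its text content."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return "unreadable"
    if any(p.search(text) for p in PII_PATTERNS):
        return "pii"
    if any(k in text.lower() for k in CONTRACT_KEYWORDS):
        return "contract"
    return "unclassified"

def scan(root: str) -> dict:
    """Walk a file tree and bucket every file into a grouping."""
    groups: dict = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            groups.setdefault(classify(path), []).append(path)
    return groups

if __name__ == "__main__":
    for group, files in scan("/mnt/fileshare").items():  # path is an assumption
        print(f"{group}: {len(files)} files")
```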
Once we have discovered and sorted the digital information into stacks that I call file groupings, we need to add metadata that will make the information more valuable to the organization. This step involves organizing the information into document types that align with the organization’s business structure or taxonomy. It is critical that end-users understand how to access the information. End-users want to work in a familiar environment and should not be forced to think differently when trying to access information.
Developing the document types and the metadata framework for each document type is often a long and tedious effort. Associating information with document types in a predefined taxonomy simplifies the assignment of metadata. The maturity of automated tools, such as AI and machine learning, has improved the identification and assignment of metadata, resulting in higher accuracy and quality of the information that describes the content.
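As a sketch of what this might look like in practice, the snippet below models a tiny, hypothetical taxonomy in which each document type declares the metadata fields it requires; the type names and fields are invented for illustration:

```python
from dataclasses import dataclass, field

# A hypothetical slice of a corporate taxonomy: each document type declares
# the metadata fields it requires. Real taxonomies are much larger.
TAXONOMY = {
    "invoice":  ["customer_id", "invoice_date", "amount"],
    "contract": ["counterparty", "effective_date", "expiry_date"],
}

@dataclass
class Document:
    name: str
    doc_type: str
    metadata: dict = field(default_factory=dict)

def tag(name: str, doc_type: str, **metadata) -> Document:
    """Attach a document type and validate its required metadata."""
    missing = [f for f in TAXONOMY[doc_type] if f not in metadata]
    if missing:
        raise ValueError(f"{name}: missing required metadata {missing}")
    return Document(name, doc_type, metadata)

doc = tag("acme_msa.pdf", "contract",
          counterparty="Acme Corp",
          effective_date="2019-01-01",
          expiry_date="2022-01-01")
print(doc.doc_type, doc.metadata)
```

Validating metadata at the point of tagging keeps content aligned with the taxonomy as it is classified, rather than requiring cleanup afterward.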
Once you have your unstructured data organized, the next step is to develop a governance program that defines and enforces security, consistency and retention policies. Information governance ensures that the unstructured data is available, accessible and reliable when needed for analysis.
Timely access to the right information is critical when making strategic decisions. Too many times the wrong information is used in making decisions or communicating with interested parties. Applying consistent policies to information provides users with the assurance that they can access and trust the information.
An effective information governance program will not only ensure that you have reliable information but will also ensure that you have implemented the right policies to keep your information protected and your executives out of jail.
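One way to picture such a program is as a policy table that ties a security classification to who may access content and how long it must be kept. The classifications, roles, and retention periods in this sketch are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    classification: str   # e.g. "public", "internal", "restricted"
    allowed_roles: tuple  # who may read documents at this level
    retention_years: int  # how long documents must be kept

# Illustrative policy table; real classifications, roles, and retention
# periods come from your legal and compliance teams.
POLICIES = {
    "public":     Policy("public", ("everyone",), 1),
    "internal":   Policy("internal", ("employee",), 3),
    "restricted": Policy("restricted", ("legal", "compliance"), 7),
}

def can_access(role: str, classification: str) -> bool:
    """Enforce the access-control side of the governance policy."""
    allowed = POLICIES[classification].allowed_roles
    return "everyone" in allowed or role in allowed

print(can_access("employee", "restricted"))  # False
print(can_access("legal", "restricted"))     # True
```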
The management step in the methodology refers to the information lifecycle, which spans from the creation of information to its final deletion. The lifecycle phases cover creation, revision, approval, promotion, retention, and destruction.
This step establishes the information, security, and retention architecture. Defining an effective information architecture will ensure the unstructured data is secure and meets compliance and records management requirements.
Effective organization and management of your information creates an environment that gives your users better access to, and control over, their information. Applying retention policies helps users follow the rules for retaining, and ultimately deleting, information within a defined timeframe.
A good rule of thumb is that if there is no compliance or regulatory need to manage the information and you will never need to go back to the content for analysis or reporting, there is no need to keep it.
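A minimal sketch of how a retention check might look in code follows; the document types and retention periods are assumptions chosen for illustration:

```python
from datetime import date, timedelta

# Hypothetical retention periods (in days) per document type; real schedules
# come from records management and regulatory requirements.
RETENTION_DAYS = {
    "invoice":  7 * 365,
    "contract": 10 * 365,
    "draft":    90,
}

def is_expired(doc_type: str, created: date, today: date) -> bool:
    """True when a document has passed its retention period and is
    eligible for destruction."""
    return today - created > timedelta(days=RETENTION_DAYS[doc_type])

# A 90-day draft created on January 1 has expired by mid-September.
print(is_expired("draft", date(2019, 1, 1), date(2019, 9, 18)))     # True
print(is_expired("contract", date(2015, 6, 1), date(2019, 9, 18)))  # False
```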
The final step in the methodology is the ability to access and use the information for analysis and reporting. A well-defined information architecture enables fast, reliable access to the content. New content analytics tools have emerged that help uncover the value buried in the information.
About the Author: Alan Weintraub is a senior information management leader and evangelist at DocAuthority. As an AIIM Fellow, he is focused on helping organizations maximize the value of their information. A former industry analyst at Forrester and Gartner, Alan is a recognized expert on multiple aspects of enterprise information management (EIM) including information governance (both data and content governance), enterprise content management, data management, digital rights management, and digital asset management. Get in touch with Alan on LinkedIn and Twitter.