5 Critical Steps for Identifying the Value in Your Unstructured Information
I have spent many years helping organizations gain control over their unstructured data and information. The days when we looked for a single content management solution to capture and manage our valued unstructured data are long gone.
Organizations are now faced with the challenge of implementing multiple content management solutions focused on many tiers within the organization:
- Local business unit applications – these applications manage data and content localized to a single business unit
- Cross business unit applications – these applications manage data and content that is used by multiple business units
- Enterprise applications – these applications manage data and content that is used across the enterprise
Implementing a diverse set of content management solutions requires a consistent approach that reuses a proven methodology. All successful content management implementations address these basic five steps:
- Discover – how can we discover and categorize the valuable data contained in file shares and the many other repositories?
- Organize – how can we organize information so the business users can identify, manage, and control it?
- Govern – what are the policies required to make the data available, accessible, and reliable?
- Manage – how do we manage the data lifecycle to meet compliance and retention requirements?
- Analyze – how can users access the data and turn it into assets that can be used to make business decisions?
Unstructured data or information is routinely found in many repositories, including email, file shares, Google Drive, Dropbox, and SharePoint, to name just a few. The ever-growing number of files has resulted in a problem I call digital hoarding. In many cases, digital hoarding has been caused by mismanaged information from the very beginning, while in other cases the availability of cheap storage has quickly led to the out-of-control proliferation of repositories.
Manually reviewing and organizing the vast quantity of information that we’ve stored is often a daunting, even impossible task. It is critical in the cleanup process that we identify and separate the files that require management from those that can be either deleted or simply left alone. Utilizing an automated tool that scans and groups large amounts of information is the best approach to accomplishing this time-consuming task. A good example is such a tool’s ability to differentiate files that contain personally identifiable information (PII) from those that are related to contracts. Understanding the universe of files and their associated groupings is a critical task when designing and implementing a content management solution.
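To make the discovery step concrete, here is a minimal sketch of how such a scan-and-group pass might work. The patterns and group names are my own illustrative assumptions; a real discovery tool would use far more sophisticated detection than a single SSN regex and a keyword list.

```python
import re
from collections import defaultdict
from pathlib import Path

# Hypothetical detection rules for illustration only.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude PII indicator
CONTRACT_KEYWORDS = ("agreement", "contract", "terms and conditions")

def group_files(root: str) -> dict:
    """Scan a directory tree and sort text files into coarse groupings:
    files with apparent PII, files that look like contracts, and the rest."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        if SSN_PATTERN.search(text):
            groups["pii"].append(path.name)
        elif any(keyword in text for keyword in CONTRACT_KEYWORDS):
            groups["contracts"].append(path.name)
        else:
            groups["unclassified"].append(path.name)
    return dict(groups)
```

The output of a pass like this is exactly the kind of file grouping the next step organizes into document types.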
Once we have discovered and sorted the digital information into stacks that I call file groupings, we now need to add metadata that will make the information more valuable to the organization. This step in the process involves the organization of the information into document types that align with the organization’s business structure or taxonomy. It is critical that the end-users understand how to access the information. End-users want to work in a familiar environment and should not be forced to think differently when trying to access information.
Developing the document types and the metadata framework for each document type is often a long and tedious effort. Associating information with document types in a predefined taxonomy simplifies the assignment of metadata. The maturity of automated tools, such as AI and machine learning, has led to better capabilities for identifying and assigning metadata, resulting in higher accuracy and quality of the information that describes the content.
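As a sketch of what assigning document types from a predefined taxonomy can look like, the fragment below maps filenames to document types using simple keyword hints. The taxonomy entries and field names are hypothetical; an ML-driven tool would classify on content, not just filenames.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical taxonomy: document types mapped to filename hints.
TAXONOMY = {
    "invoice": ("invoice", "inv-"),
    "contract": ("contract", "agreement"),
    "hr_record": ("resume", "offer-letter"),
}

@dataclass
class DocumentMetadata:
    document_type: str
    source_path: str
    ingested: date = field(default_factory=date.today)

def assign_metadata(filename: str) -> DocumentMetadata:
    """Assign a document type by matching the filename against taxonomy hints;
    anything that matches nothing falls back to 'unclassified'."""
    lowered = filename.lower()
    for doc_type, hints in TAXONOMY.items():
        if any(hint in lowered for hint in hints):
            return DocumentMetadata(doc_type, filename)
    return DocumentMetadata("unclassified", filename)
```

Once every file carries a document type, downstream governance and retention policies can be attached to the type rather than to individual files.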
Once you have your unstructured data organized, the next step is to develop a governance program that defines and enforces security, consistency, and retention policies. Information governance ensures that the unstructured data is available, accessible, and reliable when needed for analysis.
Timely access to the right information is critical when making strategic decisions. Too many times the wrong information is used in making decisions or communicating with interested parties. Applying consistent policies to information provides users with the assurance that they can access and trust the information.
An effective information governance program will not only ensure that you have reliable information, but also that you have implemented the right policies to keep your information protected and your executives out of jail.
The management step in the methodology refers to the information lifecycle that spans the creation to final deletion of information. The lifecycle phases cover creation, revision, approval, promotion, retention, and destruction.
Management will establish the information, security, and retention architecture. Defining an effective information architecture will ensure the unstructured data is secure and meets compliance and records management requirements.
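The lifecycle phases listed above behave like a small state machine: a document moves from creation through revision and approval to promotion, then retention, and finally destruction. The sketch below encodes those transitions; the specific transition table (for example, approval looping back to revision) is my assumption, not a prescribed standard.

```python
from enum import Enum

class Phase(Enum):
    CREATION = "creation"
    REVISION = "revision"
    APPROVAL = "approval"
    PROMOTION = "promotion"
    RETENTION = "retention"
    DESTRUCTION = "destruction"

# Hypothetical allowed transitions; approval may send a document
# back to revision before it is promoted.
TRANSITIONS = {
    Phase.CREATION: {Phase.REVISION},
    Phase.REVISION: {Phase.APPROVAL},
    Phase.APPROVAL: {Phase.REVISION, Phase.PROMOTION},
    Phase.PROMOTION: {Phase.RETENTION},
    Phase.RETENTION: {Phase.DESTRUCTION},
    Phase.DESTRUCTION: set(),
}

def advance(current: Phase, target: Phase) -> Phase:
    """Move a document to the requested phase, rejecting invalid jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```

Modeling the lifecycle explicitly makes it easy to enforce that, for example, nothing is destroyed before it has passed through retention.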
Effective organization and management of your information creates an environment that provides your users better access and controls over their information. Applying retention policies to information will help users follow the rules for retaining and ultimately deleting information within a defined timeframe.
A good rule of thumb: if there is no compliance or regulatory need to manage the information, and you will never need to go back to the content for analysis or reporting, there is no need to keep it.
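That rule of thumb can be sketched as a simple retention check. The function and its parameters are illustrative assumptions; real retention schedules are set per document type by records managers and legal counsel.

```python
from datetime import date, timedelta
from typing import Optional

def retention_action(last_modified: date,
                     retention_days: Optional[int],
                     needed_for_analysis: bool,
                     today: date) -> str:
    """Apply the rule of thumb: no compliance-driven retention period and
    no analytical value means the information need not be kept; otherwise
    delete only once the retention period has expired."""
    if retention_days is None and not needed_for_analysis:
        return "delete"
    if retention_days is not None and \
            today - last_modified > timedelta(days=retention_days):
        return "delete"  # retention period has expired
    return "retain"
```

A policy engine applying checks like this across the file groupings from the discovery step is what turns retention from a guideline into an enforced control.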
The final step in the methodology is accessing and using the information for analysis and reporting. A well-defined information architecture enables fast, reliable access to the content, and new content analytics tools have emerged that help uncover the value buried in the information.
About the Author: Alan Weintraub is a senior information management leader and evangelist at DocAuthority. As an AIIM Fellow, he is focused on helping organizations maximize the value of their information. A former industry analyst at Forrester and Gartner, Alan is a recognized expert on multiple aspects of enterprise information management (EIM) including information governance (both data and content governance), enterprise content management, data management, digital rights management, and digital asset management. Get in touch with Alan on LinkedIn and Twitter.
July 23, 2021