5 Critical Steps for Identifying the Value in Your Unstructured Information
I have spent many years helping organizations gain control over and manage their unstructured data (information). The days when we looked for a single content management solution to capture and manage that valued data are long gone.
Organizations are now faced with the challenge of implementing multiple content management solutions focused on many tiers within the organization:
- Local business unit applications – these manage data and content localized to a single business unit
- Cross-business-unit applications – these manage data and content used by multiple business units
- Enterprise applications – these manage data and content used across the entire enterprise
Implementing a diverse set of content management solutions requires a consistent approach that reuses a proven methodology. All successful content management implementations address these five basic steps:
- Discover – how can we discover and categorize the valuable data contained in file shares and other repositories?
- Organize – how can we organize information so that business users can identify, manage, and control it?
- Govern – what policies are required to make the data available, accessible, and reliable?
- Manage – how do we manage the data lifecycle to meet compliance and retention requirements?
- Analyze – how can users access the data and turn it into assets that can be used to make business decisions?
Unstructured data or information is routinely found in many repositories, including email, file shares, Google Drive, Dropbox, and SharePoint, to name just a few. The ever-growing number of files has resulted in a problem I call digital hoarding. In many cases, digital hoarding has been caused by information that was mismanaged from the very beginning; in other cases, the availability of cheap storage has led to the out-of-control proliferation of repositories.
Manually reviewing and organizing the vast quantity of information that we’ve stored is often a daunting, if not impossible, task. It is critical in the cleanup process that we identify and separate the files that require management from those that can be either deleted or simply left alone. Using an automated tool that scans and groups your large volume of information is the best approach to this time-consuming task. A good example is such a tool’s ability to differentiate files that contain PII from those that are related to contracts. Understanding the universe of files and their associated groupings is a critical task when designing and implementing a content management solution.
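To make the discovery step concrete, here is a minimal sketch in Python. It assumes plain-text files and uses hand-written regex heuristics (an SSN-like pattern as a PII signal, a few contract keywords); the scan root and patterns are illustrative only, and commercial discovery tools rely on trained classifiers rather than rules like these.

```python
import re
from pathlib import Path

# Illustrative heuristics only; a real discovery tool would use trained
# classifiers instead of hand-written patterns.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # SSN-like strings
CONTRACT_PATTERN = re.compile(r"\b(agreement|contract|counterparty)\b", re.I)

def discover(root: str) -> dict[str, list[Path]]:
    """Scan a directory tree and sort files into rough groupings."""
    groups: dict[str, list[Path]] = {"pii": [], "contracts": [], "unclassified": []}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        if PII_PATTERN.search(text):
            groups["pii"].append(path)
        elif CONTRACT_PATTERN.search(text):
            groups["contracts"].append(path)
        else:
            groups["unclassified"].append(path)
    return groups

if __name__ == "__main__":
    # "/data/file-share" is a hypothetical scan root.
    for group, files in discover("/data/file-share").items():
        print(f"{group}: {len(files)} files")
```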
Once we have discovered and sorted the digital information into stacks that I call file groupings, we need to add metadata that will make the information more valuable to the organization. This step involves organizing the information into document types that align with the organization’s business structure, or taxonomy. It is critical that end-users understand how to access the information: they want to work in a familiar environment and should not be forced to think differently when trying to access information.
Developing the document types and the metadata framework for each document type is often a long and tedious effort. Associating information with document types in a predefined taxonomy simplifies the assignment of metadata. The maturing of automated tools based on AI and machine learning has led to better capabilities for identifying and assigning metadata, resulting in higher accuracy and quality in the information that describes the content.
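As an illustration, the sketch below models a small taxonomy fragment as document types, each with the metadata fields it requires. The type names and fields are hypothetical, not a prescribed schema; the useful pattern is validating captured metadata against the taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class DocumentType:
    """One node in a (hypothetical) business taxonomy."""
    name: str
    business_unit: str
    required_metadata: list[str] = field(default_factory=list)

# Illustrative taxonomy fragment; real taxonomies are much larger.
TAXONOMY = {
    "contract": DocumentType("Contract", "Legal",
                             ["counterparty", "effective_date", "expiry_date"]),
    "invoice": DocumentType("Invoice", "Finance",
                            ["vendor", "invoice_number", "amount"]),
}

def missing_metadata(doc_type_key: str, metadata: dict[str, str]) -> list[str]:
    """Return required fields that the captured metadata does not yet supply."""
    required = TAXONOMY[doc_type_key].required_metadata
    return [f for f in required if f not in metadata]

print(missing_metadata("contract", {"counterparty": "Acme Corp"}))
# -> ['effective_date', 'expiry_date']
```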
Once you have your unstructured data organized, the next step is to develop a governance program that defines and enforces security, consistency, and retention policies. Information governance ensures that the unstructured data is available, accessible, and reliable when needed for analysis.
Timely access to the right information is critical when making strategic decisions. Too often, the wrong information is used in making decisions or communicating with interested parties. Applying consistent policies to information provides users with the assurance that they can access and trust the information.
An effective information governance program will not only ensure that you have reliable information but also ensure that you have implemented the right policies to keep your information protected and your executives out of jail.
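Governance policies can be expressed as simple, declarative rules. The sketch below pairs each hypothetical document type with the roles allowed to read it and a retention period; a real program covers far more (legal holds, encryption, auditing), but the declarative shape is the same.

```python
# Illustrative policy table: who may read each document type, and how
# long it must be retained. Roles and periods are hypothetical.
POLICIES = {
    "contract": {"readers": {"Legal", "Finance"}, "retention_years": 7},
    "invoice":  {"readers": {"Finance"},          "retention_years": 10},
}

def can_read(doc_type: str, role: str) -> bool:
    """Enforce the access half of the policy."""
    return role in POLICIES[doc_type]["readers"]

assert can_read("contract", "Legal")
assert not can_read("invoice", "Legal")
```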
The management step in the methodology refers to the information lifecycle, which spans from the creation of information to its final deletion. The lifecycle phases cover creation, revision, approval, promotion, retention, and destruction.
Management establishes the information, security, and retention architecture. Defining an effective information architecture ensures that the unstructured data is secure and meets compliance and records management requirements.
Effective organization and management of your information creates an environment that gives your users better access to, and control over, their information. Applying retention policies helps users follow the rules for retaining and ultimately deleting information within a defined timeframe.
A good rule of thumb: if there is no compliance or regulatory need to manage the information and you will never need to go back to the content for analysis or reporting, there is no need to keep the information.
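Continuing the sketch, a lifecycle check against this rule of thumb might look like the following. The retention periods are illustrative, and the 365-day year is a simplification of how real records management systems compute retention.

```python
from datetime import date, timedelta

# Hypothetical retention periods, matching the policy table above.
RETENTION_YEARS = {"contract": 7, "invoice": 10}

def eligible_for_destruction(doc_type: str, created: date,
                             today: date | None = None) -> bool:
    """True once a document has outlived its retention requirement."""
    today = today or date.today()
    return today - created > timedelta(days=365 * RETENTION_YEARS[doc_type])

# True for any run date after early 2017 (7 years past creation).
print(eligible_for_destruction("contract", date(2010, 1, 15)))
```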
The final step in the methodology is the ability to access and use the information for analysis and reporting. A well-defined information architecture enables fast, reliable access to the content. New content analytics tools have emerged that help uncover the value buried in the information.
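As a final illustration, even a crude term-frequency pass over a file grouping can hint at what the content is about. The sketch below assumes the plain-text groupings produced by the discovery sketch earlier; dedicated content analytics tools go far beyond this, extracting entities, topics, and relationships.

```python
import re
from collections import Counter
from pathlib import Path

def term_frequencies(paths: list[Path], top_n: int = 10) -> list[tuple[str, int]]:
    """Crude content analytics: the most frequent terms across a grouping."""
    counts = Counter()
    for path in paths:
        counts.update(re.findall(r"[a-z]{4,}", path.read_text(errors="ignore").lower()))
    return counts.most_common(top_n)

# e.g., with the discovery output above:
# term_frequencies(discover("/data/file-share")["contracts"])
```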
About the Author: Alan Weintraub is a senior information management leader and evangelist at DocAuthority. As an AIIM Fellow, he is focused on helping organizations maximize the value of their information. A former industry analyst at Forrester and Gartner, Alan is a recognized expert on multiple aspects of enterprise information management (EIM) including information governance (both data and content governance), enterprise content management, data management, digital rights management, and digital asset management. Get in touch with Alan on LinkedIn and Twitter.