5 Critical Steps for Identifying the Value in Your Unstructured Information
I have spent many years helping organizations gain control over their unstructured data, or information. The days when we looked for a single content management solution to capture and manage our valued unstructured information are long gone.
Organizations are now faced with the challenge of implementing multiple content management solutions focused on many tiers within the organization:
- Local business unit applications – these applications manage data and content localized to a single business unit
- Cross business unit applications – these applications manage data and content that is used by multiple business units
- Enterprise applications – these applications manage data and content that is used across the entire enterprise
Implementing a diverse set of content management solutions requires a consistent approach that reuses a proven methodology. All successful content management implementations address these five basic steps:
- Discover – how can we discover and categorize the valuable data contained in file shares and the many other repositories?
- Organize – how can we organize information so that business users can identify, manage, and control it?
- Govern – what policies are required to make the data available, accessible, and reliable?
- Manage – how do we manage the data lifecycle to meet compliance/retention requirements?
- Analyze – how can users access the data and turn it into assets that can be used to make business decisions?
Unstructured data or information is routinely found in many repositories, including email, file shares, Google Drive, Dropbox, and SharePoint, to name just a few. The ever-growing number of files has resulted in a problem I call digital hoarding. In many cases, digital hoarding is caused by information that was mismanaged from the very beginning; in other cases, the availability of cheap storage has led to the out-of-control proliferation of repositories.
Manually reviewing and organizing the vast quantity of information we have stored is often a daunting, if not impossible, task. It is critical in the cleanup process that we identify and separate the files that require management from those that can be either deleted or simply left alone. Utilizing an automated tool that scans and groups your large volume of information is the best approach to this time-consuming task. A good example is such a tool's ability to differentiate files that contain PII from those that are related to contracts. Understanding the universe of files and their associated groupings is a critical task when designing and implementing a content management solution.
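To make the idea concrete, here is a minimal sketch of the kind of scan-and-group pass such tools automate. The group names, patterns, and directory path are purely illustrative assumptions; real discovery tools use trained classifiers rather than a handful of regular expressions:

```python
import re
from collections import defaultdict
from pathlib import Path

# Illustrative grouping rules: an SSN-like pattern stands in for PII
# detection, and a few legal phrases stand in for contract detection.
RULES = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "contract": re.compile(r"\b(agreement|hereinafter|indemnify)\b", re.IGNORECASE),
}

def scan_and_group(root):
    """Walk a directory tree and bucket text files into file groupings."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        matched = [name for name, rx in RULES.items() if rx.search(text)]
        # Files matching no rule fall into "unclassified" for the
        # delete / leave-alone decision described above.
        for name in matched or ["unclassified"]:
            groups[name].append(path)
    return groups

if __name__ == "__main__":
    for group, files in scan_and_group("./file_share").items():
        print(f"{group}: {len(files)} files")
```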
Once we have discovered and sorted the digital information into stacks that I call file groupings, we need to add metadata that will make the information more valuable to the organization. This step involves organizing the information into document types that align with the organization's business structure, or taxonomy. It is critical that end-users understand how to access the information. End-users want to work in a familiar environment and should not be forced to think differently when trying to access information.
Developing the document types and the metadata framework for each document type is often a long and tedious effort. Associating information with document types in a predefined taxonomy simplifies the assignment of metadata where it is needed. The maturity of automated tools, e.g., AI and machine learning, has led to better capabilities for identifying and assigning metadata, resulting in higher accuracy and quality of the information that describes the content.
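As a sketch of this organize step, the snippet below maps files into document types that each carry a predefined set of metadata fields. The taxonomy, type names, and field names are assumptions for illustration, not a prescribed model:

```python
from dataclasses import dataclass, field

# Hypothetical taxonomy: each document type predefines the metadata
# fields business users expect, so classification drives metadata.
TAXONOMY = {
    "contract": ["counterparty", "effective_date", "expiry_date"],
    "invoice": ["vendor", "amount", "due_date"],
    "hr_record": ["employee_id", "department"],
}

@dataclass
class Document:
    path: str
    doc_type: str
    metadata: dict = field(default_factory=dict)

def assign_metadata(path, doc_type, **values):
    """Attach only the metadata fields the taxonomy defines for this type."""
    allowed = set(TAXONOMY[doc_type])
    return Document(path, doc_type, {k: v for k, v in values.items() if k in allowed})

# A file from the "contract" grouping picks up contract metadata;
# fields outside the taxonomy (here, "color") are silently dropped.
doc = assign_metadata("shares/acme_msa.pdf", "contract",
                      counterparty="Acme Corp", effective_date="2020-01-15",
                      color="blue")
print(doc)
```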
Once you have your unstructured data organized, the next step is to develop a governance program that defines and enforces security, consistency, and retention policies. Information governance ensures that the unstructured data is available, accessible, and reliable when needed for analysis.
Timely access to the right information is critical when making strategic decisions. Too many times, the wrong information is used in making decisions or communicating with interested parties. Applying consistent policies to information provides users with the assurance that they can access and trust the information.
An effective information governance program will not only ensure that you have reliable information but will also ensure that you have implemented the right policies to keep your information protected and your executives out of jail.
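One way to picture such policies is as declarative records defined once per document type and enforced everywhere, rather than decided file by file. The policy fields, values, and role names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernancePolicy:
    doc_type: str
    access_roles: tuple   # who may read content of this type
    retention_years: int  # how long it must be kept
    encrypted_at_rest: bool

# Hypothetical policy set; the point is consistency, not the values.
POLICIES = {
    "contract": GovernancePolicy("contract", ("legal", "finance"), 7, True),
    "invoice": GovernancePolicy("invoice", ("finance",), 7, False),
    "hr_record": GovernancePolicy("hr_record", ("hr",), 10, True),
}

def can_access(role, doc_type):
    """Enforce the same access rule for every file of a given type."""
    return role in POLICIES[doc_type].access_roles

print(can_access("finance", "contract"))   # True
print(can_access("finance", "hr_record"))  # False
```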
The management step in the methodology refers to the information lifecycle, which spans from the creation of information to its final deletion. The lifecycle phases cover creation, revision, approval, promotion, retention, and destruction.
Management will establish the information, security, and retention architecture. Defining an effective information architecture will ensure the unstructured data is secure and meets compliance and records management requirements.
Effective organization and management of your information creates an environment that gives your users better access to, and control over, their information. Applying retention policies to information will help users follow the rules for retaining and ultimately deleting information within a defined timeframe.
A good rule of thumb is that if there is no compliance/regulatory need to manage the information and you never need to go back to the content for analysis/reporting, there is no need to keep the information.
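That rule of thumb translates directly into a disposition check. A minimal sketch, assuming hypothetical per-type retention periods and a simple yes/no flag for analytic value:

```python
from datetime import date

# Hypothetical retention periods per document type, in years.
RETENTION_YEARS = {"contract": 7, "invoice": 7, "hr_record": 10}

def disposition(created, doc_type, needed_for_analysis, today=None):
    """Apply the rule of thumb: keep only what compliance or analysis requires."""
    today = today or date.today()
    if doc_type is None:
        # No regulatory hook: keep it only if someone still needs it.
        return "retain" if needed_for_analysis else "delete"
    age_years = (today - created).days / 365.25
    if age_years > RETENTION_YEARS[doc_type]:
        return "destroy"  # the retention period has expired
    return "retain"

print(disposition(date(2010, 3, 1), "invoice", False, date(2020, 8, 10)))   # destroy
print(disposition(date(2019, 6, 1), None, False, date(2020, 8, 10)))        # delete
print(disposition(date(2019, 6, 1), "contract", False, date(2020, 8, 10)))  # retain
```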
The final step in the methodology is the ability to access and use the information for analysis and reporting. Having a well-defined information architecture enables fast, reliable access to the content. New content analytics tools have emerged that help uncover value buried in the information.
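Once content carries consistent document types and metadata, reporting reduces to simple queries over that metadata. A small sketch over a hypothetical inventory produced by the earlier steps:

```python
from collections import Counter

# Hypothetical inventory: records produced by the discover/organize steps.
inventory = [
    {"doc_type": "contract", "counterparty": "Acme Corp", "expiry_date": "2020-12-31"},
    {"doc_type": "contract", "counterparty": "Globex", "expiry_date": "2021-06-30"},
    {"doc_type": "invoice", "vendor": "Acme Corp", "amount": 12500},
]

# How much of each document type do we hold?
print(Counter(doc["doc_type"] for doc in inventory))

# Which contracts expire before 2021? The kind of business question
# that is effectively unanswerable in an unorganized file share.
expiring = [d["counterparty"] for d in inventory
            if d["doc_type"] == "contract" and d["expiry_date"] < "2021-01-01"]
print(expiring)  # ['Acme Corp']
```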
About the Author: Alan Weintraub is a senior information management leader and evangelist at DocAuthority. As an AIIM Fellow, he is focused on helping organizations maximize the value of their information. A former industry analyst at Forrester and Gartner, Alan is a recognized expert on multiple aspects of enterprise information management (EIM) including information governance (both data and content governance), enterprise content management, data management, digital rights management, and digital asset management. Get in touch with Alan on LinkedIn and Twitter.