Blog > May 2020 > 2 Ways to Improve Data Management

2 Ways to Improve Data Management

“We cannot solve our problems with the same thinking that we used when we created them.” — Albert Einstein

Introducing the “Data Management Problem”

About a month ago, I happened upon a news item about a major international bank that had failed to explain a discrepancy in its reported figures. Even after six months of struggling with their “Data Management Problem”, they were unable to trace back through what amounted to a veritable maze of transactions and data elements, stored in various systems spread throughout the world.

This occurrence of accidental misreporting might not seem very striking to us; after all, we are used to hearing about this kind of issue by now. What does seem striking, however, is that even a large global institution with massive resources and well-developed technology wasn’t able to satisfactorily solve a problem they always knew existed — the “Data Management Problem”.

How Do We Solve the “Data Management Problem”?

In trying to solve this massive problem, we must try first to understand it. To simplify the issue and reduce it to its bare bones, I set out to answer just one simple question:

“What are the key technology areas we can focus on to reduce significant cost and time in managing data effectively and in a timely fashion?”

To answer the above question, we must first identify what leads to the complications that increase the cost and duration of data management. Using the banking and financial services industry as an example, I have identified three major problem areas in managing data:

  1. Data Diversity
Structured and unstructured data are both critical to the industry. Technology has evolved to treat these data types differently, but there is a need for interplay between them, ranging from discovering structured data within unstructured sources to managing metadata for both together.
  2. Data Synchronization
A tremendous amount of data gets synchronized between systems, at regular intervals or at the end of day, and usually in many different shapes and forms. The data is often error-prone due to quality issues arising at the source or errors in manual entry. These errors cause huge problems in managing data across systems.
  3. Data Understanding at Source
Any reporting or decision-making system requires a good understanding of the information in its diverse source systems. Gaining that understanding is often a manual process, and collecting metadata across diverse sources can be extremely time-consuming, error-prone and costly.

The 2 Key Capabilities to Solve the “Data Management Problem”: Discovery and Automation

When we carefully observe these issues, two clear trends emerge from the nature of the problems. These trends point us to the two key capabilities we need to solve the problem effectively: Discovery and Automation.

Discovery

Like all consulting projects, the starting point in solving any data challenge is to know the “as is.” Specifically, one must be able to understand the current state of one’s data pipeline; this can only be done by discovering the metadata automatically across all source systems. This process must include metadata discovery from both structured and unstructured data, and the extraction, classification and consumption of structured content from unstructured data should also be part of the discovery capability. Discovery tools could even connect to the cloud systems the enterprise deploys, giving a data supply chain view of the entire enterprise. This discovery foundation, if built well, will solve many data issues and drive significant cost and time benefits.
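To make the discovery idea concrete, here is a minimal sketch that extracts table and column metadata automatically, the kind of “as is” inventory described above. It uses Python’s built-in sqlite3 purely as a stand-in for a real source system; a production scanner would connect to many heterogeneous sources:

```python
import sqlite3

def discover_metadata(conn: sqlite3.Connection) -> dict:
    """Return {table_name: [(column_name, declared_type), ...]} for every table."""
    catalog = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
        columns = conn.execute(f"PRAGMA table_info({table})").fetchall()
        catalog[table] = [(col[1], col[2]) for col in columns]
    return catalog
```

Running the same routine against every source system, and merging the results, yields the enterprise-wide metadata catalog that the paragraph above calls the discovery foundation.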

Automation

Automation serves many purposes, ranging from accuracy to speed. It is key to eliminating the human errors that occur during manual interventions in the data supply chain. It can trigger steps and post results back into the system, reducing the human effort and time required to complete the end-to-end data management process. Automation lets us ingest massive amounts of data and even helps us parse and process it, giving us scalability. We can connect to legacy backend systems with API-less integration, saving significant time and money. Eventually we can build an integrated enterprise process that orchestrates consuming, cleaning and moving data across systems and triggering applications.

Discovery + Automation = The Data Management Solution Dream Team

Organizations may already possess these capabilities individually, but the maximum benefit lies in using them simultaneously and cohesively to solve data management problems.

Let us now look at a real-life situation and see how both these capabilities together can drive value. The core process of using scanners to collect metadata automatically across diverse sources of information drives huge time and cost savings. If we extend automation around the core process, it can help us run scripts that understand when there is a change in metadata at source, run the scanners to capture the metadata and even ingest the changed output back into the data intelligence system. Together, discovery and automation capabilities create a data management framework that significantly brings down cost and time.
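The change-detection loop described above can be sketched as follows. The fingerprinting approach and the rescan hook are illustrative assumptions about how such a script might be built, not a description of any particular product:

```python
import hashlib
import json

def metadata_fingerprint(catalog: dict) -> str:
    """Stable hash of a metadata catalog; a changed hash means the source changed."""
    canonical = json.dumps(catalog, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def rescan_if_changed(previous_hash: str, current_catalog: dict, rescan) -> str:
    """Compare fingerprints; invoke the scanner callback only when metadata changed."""
    current_hash = metadata_fingerprint(current_catalog)
    if current_hash != previous_hash:
        # e.g. re-run the scanners and ingest the output into the
        # data intelligence system, as described in the text above
        rescan(current_catalog)
    return current_hash
```

Scheduling this comparison (for example, nightly) means scanners run only when the source metadata has actually changed, which is where the time and cost savings come from.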

The benefits include:
  1. Savings from automatically understanding vast amounts of data
  2. Productivity and accuracy gains from reducing human intervention in data manipulation
  3. The ability to make sense of unstructured data and drive process automation based on trusted data

The core outcome of implementing these capabilities in unison can be easily explained with a simple phrase: “Productivity Driven by Data Trust.”

Over the last 25 years, ASG Technologies has helped companies increase trust in data and derive value from this invaluable asset. Today, ASG-Zenith, a leading digital automation platform, integrates the process and automation capabilities in a single model, enabling enterprises to drive significant productivity increases. ASG’s data, content and digital automation platform capabilities are endorsed by various analysts and independent organizations globally.


Data management is the set of processes and techniques used to acquire, validate, store, process and protect data so that it is accessible, reliable and timely for all its users across an organization. The data management process combines different functions that collectively aim to ensure that the data in corporate systems is accurate, available and accessible.
Data discovery is the process of collecting and analyzing data from various sources to gain insight from hidden patterns and trends. It is the first step in fully harnessing an organization's data to inform critical business decisions. Through the data discovery process, data is gathered, combined, and analyzed in a sequence of steps. The goal is to make messy and scattered data clean, understandable, and user-friendly.
Data automation is the process of updating the data on your open data portal automatically rather than through manual entry. Automating data uploads is important for the long-term sustainability of an open data program: any data that is updated manually risks being delayed, because it is one more task someone must fit in alongside the rest of their workload.