In the first part of this series, we show which content-related and technological challenges can arise from mixed IT system landscapes and decentralized IT infrastructures, and how data virtualization can help.
"Every company has to face the challenge of combining data from different systems over the course of its life cycle."
The need to combine and use data from different systems is a challenge that every large company must face over the course of its life cycle.
This challenge can be triggered by various factors: the use of different systems in business areas such as production, purchasing, or HR; organizational circumstances such as foreign subsidiaries or acquisitions; or simply the need to integrate external data sources.
Content Challenges in Combined Reporting
Keeping data consistent in content can be difficult for several reasons. A common one is inhomogeneous number ranges for customers, products, or materials across a company's different systems.
Missing exchange rates and differing formats for dates and numerical values further complicate consistent evaluation. And since data often only fulfills its informative purpose in combination with the appropriate business logic, that logic must also be kept consistent everywhere the data is used decentrally.
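To make the format problem concrete, here is a minimal sketch of the kind of normalization such harmonization requires. The formats and heuristic are illustrative assumptions, not part of any specific product: German-style decimals ("1.234,56") versus US-style ("1,234.56"), and dates written in several regional conventions.

```python
from datetime import datetime

def parse_date(value: str) -> datetime:
    """Try several common date formats until one matches."""
    for fmt in ("%Y-%m-%d", "%d.%m.%Y", "%m/%d/%Y"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value!r}")

def parse_decimal(value: str) -> float:
    """Heuristic: treat the rightmost separator as the decimal mark."""
    if value.rfind(",") > value.rfind("."):
        # German style: dots group thousands, comma marks decimals
        value = value.replace(".", "").replace(",", ".")
    else:
        # US style: commas group thousands
        value = value.replace(",", "")
    return float(value)
```

With such helpers, `parse_decimal("1.234,56")` and `parse_decimal("1,234.56")` both yield the same number, so figures from different systems become comparable.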
Technological Challenges
Many providers of databases, warehouse systems, and other enterprise systems use their own proprietary technologies, interfaces, and tools. These are usually difficult to combine with one another and pose major challenges for end users carrying out comprehensive analyses. Even open interface architectures such as application programming interfaces (APIs) or REST services can very often not be used by end users without programming skills.
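The following sketch illustrates why: even a simple REST query involves plumbing that a business user cannot reasonably be expected to write. The endpoint, parameter names, and token scheme here are hypothetical, standing in for whatever a real system exposes.

```python
from urllib.parse import urlencode

def build_request_url(base: str, endpoint: str, token: str,
                      page: int, page_size: int = 100):
    """Assemble a paged REST request URL plus auth headers --
    the kind of hand-coded plumbing an end user would need."""
    query = urlencode({"page": page, "per_page": page_size})
    url = f"{base}/{endpoint}?{query}"
    headers = {"Authorization": f"Bearer {token}",
               "Accept": "application/json"}
    return url, headers
```

On top of this come error handling, pagination loops, and response parsing, all before a single figure reaches a report.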
"A dynamic integration of data into analyses is often not possible."
Another problem is network boundaries, which exist when different company areas operate their own decentralized IT infrastructures or have to obtain data from third-party companies (suppliers, customers, data suppliers, etc.).
Even when obtaining data from a single system, technical limitations, complexity of use, or lack of access to the necessary applications can prevent automated, dynamic integration of the data into analyses.
Conventional Approaches and Their Problems
Typically, companies faced with these challenges decide to implement a data warehouse or a data lake. In these systems, data from the various sources is copied into one or more new persistence layers, usually via complex, multi-stage transformation, harmonization, and cleansing processes. Redundant copies of the data are created.
The implementation projects for these solutions are often very time-consuming and costly. Subsequent changes to the often highly complex loading processes usually involve significant effort and are avoided where possible. Yet the business world keeps changing even after a warehouse system goes live: additional systems are introduced, existing ones are replaced, companies merge, market situations shift, and new technical and legal requirements arise.
The Long-Term Consequences
Given the effort involved in changing existing warehouse systems, customization requests are typically prioritized strictly. Requesters often have to bring their own budget and a great deal of patience before changes are implemented and made usable through the regular release process.
"The workarounds often consist of repetitive, time-consuming and error-prone manual processes."
If change requests are rejected, affected employees or departments are left on their own and have to help themselves with cumbersome workarounds. These often consist of repetitive, time-consuming and error-prone manual processes. In many cases, this results in additional databases or database-like solutions. This can quickly lead to uncontrolled growth and shadow system landscapes that do not meet all technical or regulatory requirements.
In addition, inexpensive and less powerful systems are usually used, or processing is carried out directly on employees' own PCs, which undermines the original investment in expensive, powerful IT systems.
Overcoming Content and Technological Challenges Through Data Virtualization
Data virtualization is based on the idea that data is not duplicated and stored redundantly (persisted) for every use case; instead, only a virtual reference is created and stored that points to the original data in the source system. Alongside many other advantages, this technology offers companies numerous opportunities to achieve content consistency and overcome technological hurdles.
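The core mechanism can be sketched in a few lines: a view object stores only how to reach the data, not the data itself, so every access reflects the current state of the source. The in-memory "source system" below is a deliberately simplified stand-in for a real database.

```python
class VirtualView:
    """A virtual reference: it stores only *how* to reach the data,
    not the data itself; every access queries the source live."""

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn  # callable that queries the source system

    def rows(self):
        return self._fetch()    # no local copy is ever persisted


# Hypothetical in-memory "source system" standing in for a real database:
source_table = [{"id": 1, "amount": 100}]
view = VirtualView(lambda: list(source_table))

before = view.rows()                          # snapshot: 1 row
source_table.append({"id": 2, "amount": 50})  # the source changes...
after = view.rows()                           # ...and the view sees it at once
```

Because nothing is persisted in the view, there is no copy that could go stale and no load job that could fail overnight.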
"Access to the data can be configured by the end user and easily integrated into their analyses and reports."
Because access to the data does not have to be technically implemented (programmed) but can be configured by the end user and easily integrated into their analyses and reports, the hurdle to automating processes is significantly lower than with classic data warehouse systems.
Necessary mappings to harmonize possible different number ranges in combined reporting, additional currency conversions or the harmonization of date and numerical values can be created and linked by the end users themselves.
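Such a mapping can be as simple as a lookup table from each system's local key to a group-wide key, combined with a conversion to the group currency. All identifiers, system names, and exchange rates below are invented for illustration.

```python
# Hypothetical mapping of local customer numbers to a group-wide ID
CUSTOMER_MAP = {
    ("ERP_DE", "K-1001"):  "C-001",
    ("ERP_US", "CUST-77"): "C-001",   # same customer, different system
}

# Hypothetical exchange rates into the group currency (EUR)
FX_TO_EUR = {"EUR": 1.0, "USD": 0.92}

def harmonize(record: dict) -> dict:
    """Map the local key to the group-wide key and convert to EUR."""
    group_id = CUSTOMER_MAP[(record["system"], record["customer"])]
    amount_eur = record["amount"] * FX_TO_EUR[record["currency"]]
    return {"customer": group_id, "amount_eur": round(amount_eur, 2)}
```

Records from both ERP systems now land on the same group-wide customer ID in the same currency, which is exactly what combined reporting needs.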
The Benefits of Data Virtualization
Since the data is always loaded directly from the source system, users need not monitor loading routines or worry about whether the data is up to date. All defined transformation steps are applied in real time to the data from the source system or systems.
Thanks to Single Sign-On technologies, the authorization concepts remain in the source system and can easily be reused by the user.
This approach not only allows access to systems from different providers and with different technologies, but also enables secure access to data across network boundaries.