Unified Data: Your Business Needs One Beautifully Connected House, Not Four Random Buildings

Why having the right data architecture is essential for financial businesses.

Key Takeaways

  • Unifying ever-increasing amounts of data is a major challenge for businesses today.

  • To gain insights from their data, FSI businesses should consider adapting their infrastructure strategy and switching to a data-centric approach.

  • Intel technology is helping FSI businesses to build data architecture that enables them to deal with the growing data deluge.

Having multiple infrastructures, data architectures and databases creates chaos for any organization, not just FSI businesses.

The amount of data that businesses deal with on a daily basis continues to grow. As organizations around the globe add new platforms on top of legacy infrastructure, they end up with data stored across systems that were never designed to work together seamlessly. Continually bolting new systems and platforms onto existing architecture is like adding rooms and new structures to a house: the result is a series of random buildings, rather than one seamlessly connected home.

Unification of data is a major challenge today, especially for Financial Services Industry (FSI) businesses, which have extra complications in terms of security and compliance. "The problem with the existing environment is that the data is stored in many different places," said Parviz Peiravi, Global CTO/Principal Engineer, Financial Services Industry at Intel.

Some of the largest institutions have not just hundreds but thousands of data repositories. Many arrived through a series of acquisitions, bringing disparate local data systems that remained largely siloed and disconnected. At the same time, organic growth keeps adding new capabilities based on new technologies, each requiring different types of data collection, processing and storage. To cope with the limitations of traditional data warehousing technologies, data lake-based solutions were added to the mix, providing the flexibility to store structured, unstructured and streaming data at high volume.

"Big data technologies based on Hadoop offered greater performance and scalability by bringing data and compute together and paving the way for some level of data consolidation. But we are at a point where even those technologies are reaching their potential limits. As a result, we are seeing an increase in the number of data lakes per organization which by itself creates new challenges for accessing data, especially in a large multinational institution. It is clear that technology alone does not solve the ongoing data issues."

As new technologies such as advanced analytics and artificial intelligence (AI) are adopted at an ever-accelerating pace, it becomes increasingly difficult for businesses to maintain a clear overall view of their data. This is the challenge that financial institutions, small and large, face today, and it needs to be addressed before businesses can use their data effectively, including data from external sources.

How to Design Modern Architecture for Unified Data

This is not a simple problem to solve, and there are multiple approaches that businesses can take. There is no silver bullet and no one-size-fits-all solution. There are several new approaches to designing modern data architecture, each with its own strengths and weaknesses:

  • Adding new capabilities to existing legacy data infrastructure – this maximizes current investment, but it relies on traditional data processing, storage and governance built on tightly coupled, monolithic architecture with limited flexibility.
  • Logical data warehouse and data virtualization – although this provides new functionality, it prolongs the use of traditional data architecture that will eventually need to be replaced.
  • Data consolidation – creating greenfield data lakes based on centralized data architecture enables organizations to handle batch, real-time and streaming requirements. But as the volume of data and the size of the lake increase, so does the complexity of retrieving, processing and extracting insights.
  • Domain-driven data architecture (DDDA) and a data mesh strategy – this is a recent but promising concept for addressing the challenges of a dynamic data environment, based on centrally governed yet distributed data architecture. Just as application development is moving from monoliths to microservices and cloud-native designs, data architecture is moving from monolithic to distributed; a minimal sketch of the idea follows this list.
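
To make the data mesh idea concrete, here is a minimal Python sketch, not drawn from Intel or any specific framework, of how domain-owned data products might expose a uniform, discoverable contract while the data itself stays distributed. All names, fields and SLA figures are illustrative:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class DataProductDescriptor:
    """Metadata that every domain-owned data product publishes to the mesh."""
    name: str                   # e.g. "payments.transactions.daily"
    owning_domain: str          # the business domain accountable for quality
    schema_version: str         # schemas are versioned like application code
    sla_freshness_minutes: int  # contract that downstream consumers rely on

class DataProduct(Protocol):
    """Uniform interface: each domain exposes data the same way,
    regardless of where or how it is physically stored."""
    def descriptor(self) -> DataProductDescriptor: ...
    def read(self, as_of: str) -> object: ...

# A lightweight central registry gives discovery without central storage:
# the data stays distributed by domain, while governance stays unified.
REGISTRY: dict[str, DataProductDescriptor] = {}

def publish(desc: DataProductDescriptor) -> None:
    REGISTRY[desc.name] = desc

publish(DataProductDescriptor("payments.transactions.daily",
                              "payments", "1.2.0", sla_freshness_minutes=60))
```

The design point is that discovery and governance are centralized through the registry, while ownership and storage remain with each domain.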

To start with, companies need to treat data as a product, not a byproduct. Data strategy, architecture, platforms and products should address company-wide needs. They should also be built on a foundation of unified ownership and cross-functional collaboration between business and IT, with continuous evaluation and prioritization of data products against both long- and short-term requirements, while starting from a simple implementation.

Building a unifying data platform and its associated products should follow the same philosophy as delivering a software product, with clear lifecycle management and a regular delivery cadence. The organizations most successful at developing modern data architectures apply DevOps concepts to manage data product development and delivery, an approach known as DataOps. The adoption of a new generation of application development practices, such as immutable infrastructure, microservices and cloud-native applications, is accelerating the take-up of this data platform and products paradigm, managed through a declarative process.
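
As an illustration of the DataOps idea, the hypothetical Python snippet below treats a data quality check as a CI gate, so that a schema or quality regression blocks a data product release the way a failing unit test blocks an application build. The column names and sample batch are assumptions for the sketch:

```python
import pandas as pd

# Hypothetical data contract for a data product, checked on every release.
EXPECTED_COLUMNS = {"account_id", "txn_amount", "txn_ts"}

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of contract violations; an empty list means the gate passes."""
    errors = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        errors.append(f"missing columns: {sorted(missing)}")
    if "txn_amount" in df.columns and df["txn_amount"].isna().any():
        errors.append("null transaction amounts")
    return errors

# In CI this batch would come from the pipeline's staging output.
batch = pd.DataFrame({"account_id": [1, 2],
                      "txn_amount": [10.0, 25.5],
                      "txn_ts": ["2024-01-01", "2024-01-02"]})
assert not validate(batch), "data product failed the CI gate"
```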

In addition, cloud-native and microservices-based applications can span on-premises and multiple cloud environments. Data strategy and implementation should therefore cover all aspects of hybrid multi-cloud infrastructure, including business, regulatory and technical requirements. For example, data governance, security and data locality all play an important role in operationalizing next-generation applications.
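
As a hedged sketch of what such a locality control might look like, the snippet below checks a dataset's residency and encryption rules before a pipeline replicates it across a hybrid multi-cloud estate. The classifications, regions and rules are invented for illustration, not taken from any real regulatory or vendor API:

```python
# Illustrative residency policy: which regions may hold which data classes.
RESIDENCY_RULES = {
    "eu-customer-pii": {"allowed_regions": {"eu-west-1", "on-prem-frankfurt"},
                        "encryption_required": True},
    "public-market-data": {"allowed_regions": None,  # no restriction
                           "encryption_required": False},
}

def may_replicate(dataset_class: str, target_region: str, encrypted: bool) -> bool:
    """Gate data movement on locality and encryption policy."""
    rule = RESIDENCY_RULES[dataset_class]
    if rule["encryption_required"] and not encrypted:
        return False
    allowed = rule["allowed_regions"]
    return allowed is None or target_region in allowed

assert may_replicate("eu-customer-pii", "on-prem-frankfurt", encrypted=True)
assert not may_replicate("eu-customer-pii", "us-east-1", encrypted=True)
```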

We are seeing a definite paradigm shift from application-centric architecture to data-centric architecture. "With data-centric architecture, everything is designed around where the data resides and how it can be delivered to the application in different environments, on prem, hybrid or in the cloud," said Peiravi. "And that's really where the impact is. Not only having access to data to be able to extract insight, but also delivering that data to any applications that are providing the business functions within an enterprise. And that is the data platform in a nutshell."

Shifting to a data-centric approach is completely changing how FSI businesses look at data architecture. This creates a dynamic environment that's constantly changing, albeit in a synchronized and managed way, rather than the chaos that many organizations are experiencing today. Creating a CI/CD (Continuous Integration/Continuous Delivery) data pipeline ensures reliable delivery processes for FSI businesses so that they can make frequent updates to existing applications while delivering new services.

The concept of managing data as code is emerging as a good way to manage data in an ever-changing environment. "When you manage data as code, you use the same CI/CD principles, and therefore you build a continuous flow that's managed in a coherent way," said Peiravi. "When you develop a data pipeline, you can effectively roll that pipeline back through each stage of delivery, enabling you to restart it where you need to. This cannot be achieved unless you're managing data as a product, using a declarative process."
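
One way to picture data-as-code with rollback, as a rough sketch rather than a reference implementation: each pipeline stage is a versioned, pure transformation, and a declarative release manifest records which version runs at each stage, so rolling back means pointing the manifest at an earlier version and re-running. The stage names and version tags here are hypothetical:

```python
from typing import Callable

# A stage is a pure transformation over records; versions live side by side.
Stage = Callable[[list[dict]], list[dict]]

PIPELINE: dict[str, dict[str, Stage]] = {
    "clean":  {"v1": lambda rows: [r for r in rows if r.get("amount") is not None]},
    "enrich": {"v1": lambda rows: [{**r, "currency": r.get("currency", "USD")}
                                   for r in rows]},
}

# Declarative release manifest: rollback = pinning a stage to an earlier
# version and re-running, rather than hand-editing the data itself.
RELEASE = {"clean": "v1", "enrich": "v1"}

def run(rows: list[dict], release: dict[str, str]) -> list[dict]:
    for stage_name, version in release.items():
        rows = PIPELINE[stage_name][version](rows)
    return rows

print(run([{"amount": 10}, {"amount": None}], RELEASE))
```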

In addition to organizational and cultural planning for the digitization of the enterprise, and to new data and application architectures and implementation strategies, financial businesses should also consider adopting new technologies that future-proof the plan, such as Intel® Optane™ technology.

Partnering with Intel

Intel is helping FSI businesses deal with the data deluge by providing fundamental technology that supports different data architecture patterns. In particular, Intel® Optane™ technology can support the new generation of applications, such as in-memory data grids, databases, data processing and analytics, and cloud-native and microservices-based workloads, that require fast response times, data persistence and quick recovery.

"Intel Optane provides large in system memory storage that can be combined with traditional DRAM" said Peiravi. "This means that you have the capacity to run large data analytics applications within the system memory.” Intel is also working with ecosystem partners to deliver different solutions that can help FSI organizations, along with businesses from other sectors, to build a foundation for digital enterprise. With this, they are using a new generation of data and application architecture to deliver the best customer experience and innovative products.

For example, Intel is working with IBM* and the in-memory computing platform Hazelcast* to create a fast data and compute plane for financial institutions such as JPMorgan Chase* and Lloyds*, helping to ensure zero downtime from edge to hybrid cloud.

Unification of data is not a problem that can be solved quickly, but neither is it one that FSI organizations can ignore if they want to unlock the full potential of the data available to them. Just as you would build one beautifully connected house rather than a series of random buildings and rooms, IT managers and solutions architects must work toward a seamless data architecture rather than a series of data silos. Only then will FSI businesses be able to tame the data deluge. Stay tuned for the next installment in our financial services thought leadership series.