In short, DI is in the information infrastructure trenches, 24/7. Hence it is critical that DI platforms and processes be able to adapt to change intelligently, quickly and without requiring manual coding.
Adaptive DI covers a lot of territory. It starts with detecting and adapting automatically to minute-to-minute changes in data — transactional, operational and metadata. This means detecting and adapting to changing data volumes and to changing patterns in the data in order to optimize its processing. Is the data real-time, batch or changed-data? An adaptive DI platform should be able to adjust intelligently to capture and integrate it all.
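The capture-mode decision described above can be sketched in a few lines. This is a minimal illustration, not a real DI product's logic: the `FeedProfile` metrics and the thresholds are hypothetical stand-ins for whatever signals an adaptive platform actually samples.

```python
from dataclasses import dataclass

@dataclass
class FeedProfile:
    # Hypothetical metrics an adaptive DI platform might sample per feed.
    rows_per_minute: float
    change_ratio: float      # fraction of rows changed since last capture
    latency_target_s: float  # freshness requirement of downstream consumers

def choose_capture_mode(profile: FeedProfile) -> str:
    """Pick a capture strategy from observed data patterns."""
    if profile.latency_target_s < 60:
        return "real-time"       # stream events as they arrive
    if profile.change_ratio < 0.10:
        return "changed-data"    # capture only the deltas (CDC-style)
    return "batch"               # periodic bulk loads

print(choose_capture_mode(FeedProfile(5000, 0.02, 3600)))  # changed-data
```

The point is that the mode is derived from the data's observed behavior rather than hard-coded per feed, so a feed that starts churning heavily would migrate from changed-data capture to batch without manual rework.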
Adaptive DI similarly entails adapting automatically to ongoing operating environment changes. This means automatically detecting new servers in the environment, determining which ones are available to share workloads in a server-grid arrangement, and seamlessly picking up processing on different servers in the event of server failure.
Depending on loads, an adaptive DI platform should be able to decide whether to execute on a mainframe, a UNIX-based server, a Linux box, a Windows machine, or any combination of them. In terms of data sourcing and transformation, adaptive integration also means detecting different versions of application software and adjusting accordingly.
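The grid placement and failover behavior just described amounts to routing work to whichever live node is least loaded. The sketch below assumes a toy server registry; real platforms discover nodes dynamically, but the selection logic is the same in spirit.

```python
# Hypothetical server registry; an adaptive platform would discover
# these nodes at runtime rather than list them statically.
servers = [
    {"name": "mainframe-1", "os": "z/OS",    "load": 0.80, "up": True},
    {"name": "unix-7",      "os": "UNIX",    "load": 0.35, "up": True},
    {"name": "linux-3",     "os": "Linux",   "load": 0.20, "up": False},
    {"name": "win-2",       "os": "Windows", "load": 0.50, "up": True},
]

def place_job(servers):
    """Send work to the least-loaded live server. Failover falls out
    naturally: a failed server simply drops out of consideration."""
    live = [s for s in servers if s["up"]]
    if not live:
        raise RuntimeError("no servers available in the grid")
    return min(live, key=lambda s: s["load"])["name"]

print(place_job(servers))  # unix-7
```

Note that linux-3 has the lowest load but is down, so the job lands on unix-7; when linux-3 comes back, it wins the next placement with no configuration change.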
Integrating Standards and Error Reduction
Not to be downplayed in a standards-dominated IT world, adaptive integration also involves adjusting automatically to emerging standards while minimizing the operational impact. This includes avoiding the downstream ripple effects that can easily accompany the adoption of a new standard. The idea is to implement the new standard in just one place and let the software make all the necessary downstream adjustments automatically.
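The "implement it in one place" idea is essentially a central mapping that every downstream job consults. A minimal sketch, with an entirely hypothetical field mapping and standard name:

```python
# Hypothetical central registry mapping a standard's field names to
# internal ones. Adopting a new standard (or version) means editing
# only this table; every job that calls normalize() picks it up.
FIELD_MAP = {
    "example-std-2.0": {"PolicyNo": "policy_id", "InsuredNm": "insured_name"},
}

def normalize(record: dict, standard: str) -> dict:
    """Rename a record's fields per the registered standard mapping."""
    mapping = FIELD_MAP[standard]
    return {mapping.get(k, k): v for k, v in record.items()}

raw = {"PolicyNo": "P-123", "InsuredNm": "Ada"}
print(normalize(raw, "example-std-2.0"))
# {'policy_id': 'P-123', 'insured_name': 'Ada'}
```

Because the mapping lives in one registry rather than being copied into each transformation, a change to the standard ripples downstream automatically instead of requiring edits to every consuming job.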