Key Accounts - Custom Application Development - Aeronautics Sector
DataImpactEngine
The DataImpactEngine is an event-driven solution designed to define, run, and supervise all streams between your applications. It offers low latency, high resilience, and ease of use through an intuitive interface.
1. Introduction
The DataImpactEngine is a flexible event orchestrator that empowers organizations to design, implement, and monitor data streams between multiple applications. The solution is designed to integrate seamlessly with modern architectures and supports various languages and frameworks including Java (Spring), PHP (Symfony), and Python.
The system leverages annotation-based declarations, which enable non-intrusive integration into existing codebases, and is optimized for both real-time and batch processing scenarios.
2. Architecture Overview
The DataImpactEngine architecture is modular and scalable, consisting of several key components:
- Event Orchestrator: The core engine responsible for receiving, routing, and processing events between endpoints.
- Graphical User Interface (GUI): A web-based interface that allows users to design, visualize, and manage data streams.
- API Integration Libraries: Language-specific libraries (Java, PHP, Python) to facilitate the integration of existing applications.
- External Dependencies: Components such as Kafka for message queuing, Elasticsearch, Oracle, or MySQL for data storage, and HashiCorp Consul for service discovery.
The system can be deployed natively on macOS, Windows, and Linux, or within containerized environments to support microservices architectures.
3. Key Features
3.1 Design Flows from Trigger to Endpoints
Utilize the intuitive graphical interface to easily design, update, and manage event flows. With hot-update configuration capabilities, changes can be applied without restarting the engine.
3.2 Simplicity & Reliability
The DataImpactEngine is implemented as a single Java application, ensuring a simple deployment process. It supports multiple database backends and works with Kafka for reliable event processing. Optional integration with HashiCorp Consul adds robust service discovery features.
3.3 Easy Integration
With dedicated libraries for Java, PHP, and Python (Beta), integrating the DataImpactEngine with your existing APIs is straightforward. The system is compatible with Spring Cloud Config, ensuring a seamless configuration experience.
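A minimal sketch of what emitting an event to the engine over HTTP might look like from an integrated application. The endpoint path `/api/events` and the request body shape are assumptions for illustration, not the documented interface.

```python
# Hedged sketch: building an event-submission request for the engine.
# The "/api/events" route and payload envelope are illustrative assumptions.
import json
import urllib.request

def build_event_request(engine_url: str, event_name: str, payload: dict) -> urllib.request.Request:
    """Prepare an HTTP POST carrying an event; the caller would urlopen() it
    against a running engine instance."""
    body = json.dumps({"event": event_name, "payload": payload}).encode("utf-8")
    return urllib.request.Request(
        f"{engine_url}/api/events",          # assumed endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In practice the language-specific client libraries would wrap this plumbing, so application code only deals with event names and payloads.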
3.4 Scalability & Performance
Whether processing real-time updates or handling large batches, the engine scales efficiently. Its low latency and minimal resource consumption allow it to handle high-throughput scenarios.
4. Use Cases
4.1 Real-Time Updates
Design and monitor real-time updates between your applications to detect bottlenecks and preemptively address potential issues. The interactive graphs can be shared with business stakeholders to facilitate clear communication.
4.2 Batch Processing
The engine supports processing large batches of events without impacting other operations. Its internal queuing mechanism allows for scalable concurrency, ensuring high efficiency even under heavy loads.
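The queuing-plus-workers pattern described above can be sketched generically in a few lines. This is a stand-in for the engine's internal mechanism, not its actual code: a shared queue feeds a fixed pool of worker threads, so batch size and concurrency are decoupled.

```python
# Generic sketch of queue-based concurrent batch processing, illustrating
# the kind of internal queuing described above; not the engine's actual code.
import queue
import threading

def process_batch(events, handler, workers=4):
    """Process a batch of events with a fixed pool of worker threads."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                event = q.get_nowait()
            except queue.Empty:
                return  # queue drained; worker exits
            result = handler(event)
            with lock:               # results list is shared across threads
                results.append(result)

    for event in events:
        q.put(event)
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because the worker count is fixed, a large batch raises queue depth rather than thread count, which keeps resource consumption bounded under heavy loads.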
4.3 Hybrid Scenarios
Combine real-time and batch processing to create hybrid workflows that meet complex business requirements. The flexibility of the DataImpactEngine makes it adaptable to a wide range of scenarios.
5. How It Works
The DataImpactEngine functions as an event orchestrator, managing event flows between applications. The process begins when an external application triggers an event. The trigger can be an API call, a read by a Kafka consumer, or a read by a Kafka-bridge consumer, among other methods.
Once the event is triggered, it is received by the DataImpactEngine, which stores the event for tracing purposes and places it onto a Kafka queue. A DataImpactEngine consumer then processes the event, running the associated graph action flow defined by the trigger.
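The trigger → store → queue → consume sequence above can be condensed into a small in-memory model. The class and method names below are illustrative, and a plain deque stands in for the Kafka topic; none of this is the engine's real API.

```python
# In-memory sketch of the trigger -> store -> queue -> consume flow.
# MiniOrchestrator and its methods are illustrative, not the engine's API.
from collections import deque

class MiniOrchestrator:
    def __init__(self):
        self.trace_store = []   # stands in for the event store used for tracing
        self.queue = deque()    # stands in for the Kafka queue
        self.flows = {}         # trigger name -> ordered list of graph actions

    def register_flow(self, trigger, actions):
        self.flows[trigger] = actions

    def receive(self, trigger, payload):
        self.trace_store.append((trigger, payload))  # store for tracing
        self.queue.append((trigger, payload))        # place onto the queue

    def consume_one(self):
        """A consumer takes one event and runs its associated action flow."""
        trigger, payload = self.queue.popleft()
        for action in self.flows[trigger]:
            payload = action(payload)  # each action feeds the next step
        return payload
```

Decoupling reception (receive) from processing (consume_one) via the queue is what lets the real engine absorb bursts without blocking the triggering application.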
The original payload is stored indefinitely, ensuring it is always available for reference. Meanwhile, intermediate payloads are stored for a configurable duration (hours by default), allowing you to replay them on demand—ideal for debugging purposes.
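The retention policy above (originals kept indefinitely, intermediates expiring after a configurable duration) can be modeled with a simple TTL store. This is an illustration of the policy, with assumed names, not the engine's storage layer.

```python
# Sketch of the retention policy described above: original payloads kept
# indefinitely, intermediate payloads expiring after a configurable TTL.
import time

class PayloadStore:
    def __init__(self, intermediate_ttl_seconds=3600):
        self.ttl = intermediate_ttl_seconds
        self.originals = {}       # event_id -> payload, kept indefinitely
        self.intermediates = {}   # (event_id, step) -> (payload, stored_at)

    def store_original(self, event_id, payload):
        self.originals[event_id] = payload

    def store_intermediate(self, event_id, step, payload):
        self.intermediates[(event_id, step)] = (payload, time.time())

    def replay(self, event_id, step):
        """Return an intermediate payload if still within its TTL, else None."""
        entry = self.intermediates.get((event_id, step))
        if entry is None:
            return None
        payload, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self.intermediates[(event_id, step)]  # expired; drop it
            return None
        return payload
```

Replaying an intermediate payload during its retention window is what makes step-by-step debugging of a flow possible without re-triggering the original event.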
The graph is composed of a series of action nodes. These nodes represent HTTP requests to your applications, which generate payloads for subsequent steps in the flow. Each execution is safely encapsulated in a dedicated thread to protect the main flow, ensuring that one process does not disrupt others.
To handle potential errors, the system includes a built-in try/catch mechanism for each action node. This allows you to catch and address errors as they occur, with the option to perform corrective actions. If any node encounters an error, a fallback URL can be triggered, ensuring that the flow can continue or that the issue can be mitigated appropriately.
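The per-node isolation and try/catch behavior described above can be sketched as follows. Here the fallback is a plain callable standing in for the fallback URL, and `run_node` is an illustrative name, not the engine's API.

```python
# Sketch of per-node execution in a dedicated thread with try/except fallback,
# mirroring the error handling described above; names are illustrative.
import threading

def run_node(action, payload, fallback):
    """Run one action node in its own thread; on error, invoke the fallback
    (in the real engine, this would be a call to the node's fallback URL)."""
    result = {}

    def target():
        try:
            result["value"] = action(payload)
        except Exception as exc:
            # Error is caught inside the node's thread, so it cannot
            # propagate and disrupt the main flow.
            result["value"] = fallback(payload, exc)

    t = threading.Thread(target=target)
    t.start()
    t.join()
    return result["value"]
```

Because each node runs in its own thread and catches its own exceptions, a failing node either yields a corrective result from the fallback or is contained, and the surrounding flow keeps running.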
Additionally, all execution monitoring can be accessed through an exposed API, providing visibility into the health and status of the event flows managed by the DataImpactEngine.