Get UI Flow Team

Integration Patterns with Existing Enterprise Stacks

An in-depth look at integration architectures — hub-and-spoke, event-driven, and API gateway — for connecting AI workflow platforms to legacy and modern enterprise systems.

Tags: Integration, Enterprise Architecture, API

The Integration Challenge

Every enterprise runs on a patchwork of systems. A typical mid-market company operates 80 to 120 SaaS applications alongside a handful of on-premise or legacy systems that refuse to retire. When you introduce a workflow automation platform, it needs to talk to these systems — reliably, securely, and without turning your architecture into a tangled mess.

Integration is where many automation initiatives stall. The platform itself might be capable, but if it cannot connect to your ERP, your CRM, your HRIS, and your homegrown order management system, its value is limited. The architecture you choose for these integrations determines not just whether they work today, but whether they will scale and remain maintainable as your organization grows.

This article covers the three most common integration patterns, when to use each, and how to handle the real-world complications that textbooks tend to skip.

Pattern 1: Hub-and-Spoke

Hub-and-spoke is the most intuitive integration architecture. The workflow platform sits at the center (the hub), and each connected system is a spoke. All data flows through the hub — no system talks directly to another.

How It Works

When a workflow needs data from System A and needs to write a result to System B, both interactions pass through the workflow platform. System A sends data to the platform via its connector. The platform processes the data, applies business logic, and sends the result to System B via a separate connector.

The hub maintains a canonical data model. Even if System A calls a customer record a “client” and System B calls it an “account,” the hub maps both to a single, standardized representation. This is one of the most underappreciated benefits of the pattern — it forces you to define a shared vocabulary across your organization’s systems.
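The mapping described above can be sketched as follows. This is a minimal illustration, assuming hypothetical field layouts for the two spoke systems; a real hub would handle nested records, type coercion, and unmapped fields.

```python
# Minimal sketch of a hub's canonical data model. Each spoke system gets a
# mapping from its local field names to the hub's canonical schema.
FIELD_MAPS = {
    "system_a": {"client_id": "customer_id", "client_name": "customer_name"},
    "system_b": {"account_no": "customer_id", "account_holder": "customer_name"},
}

def to_canonical(system: str, record: dict) -> dict:
    """Translate a spoke system's record into the hub's canonical form."""
    mapping = FIELD_MAPS[system]
    return {canonical: record[local] for local, canonical in mapping.items()}

# Both spokes converge on the same canonical representation.
a = to_canonical("system_a", {"client_id": "42", "client_name": "Acme"})
b = to_canonical("system_b", {"account_no": "42", "account_holder": "Acme"})
```

Once every spoke translates to and from the canonical form, adding a new system means writing one mapping rather than one integration per existing system.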

When to Use It

Hub-and-spoke works well when:

  • You have a moderate number of systems (5 to 25) that need to exchange data
  • Workflows are primarily request-response or batch-oriented
  • You need centralized visibility into all integrations
  • Your team prefers a single place to monitor, debug, and manage data flows

Get UI Flow’s integration framework uses a hub-and-spoke model by default, with pre-built connectors for over 200 enterprise systems. For most organizations, this is the right starting point.

Trade-offs

The main limitation of hub-and-spoke is that the hub is a single point of failure. If the platform goes down, all integrations stop. This is manageable with proper high-availability architecture (redundant instances, health checks, automatic failover), but it is a risk you need to plan for.

Hub-and-spoke can also become a bottleneck at very high throughput. If you are processing millions of events per hour, the hub may struggle to keep up. For most enterprise workflow scenarios, this threshold is not a concern, but it matters for high-volume data pipelines.

Pattern 2: Event-Driven Architecture

Event-driven architecture (EDA) decouples systems by introducing a message broker or event bus between them. Instead of System A calling System B directly, System A publishes an event (“order placed”), and any interested system subscribes to that event and reacts independently.

How It Works

The core components of an event-driven integration are:

  • Event producers: Systems that emit events when something happens (a record is created, updated, deleted, or a threshold is crossed).
  • Event bus or message broker: The infrastructure that receives events and delivers them to subscribers. Common choices include Apache Kafka, AWS EventBridge, Azure Service Bus, and RabbitMQ.
  • Event consumers: Systems that subscribe to specific event types and take action when they receive one.

In this pattern, the workflow automation platform acts as both a consumer and a producer. It listens for events from your enterprise systems, executes workflow logic in response, and publishes its own events that other systems can react to.
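The dual producer/consumer role can be shown with a toy in-memory bus. This is a sketch only; a production deployment would use a real broker such as Kafka or EventBridge, and the event names here are illustrative.

```python
# Toy in-memory event bus illustrating producer and consumer roles.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every subscriber independently.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
handled = []

# The workflow platform consumes "order.placed" and produces "order.validated".
bus.subscribe("order.placed", lambda e: bus.publish("order.validated", e["order_id"]))
# A downstream system reacts to the platform's own event.
bus.subscribe("order.validated", lambda order_id: handled.append(order_id))

bus.publish("order.placed", {"order_id": "A-1001"})
```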

When to Use It

Event-driven architecture excels when:

  • You need real-time or near-real-time data propagation across systems
  • Multiple systems need to react to the same event independently
  • You want loose coupling — the ability to add, remove, or replace systems without rewriting integrations
  • Your organization already uses a message broker or event streaming platform
  • You need to handle high-throughput scenarios (thousands of events per second)

Implementing Event-Driven Integrations

If you are building event-driven integrations with your workflow platform, there are several design decisions to get right:

Event schema design. Define a clear schema for each event type. Include enough context in the event payload that consumers can act on it without making additional API calls. But do not include sensitive data unnecessarily — every consumer that receives the event gets all of its contents.
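A common envelope shape looks like the sketch below. The field names are illustrative, not a prescribed standard; the point is that the envelope carries an ID, a timestamp, and a version alongside the business payload.

```python
# Sketch of an event envelope with enough context for consumers to act
# without follow-up API calls.
import json
import uuid
from datetime import datetime, timezone

def make_event(event_type: str, payload: dict) -> dict:
    return {
        "event_id": str(uuid.uuid4()),   # unique ID enables idempotent consumers
        "event_type": event_type,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "schema_version": 1,             # supports later schema evolution
        "payload": payload,              # business context, minus sensitive data
    }

event = make_event("order.placed", {"order_id": "A-1001", "total": "99.00"})
wire_format = json.dumps(event)  # what actually crosses the broker
```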

Idempotency. Events can be delivered more than once (network retries, broker redelivery). Every consumer must handle duplicate events gracefully. The simplest approach is to include a unique event ID and track which IDs have been processed.
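The ID-tracking approach can be sketched in a few lines. An in-memory set is enough for illustration; a real consumer would persist processed IDs so deduplication survives restarts.

```python
# Simplest idempotent consumer: track processed event IDs, skip duplicates.
processed_ids = set()
results = []

def handle_once(event: dict) -> bool:
    """Process an event at most once; return True if it was new."""
    if event["event_id"] in processed_ids:
        return False  # duplicate delivery, safely ignored
    processed_ids.add(event["event_id"])
    results.append(event["payload"])
    return True

evt = {"event_id": "evt-1", "payload": "order A-1001"}
first = handle_once(evt)   # processed
second = handle_once(evt)  # broker redelivery, skipped
```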

Ordering guarantees. Some message brokers guarantee ordering within a partition or queue; others do not. If your workflow depends on processing events in the order they occurred (for example, “created” before “updated”), verify that your broker configuration supports this.

Dead letter queues. When a consumer fails to process an event after repeated retries, the event should land in a dead letter queue for manual investigation — not disappear silently. Configure alerting on dead letter queue depth so failures are caught quickly.
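The retry-then-dead-letter flow can be sketched as below. The retry count is illustrative and not tied to any particular broker; most brokers provide this mechanism as configuration rather than application code.

```python
# Sketch of retry-then-dead-letter handling for a failing consumer.
MAX_RETRIES = 3
dead_letter_queue = []

def consume(event: dict, handler) -> bool:
    """Try a handler up to MAX_RETRIES times, then dead-letter the event."""
    last_error = None
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            handler(event)
            return True
        except Exception as exc:
            last_error = str(exc)
    # Exhausted retries: park the event for manual investigation.
    dead_letter_queue.append({"event": event, "error": last_error})
    return False

def always_fails(event):
    raise ValueError("downstream system unavailable")

ok = consume({"event_id": "evt-9"}, always_fails)
```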

Schema evolution. Your event schemas will change over time as business requirements evolve. Plan for backward compatibility from the start. Adding new optional fields is safe. Removing or renaming fields requires a migration strategy.
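Backward compatibility in practice means new consumers must tolerate old events. A minimal sketch, with illustrative field names:

```python
# A v2 consumer reads v1 events by giving the new optional field a default
# instead of requiring it.
def read_order_event(event: dict) -> dict:
    payload = event["payload"]
    return {
        "order_id": payload["order_id"],
        # "channel" was added in v2; the default keeps v1 events readable.
        "channel": payload.get("channel", "unknown"),
    }

v1_event = {"schema_version": 1, "payload": {"order_id": "A-1"}}
v2_event = {"schema_version": 2, "payload": {"order_id": "A-2", "channel": "web"}}

old = read_order_event(v1_event)
new = read_order_event(v2_event)
```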

Trade-offs

Event-driven architecture adds operational complexity. You have a new piece of infrastructure (the message broker) to manage, monitor, and secure. Debugging is harder because the flow of execution is not linear — you need distributed tracing to follow an event through multiple consumers.

EDA also introduces eventual consistency. When System A publishes an event and System B consumes it, there is a delay — usually milliseconds, but potentially longer under load. If your workflow requires strict transactional consistency across systems, pure event-driven architecture may not be sufficient.

Pattern 3: API Gateway

An API gateway pattern places a centralized gateway in front of your enterprise APIs. The workflow platform calls the gateway, and the gateway routes requests to the appropriate backend system, handling cross-cutting concerns like authentication, rate limiting, and request transformation.

How It Works

The API gateway acts as a facade. The workflow platform makes a single, standardized API call. The gateway translates that call into whatever format the backend system requires — REST, SOAP, GraphQL, gRPC, or even a proprietary protocol. From the workflow platform’s perspective, every system looks the same.

This is particularly valuable when dealing with legacy systems. A 20-year-old ERP that exposes a SOAP interface with XML payloads can be wrapped behind a modern REST API. The workflow platform never needs to know about the complexity behind the gateway.
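The translation step at the heart of the facade can be sketched as follows. The request and element names here are hypothetical; a real gateway would also build the SOAP envelope, handle namespaces, and parse the response.

```python
# Illustrative facade: the gateway accepts a standardized dict-shaped call
# and renders the XML body a hypothetical legacy endpoint expects.
import xml.etree.ElementTree as ET

def to_legacy_xml(request: dict) -> str:
    """Translate a canonical request into the legacy system's XML format."""
    root = ET.Element("GetCustomerRequest")
    for field, value in request.items():
        ET.SubElement(root, field).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_body = to_legacy_xml({"CustomerId": "42"})
```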

When to Use It

An API gateway is the right pattern when:

  • You have many systems with heterogeneous API styles (REST, SOAP, GraphQL, proprietary)
  • You need centralized policy enforcement (rate limiting, IP whitelisting, request validation)
  • You want to abstract away the complexity of legacy system interfaces
  • Your organization has an API management team or platform already in place
  • You need detailed API analytics and usage tracking

Key Capabilities to Implement

Request and response transformation. The gateway should translate between data formats. If the workflow platform sends JSON and the backend expects XML, the gateway handles the conversion. If field names differ, the gateway maps them.

Authentication and authorization. Rather than configuring credentials for every backend system in the workflow platform, store them in the gateway. The workflow platform authenticates to the gateway with a single token, and the gateway handles authentication to each backend independently. This centralizes credential management and reduces the attack surface.

Rate limiting and throttling. Legacy systems often cannot handle the request volume that an automated workflow can generate. The gateway enforces rate limits per-backend, queuing or rejecting requests that would overwhelm a target system.
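A common way to enforce per-backend limits is a token bucket, sketched below. Capacity and refill rate are illustrative; most gateway products expose these as configuration.

```python
# Minimal token-bucket rate limiter of the kind a gateway applies per backend.
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should queue or reject the request

bucket = TokenBucket(capacity=2, refill_per_sec=0)  # no refill: easy to trace
decisions = [bucket.allow() for _ in range(3)]
```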

Circuit breaking. If a backend system is down or degraded, the gateway should stop sending requests to it temporarily (the circuit breaker pattern) rather than queueing thousands of requests that will all time out. This protects both the backend system and the workflow platform from cascading failures.
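A stripped-down circuit breaker looks like the sketch below. The failure threshold is illustrative, and a real implementation would add a half-open state that periodically probes the backend for recovery.

```python
# Sketch of a circuit breaker: after a failure threshold, calls are rejected
# immediately instead of piling up against a dead backend.
class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: backend skipped")
        try:
            result = fn()
            self.failures = 0  # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True  # stop sending traffic to the backend
            raise

breaker = CircuitBreaker(failure_threshold=2)

def failing_backend():
    raise ConnectionError("backend down")

for _ in range(2):  # two consecutive failures trip the breaker
    try:
        breaker.call(failing_backend)
    except ConnectionError:
        pass
```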

Caching. For data that changes infrequently (reference data, configuration, lookup tables), the gateway can cache responses and serve them without hitting the backend. This reduces load on legacy systems and improves workflow execution speed.
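The caching behavior can be sketched with a tiny TTL cache. The loader function stands in for a backend call; the TTL value is illustrative.

```python
# Tiny TTL cache of the kind a gateway uses for slow-changing reference data.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, loader):
        entry = self._store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]  # still fresh: serve without hitting the backend
        value = loader(key)
        self._store[key] = (value, time.monotonic())
        return value

backend_calls = []

def load_from_backend(key):
    backend_calls.append(key)  # record each real backend hit
    return {"code": key, "label": "United States"}

cache = TTLCache(ttl_seconds=60)
first = cache.get("US", load_from_backend)
second = cache.get("US", load_from_backend)  # served from cache
```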

Trade-offs

An API gateway adds another layer of infrastructure and another potential point of failure. It also adds latency — every request goes through an extra hop. For most enterprise scenarios, this latency (a few milliseconds) is negligible, but it is worth measuring.

The gateway can also become a governance bottleneck. If every new integration requires an API to be configured in the gateway, and the gateway team has a two-week backlog, your automation velocity suffers. Ensure the gateway team is aligned with the automation team’s pace.

Connecting Legacy Systems

No discussion of enterprise integration is complete without addressing legacy systems. These are the systems that run critical business processes, have limited or no API support, and cannot be replaced anytime soon.

Common Legacy Integration Approaches

Database-level integration. Read from or write to the legacy system’s database directly. This is fast and reliable, but it bypasses the application’s business logic and validation. Use this approach only for read-only data extraction, not for writing data that the application expects to control.

File-based integration. Many legacy systems can export and import flat files (CSV, fixed-width, XML). Set up automated file generation on the legacy side and file ingestion on the workflow platform side. This is low-tech but battle-tested.

Screen scraping and RPA. For systems with no API and no file export capability, robotic process automation can interact with the user interface programmatically. This is fragile — any UI change can break the integration — but it is sometimes the only option.

Middleware adapters. Products like MuleSoft, Dell Boomi, and Workato specialize in wrapping legacy systems with modern APIs. If you have many legacy systems, a dedicated integration platform may be worth the investment.

Data Synchronization Patterns

When data exists in multiple systems, you need a synchronization strategy. There are three primary approaches:

System of record. Designate one system as the source of truth for each data entity. All other systems receive updates from the system of record. This is the simplest pattern and should be your default.

Last-write-wins. Allow any system to update a record, and propagate the most recent change to all other systems. This works for low-contention data but can cause subtle bugs when two systems update the same record simultaneously.

Conflict resolution. When simultaneous updates conflict, apply business rules to determine which update wins. This is the most complex pattern and should only be used when genuinely necessary — for example, when field teams and back-office teams update the same customer record.
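The last-write-wins strategy described above reduces to a timestamp comparison, sketched here with illustrative record shapes. Note the caveat from the text: concurrent updates can silently overwrite each other.

```python
# Last-write-wins merge on a per-record timestamp (ISO 8601 strings in a
# single format compare correctly as strings).
def merge_last_write_wins(local: dict, incoming: dict) -> dict:
    """Keep whichever version of the record was updated most recently."""
    return incoming if incoming["updated_at"] > local["updated_at"] else local

crm_copy = {"customer_id": "42", "phone": "555-0100",
            "updated_at": "2024-05-01T10:00:00Z"}
erp_copy = {"customer_id": "42", "phone": "555-0199",
            "updated_at": "2024-05-01T12:30:00Z"}

winner = merge_last_write_wins(crm_copy, erp_copy)
```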

Handling Authentication Across Systems

A workflow that spans five systems needs valid credentials for all five. Managing this securely is a non-trivial problem.

Best Practices

Use service accounts, not personal credentials. Create dedicated service accounts for the workflow platform in each connected system. This avoids the problem of workflows breaking when an employee changes their password or leaves the company.

Centralize credential storage. Store all integration credentials in a secrets manager (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault) rather than in the workflow platform’s configuration. Rotate credentials on a schedule.

Prefer OAuth 2.0 and token-based authentication. Where supported, use OAuth with short-lived access tokens and refresh tokens. This limits the blast radius of a compromised token.

Implement least privilege. Each service account should have the minimum permissions needed for the workflows it supports. If a workflow only reads customer records, its service account should not have write access.

Audit authentication events. Log every authentication event — successful and failed — and review logs regularly. Unusual patterns (failed attempts, access from unexpected locations, off-hours activity) should trigger alerts.

Choosing the Right Pattern

In practice, most enterprises end up with a hybrid architecture. The workflow platform uses hub-and-spoke for most integrations, event-driven patterns for real-time scenarios, and an API gateway for legacy systems and centralized policy enforcement.

When deciding which pattern to apply to a specific integration, ask these questions:

  1. What is the latency requirement? If the workflow needs data in milliseconds, event-driven is the right choice. If seconds or minutes are acceptable, hub-and-spoke is simpler.
  2. How many consumers need this data? If only the workflow platform needs it, a direct connector is fine. If multiple systems react to the same event, use an event bus.
  3. How reliable is the target system? If it has frequent downtime, use an API gateway with circuit breaking and retry logic.
  4. What is the data volume? Low to moderate volume works with any pattern. High volume favors event-driven with a scalable message broker.
  5. What authentication does the target system support? If it only supports basic authentication or API keys, put an API gateway in front of it to add a modern auth layer.

Next Steps

Integration architecture decisions have long-lasting consequences. They affect performance, maintainability, security, and the speed at which you can automate new workflows. Take the time to evaluate your current landscape and choose patterns that match your organization’s reality — not just your aspirations.

Explore Get UI Flow’s integration capabilities to see which connectors and patterns are available out of the box. If you have a complex integration landscape and want to talk through architecture options, schedule a demo and bring your systems inventory. We can map out an integration strategy tailored to your stack.

This article is also available in Chinese.