
Deep Process Automation: Process-first Architecture for Operating Enterprise Processes


Key Message

  • An enterprise at any given time runs (operates) thousands of business processes across many software systems and teams.
  • To operate a business process we need: i) systems executing each business function, ii) connectors and interfaces for each system, and iii) business logic to operate the process across the systems.
  • Today, processes have i) a mature software system ecosystem for executing each business function (e.g. ERP, CRM, CMS, BI), and ii) advanced & standardised interfaces in/out of each system (e.g., APIs, queues, and other event-driven and message-passing systems, including Kafka, AMQP, and JMS).
  • But operating the processes across these systems is difficult: current approaches are (i) slow, (ii) costly, and (iii) inflexible to change.
  • Enterprise processes broadly fall into two categories: i) simple, and ii) complex.
  • Simple processes (known as tasks and workflows) (i) have 1-10 tasks, (ii) are localised to 1 or 2 operational teams, and (iii) involve a few software systems.
  • Complex processes (i) have 50+ tasks, (ii) are operationally distributed across 3+ separate teams, and (iii) involve multiple software systems.
  • For operating simple processes there are well-known solutions using workflow and RPA technologies.
  • For operating complex processes, there are conventional approaches which use combinations of i) process orchestration, and ii) process choreography.
  • These approaches use non-standard and fragmented logic to operate the process, which ultimately makes them impractical (slow, expensive, and inflexible to change) at scale.
  • Deep Process Automation is a new paradigm for operating complex processes that is practical (fast, low-cost, and flexible) at scale.
  • DPA achieves this by adopting a distributed architecture, a natural choice for wrangling the complexities inherent in operating complex processes that are spread across operationally separate teams.
  • Luther’s DPA platform provides a vertically integrated stack optimised for operating complex business processes.
  • The platform abstracts away the common requirements of developing and operating a complex process, enabling enterprises to focus on the application business logic and to rapidly develop, at low cost, long-lived complex applications that remain flexible to changes in the process over time.
  • This platform is already in production and is expandable to standardise the operations across all complex processes.

Introduction

  • An enterprise at any given time runs (operates) thousands of business processes across many software systems and teams.
  • To operate a business process we need: i) systems executing each business function, ii) connectors and interfaces for each system, and iii) business logic to operate the process across the systems.
  • Today, processes have 1) a mature software system ecosystem for executing each business function (including ERP, CRM, CMS, and BI), and 2) advanced & standardised interfaces in/out of each system (including APIs, queues, and other event-driven and message-passing systems such as Kafka, AMQP, and JMS).
  • But operating the processes across these systems is difficult: current approaches are (i) slow, (ii) costly, and (iii) inflexible to change.
    • These inefficiencies are due to 1) manual steps, 2) ad-hoc point-to-point API integrations to cross system boundaries, and 3) localised, fragmented, and bespoke logic distributed across systems & teams.
  • These approaches are not practically adaptive to change as teams, systems, and processes drift.
    • Over time, changes to teams, logic, and rules result in localised process updates that are not reflected in the end-to-end process operations.
  • As a result, limitations emerge, including business and compliance errors, limited process execution visibility, increased troubleshooting and change management costs, lower developer velocity due to unexpected patching, and the need for system reconciliation.
    • These limitations ultimately result in increased overall processing time and cost.
  • Enterprise processes broadly fall into two categories: simple and complex.
  • We define process complexity by the number of (i) tasks, (ii) operationally separate participants, and (iii) software systems participating in operating the process.
    • Simple processes (known as tasks and workflows) (i) have 1-10 tasks, (ii) are localised to 1 or 2 operational teams, and (iii) involve a few software systems.
    • Complex processes (i) have 50+ tasks, (ii) are operationally distributed across 3+ separate teams, and (iii) involve multiple software systems.
  • There are well-known solutions for operating simple processes, using workflow and RPA technologies.
    • RPA systems automate small processes that require manual steps due to a lack of APIs.
    • Workflow systems standardise the execution of business processes at a local level (1-2 operationally separate teams and up to 10 tasks).
  • Conventional approaches for operating complex processes use combinations of i) process orchestration and ii) process choreography.
    • Process orchestration approaches stitch together combinations of RPA and workflow systems through ad-hoc point-to-point message passing techniques.
    • Event Driven Architectures and microservices use process choreography, where distributed microservices raise events on message queues for other services to listen and react.
  • These approaches use non-standard and fragmented logic implementations to operate the process, which ultimately makes them impractical (expensive, slow, and inflexible to change) at scale.
    • All of this localised process operations logic (a shared kernel) must adapt as the teams, systems, and process evolve, which introduces errors and considerable maintenance costs.
  • Deep Process Automation is a new paradigm for operating complex processes that is practical (low-cost, fast to deliver, flexible) at scale.
    • DPA interconnects separate teams through a DLT network that executes a common Smart Contract for process operations.
    • DPA allocates a node to each operational entity to form a network.
    • It deploys connectors that interface each entity’s systems with its node.
    • It encodes the process operations logic within a smart contract.
    • It executes the smart contract across the network of nodes.
  • DPA achieves this by adopting a distributed architecture, a natural choice for wrangling the complexities inherent in operating complex processes that are spread across operationally separate teams.
    • Smart contracts make otherwise fragmented and local operations logic explicit and standardised, ensuring consistent operations across the participants.
  • Luther’s DPA platform provides a vertically integrated stack optimised for operating complex business processes. It abstracts away these complexities to enable enterprises to focus on the application business logic.
    • DPA standardises the execution of complex processes that run across the entire enterprise through a common process operations platform.
  • This platform is already in production and is expandable to standardise the operations across all complex processes.

Problem Statement
Large enterprises execute thousands of processes each day, each of which touches several software systems, teams, and technologies that must seamlessly interoperate and coordinate. To operate a business process we need: i) systems executing each business function, ii) connectors and interfaces for each system, and iii) business logic to operate the process across the systems (process operations).

Today, processes have i) a mature software system ecosystem for executing each business function, and ii) advanced & standardised interfaces in/out of each system. Individual business functions are heavily optimised and standardised through streamlined microservice and event-driven architectures and modern APIs.

Typical systems include Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Content Management Systems (CMS), and Business Intelligence (BI). They are specialised and optimised to execute tasks within a specific business domain. Although they execute tasks as part of an overall business process, these systems do not track or otherwise monitor the overarching business processes in which they participate.

Modern IT systems provide standardised interfaces through APIs (such as OpenAPI, SOAP, and gRPC), queuing systems (such as AMQ and RabbitMQ), and other message-passing systems (including Kafka and JMS). These interfaces passively receive requests from other systems, perform business functions within the system’s domain, and return a response.
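
As an illustration of such a passive interface, here is a minimal Go sketch, assuming a hypothetical claims-assessment endpoint and payload shapes (not any specific vendor API): the service receives a request, performs a business function within its own domain (stubbed here), and returns a response.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ClaimRequest and ClaimResponse are hypothetical payloads for a single
// business function exposed by one system.
type ClaimRequest struct {
	ClaimID string  `json:"claim_id"`
	Amount  float64 `json:"amount"`
}

type ClaimResponse struct {
	ClaimID string `json:"claim_id"`
	Status  string `json:"status"`
}

// assessClaim passively receives a request, performs a business function
// within this system's domain (stubbed), and returns a response.
func assessClaim(w http.ResponseWriter, r *http.Request) {
	var req ClaimRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	resp := ClaimResponse{ClaimID: req.ClaimID, Status: "assessed"}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/claims/assess", assessClaim)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```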

Despite these advances in systems and interfaces, operating a process across systems remains challenging. Process operations refers to the coordinated execution of the following sequential activities: i) select and trigger a task, ii) communicate with the participant/system, iii) participant/system execution, iv) monitoring of the execution, and v) receive & verify execution results. These steps repeat in a loop until the process is complete.
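
To make the loop concrete, the Go sketch below walks a hypothetical three-task claims process through the activities above; the task names, participants, and synchronous dispatch are illustrative assumptions, not part of any real deployment.

```go
package main

import (
	"errors"
	"fmt"
)

// Task and Result are hypothetical types standing in for one step of a
// business process and its outcome.
type Task struct{ Name, Participant string }

type Result struct {
	Task Task
	OK   bool
}

// process is a hypothetical ordered task list used for illustration.
var process = []Task{
	{"validate-claim", "intake-team"},
	{"assess-claim", "assessor-team"},
	{"settle-claim", "finance-team"},
}

// dispatch stands in for (ii) communicating with the participant/system
// and (iii) the participant/system executing the task; it is synchronous,
// so (iv) monitoring collapses into waiting for the call to return.
func dispatch(t Task) Result {
	fmt.Printf("sending %q to %s\n", t.Name, t.Participant)
	return Result{Task: t, OK: true} // stubbed execution
}

// verify stands in for (v) receiving and verifying execution results.
func verify(r Result) error {
	if !r.OK {
		return errors.New("task failed: " + r.Task.Name)
	}
	return nil
}

func main() {
	// (i) select and trigger each task in turn, looping until the
	// process is complete or a step fails verification.
	for _, t := range process {
		if err := verify(dispatch(t)); err != nil {
			fmt.Println("halting process:", err)
			return
		}
	}
	fmt.Println("process complete")
}
```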

Current approaches to process operations are (i) slow, (ii) costly, and (iii) inflexible to change. These inefficiencies are due to i) manual steps, ii) ad-hoc point-to-point API integrations to cross system boundaries, and iii) localised, fragmented, and bespoke logic distributed across systems & teams. These approaches do not adapt well as teams, systems, and processes drift. The top of Fig. 1 illustrates the mature systems and advanced connectors; the bottom of the diagram (red) depicts the point-to-point connections and distributed logic that stitch the systems together across the various teams, and that are the source of these inefficiencies.

Fig 1. Process operations today: mature systems and advanced connectors (top), stitched together by point-to-point connections and fragmented logic distributed across teams (bottom, in red).

When a new business process is developed, the delivery team spends considerable effort in development and testing to ensure that it executes correctly at that point in time. Bespoke code is developed for the current version of the process by developers with local knowledge of their part of the process, as part of a dedicated project team. However, over time the original delivery teams move on to other projects, the systems involved evolve independently, and the local operations logic continuously changes to meet new requirements. Connections between systems quickly become outdated and are left behind as the process changes and the separate teams upgrade and replace their systems.

As time goes on, it becomes increasingly difficult to give due consideration to the downstream impact of these localised changes, and as a result operations inefficiencies emerge, including business and compliance errors, limited process execution visibility, increased troubleshooting and change management costs, lower developer velocity due to unexpected patching, and the need for system reconciliation. To make matters worse, the business process becomes “ossified” (inflexible) as it becomes too risky and costly to change. These limitations ultimately result in increased overall processing time and cost as the process becomes more complex.

Enterprise processes broadly fall into two categories: simple and complex. We define process complexity by the number of (i) tasks, (ii) operationally separate participants, and (iii) software systems participating in operating the process. Simple processes (known as tasks and workflows) (i) have 1-10 tasks, (ii) are localised to 1 or 2 operational teams, and (iii) involve a few software systems. Complex processes (i) have 50+ tasks, (ii) are operationally distributed across 3+ separate teams, and (iii) involve multiple software systems.

There are well-known and widely adopted solutions for operating simple processes. These conventional systems focus on individual teams with small numbers of tasks, with a robust ecosystem developed around Robotic Process Automation (RPA, e.g., UiPath & Blue Prism), workflow automation tools (WFA, e.g., Pega & Appian), and Business Process Management (BPM).

RPA is effective for automating small-complexity processes with 1-2 tasks across several legacy systems within a single team. RPA systems automate small processes that require manual steps due to a lack of APIs. RPA’s successor technology, Intelligent Process Automation (IPA), improves RPA systems to make them easier to develop and more robust against changes by leveraging AI (e.g., computer vision). Despite these improvements, RPA & IPA are heavily optimised for small tasks across one or more systems within the same team and business function. Some teams try to scale their automations to larger processes by connecting multiple RPA bots together through bespoke code; however, in practice this approach is 1) brittle, 2) hard to maintain, and 3) difficult to troubleshoot.

To overcome these limitations, teams use workflow execution systems (e.g., Pega, Appian, IBM BPM) that are designed for simple processes that span systems with APIs. Batch schedulers are a traditional type of workflow system that queues jobs consisting of multiple tasks and executes these jobs on a predetermined schedule. Tasks in these jobs typically involve making API calls to various systems to retrieve data, processing the data, and then making additional API calls. Batch schedulers are effective at coordinating the execution of tasks within a single team. Modern workflow systems also include native integrations with common systems and offer low-code development experiences. Workflow systems are effective for executing small-complexity business processes of 2-10 tasks, several systems, and one or two operationally separate teams. However, proprietary workflow systems can be quite costly, and often require specialised development teams (e.g., Pega developers) despite their low-code features.

Compared with the industries surrounding simple process operations, process operations for complex processes are underserved. Conventional approaches for operating complex processes use combinations of i) process orchestration and ii) process choreography.

Process Orchestration

Fig 2. Process orchestration pattern adopted by conventional Service-oriented Architectures (SOA).

Process orchestration (Fig. 2) approaches stitch together combinations of RPA and workflow systems through ad-hoc point-to-point message passing techniques. Here the composite service is tightly coupled to the various business function services in order to send them explicit commands in a predetermined sequence. The composite service implements a batch scheduler or workflow system. Although these composite systems are effective at coordinating the execution of tasks within a single team, they face inefficiencies when crossing team boundaries in complex business processes. These inefficiencies stem from a lack of visibility and unclear governance of the composite service that orchestrates the entire process. As a result, each operationally separate team implements its own composite services for its individual part of the process, using a variety of technologies. In practice, participants chain together workflows by running multiple workflow instances that communicate locally through standardised interfaces, point-to-point API calls, and bespoke code known as “script bloat”. The lack of coordination across these composite services results in process execution inefficiencies, including errors and limited execution visibility across system boundaries. As a result, subsequent offline reconciliation processes are necessary to ensure that the separate teams and their systems are in agreement. Reconciliation introduces operations friction, delays, and economic losses from uncertainty and compliance violations.
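
A stripped-down Go sketch of this pattern (the service URLs and payload are hypothetical): the composite service is hard-wired to each business function service and issues explicit commands in a predetermined sequence, which is exactly where the tight coupling comes from.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Hypothetical endpoints of the business function services; the tight
// coupling of the composite service is visible in the hard-coded URLs
// and the predetermined call order.
var steps = []string{
	"http://intake.internal/api/validate",
	"http://assessment.internal/api/assess",
	"http://finance.internal/api/settle",
}

func main() {
	payload := []byte(`{"claim_id":"C-123"}`)
	for _, url := range steps {
		// Explicit point-to-point command to one business function service.
		resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
		if err != nil {
			// The composite service alone owns sequencing and failure handling.
			fmt.Println("orchestration halted:", err)
			return
		}
		resp.Body.Close()
	}
	fmt.Println("process orchestrated end to end")
}
```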

Process Choreography

Fig 3. Process choreography pattern adopted by modern event-driven architecture and microservice architectures.

Modern Event Driven Architectures (EDA) and microservice architectures (MSA) use process choreography (Fig. 3) to attempt to overcome the limitations of conventional process orchestration: distributed microservices stream events on message queues (e.g., Kafka) for other services to listen to and react to. This approach is inherently distributed and removes the composite service coordination and coupling problems across team boundaries. However, it also distributes the process operations logic across all the individual services, where no individual service has a coherent view of the end-to-end process. Although the individual connections between these services are standardised, the execution logic coordinating the handoff between the systems is localised and left to the implementers of the individual service, and none of the participants has a global view of the execution.
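
The Go sketch below models choreography with channels standing in for broker topics (e.g., Kafka topics); the services and event flow are hypothetical. Each service knows only the topics it consumes and emits, so no service holds the end-to-end process.

```go
package main

import "fmt"

// Event is a hypothetical message; each channel stands in for a broker
// topic (e.g., a Kafka topic) in this sketch.
type Event struct{ ClaimID string }

func main() {
	received := make(chan Event, 1)
	assessed := make(chan Event, 1)
	settled := make(chan Event, 1)

	// Assessment service: consumes "claim received" events and emits
	// "claim assessed" events; it knows nothing of the wider process.
	go func() {
		for ev := range received {
			assessed <- ev
		}
	}()

	// Settlement service: consumes "claim assessed" events and emits
	// "claim settled" events.
	go func() {
		for ev := range assessed {
			settled <- ev
		}
	}()

	// A new event starts the chain; no single service holds the
	// end-to-end process, only its own subscriptions and publications.
	received <- Event{ClaimID: "C-123"}
	fmt.Println("settled", (<-settled).ClaimID)
}
```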

In practice, both of these approaches use non-standard and fragmented logic implementations to operate the process, which ultimately makes them impractical (expensive, slow, and inflexible to change) at scale. This localised process operations logic effectively forms a shared kernel that must adapt seamlessly as the teams, systems, and process evolve.

Ultimately it is necessary to adopt new platforms that are optimised for complex business process operations. What is needed is a standardised and efficient way to operate complex processes, including scheduling, execution, and verification across its participants.

Deep Process Automation

Deep Process Automation is a new paradigm for operating complex processes that is practical (low-cost, fast to deliver, and flexible) at scale. It is a process-driven approach to automation that holistically addresses the inefficiencies of complex business processes across their entire lifecycle. This approach is in contrast with conventional IT design, which optimises individual business units and functions and deploys point solutions to eliminate local inefficiencies. Fig. 4 illustrates where DPA sits on the complexity spectrum: RPA and workflow automation (WFA) tools target low-complexity processes (on the left), while super-complex processes use bespoke code (on the right).

Fig 4. Solution space for process operations with increasing complexity (number of operationally separate participants and number of tasks).

DPA achieves its efficiencies by adopting a distributed architecture, a natural choice for wrangling the complexities inherent in operating complex processes that are spread across operationally separate teams. DPA interconnects separate teams through a DLT network that executes a common Smart Contract for process operations. Smart contracts make otherwise fragmented and local operations logic explicit and standardised through a single logical runtime, ensuring consistent operations across the distributed participants. Fig. 5 illustrates the core principles and components in a DPA platform. Fig. 6 illustrates the distributed message flows across the local operations through a common orchestrator Smart Contract.

Fig 5. Core components of a Deep Process Automation (DPA)  Platform.

With DPA, each operationally separate participant in the process is allocated a node. DPA deploys connectors that are the interfaces between the participant’s systems and their node. Collectively the nodes form a network and execution layer for Smart Contracts. DPA encodes the process operations logic within a smart contract and executes the smart contract across the network of nodes. Fig. 6 illustrates a network topology view of an example DPA deployment. The operationally separate teams sit around the core network and their respective business function systems interconnect through a single touchpoint. The inner core network is managed by the DLT protocols, which manage transactions and coordinate the Smart Contract execution.

Fig 6. Network view that illustrates the distributed architecture of a DPA platform and how it connects to participant systems.

Smart Contracts are the guardrails that standardise the interactions across the various systems operated by the participants. Smart Contracts capture the embedded and local operations logic that is implicit between APIs, which the participants have a joint interest in executing correctly. Smart Contracts orchestrate the data processing and message passing between the participants’ respective systems to ensure that everyone is in agreement on the execution. They adopt the “shared kernel” architecture pattern and capture the common process context that all the participants have a joint interest in.

Each participant node runs an identical copy of the Smart Contract, and the platform ensures that these copies are always in sync according to the agreed-upon governance rules. Smart Contracts enforce the business process that is agreed upon among all the participants, and move the bespoke process operations steps “on-chain.”
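
As a minimal sketch of what on-chain process logic can look like (illustrative only, not Luther’s production contract; the claim record and states are hypothetical), the Go chaincode below uses the Hyperledger Fabric contract API to encode one agreed transition: a claim may only be settled after it has been assessed, and every node enforces the same rule.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// Claim is a hypothetical on-chain record shared by all participants.
type Claim struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// ProcessContract encodes process operations logic that every node
// executes identically.
type ProcessContract struct {
	contractapi.Contract
}

// SettleClaim encodes one agreed transition of the process: a claim may
// only move to "settled" from "assessed". Because every node runs this
// same rule, no participant can diverge from the shared process.
func (c *ProcessContract) SettleClaim(ctx contractapi.TransactionContextInterface, id string) error {
	data, err := ctx.GetStub().GetState(id)
	if err != nil {
		return err
	}
	if data == nil {
		return fmt.Errorf("claim %s not found", id)
	}
	var claim Claim
	if err := json.Unmarshal(data, &claim); err != nil {
		return err
	}
	if claim.Status != "assessed" {
		return fmt.Errorf("claim %s cannot be settled from status %q", id, claim.Status)
	}
	claim.Status = "settled"
	updated, err := json.Marshal(claim)
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(id, updated)
}

func main() {
	cc, err := contractapi.NewChaincode(&ProcessContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```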

Further, the Smart Contract is a living document for the end-to-end process and standardises its operations within a single runtime. In other words, the physically distributed process becomes logically centralised in code. This allows for (i) an efficient and scalable system for operating the process, (ii) resilience to changes in the process over time, and (iii) optimal utilisation and coordination of the software systems through a common platform.

Participants run connectors that use standard integration patterns and technologies to hook into the participant’s local systems and send and receive data from those systems. These connector patterns include 1) RPC (REST/JSON, SOAP/XML, protobuf/gRPC), 2) direct DB (PostgreSQL, Oracle), 3) file transfer (SFTP), and 4) asynchronous queues (AMQP, Kafka). These connectors translate data from the local systems into a common data model[Ref] (CDM) that is used within the Smart Contract and stored on the ledgers in this standard format.
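
A connector’s translation step might look like the following Go sketch, where the local record layout and CDM fields are hypothetical: data read from a participant’s system is mapped into the common data model before it is submitted to the node.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LocalPolicy is a hypothetical record as it exists in one participant's
// system, with local field names and units (premium held in pence).
type LocalPolicy struct {
	PolicyRef    string `json:"POLICY_REF"`
	PremiumPence int64  `json:"PREMIUM_PENCE"`
}

// CDMPolicy is a hypothetical common-data-model shape stored on the
// ledger; every participant's connector converges on this format.
type CDMPolicy struct {
	PolicyID string  `json:"policyId"`
	Premium  float64 `json:"premium"` // pounds in the CDM
	Currency string  `json:"currency"`
}

// toCDM is the connector's translation step from the local schema into
// the common data model used within the Smart Contract.
func toCDM(p LocalPolicy) CDMPolicy {
	return CDMPolicy{
		PolicyID: p.PolicyRef,
		Premium:  float64(p.PremiumPence) / 100,
		Currency: "GBP",
	}
}

func main() {
	local := LocalPolicy{PolicyRef: "P-42", PremiumPence: 129950}
	out, err := json.Marshal(toCDM(local))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // payload the connector submits to its node
}
```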

The DPA platform is responsible for keeping all of the nodes in sync, with a shared copy of the process data, process logic, and history, which enforces correct execution. This architecture eliminates the need for secondary reconciliation systems, as the process is jointly executed by the participants in real time with full transparency for each node.

Fig 7. Luther’s vertically integrated DPA stack applied to the claims settlement processes.

DPA in practice

DPA has been deployed in production to automate a number of applications, including cross-entity claims settlement, mortgage and insurance policy sourcing, issuance of digital assets, and a single view of customer profiles across lines of business. DPA is designed for repeatability across additional use cases, using infrastructure-as-code and containerisation best practices.

Luther’s DPA platform provides a vertically integrated stack optimised for operating complex business processes. It abstracts away these operational complexities to enable enterprises to focus on the application business logic. DPA standardises the execution of complex processes that run across the entire enterprise through a common process operations platform. Fig. 7 illustrates the key layers in the DPA stack, where participant systems are depicted at the top of the diagram and the Smart Contract automation is depicted at the bottom. Functional developers interact with the logic in the bottom layer, while the underlying protocols and servers remain opaque to them.

Luther’s DPA platform supports Domain Driven Design (DDD) patterns, and employs microservices that are coordinated with strong transactional semantics across domains. Re-usable logic modules enable rapid delivery for new use cases. These modules include user & org management, claims, billing, documents, and worklist task scheduling. The continued development of modules across other use cases enables a catalogue of reusable components that form the building blocks for most complex applications.
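
As an illustration of how such modules can compose (the module boundaries below are hypothetical, inferred from the list above, not the platform’s actual interfaces), small Go interfaces let a new use case wire reusable components together and supply only its own business logic.

```go
package modules

// Hypothetical contracts for three of the reusable modules named above
// (worklist, documents, billing), illustrating how they compose.
type Worklist interface {
	Enqueue(taskID, assignee string) error
	Complete(taskID string) error
}

type Documents interface {
	Store(name string, body []byte) (docID string, err error)
}

type Billing interface {
	Invoice(account string, amountPence int64) error
}

// ClaimsApp composes the reusable modules; a new use case supplies only
// its claims-specific business logic on top of these building blocks.
type ClaimsApp struct {
	Tasks Worklist
	Docs  Documents
	Bill  Billing
}
```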

Reusable DPA connectors have been designed to work with existing customer services, including identity authorisation and authentication, payments, and foreign exchange calculations. Luther’s DPA platform is open core, leveraging popular and well-maintained open source projects including Kubernetes, Hyperledger Fabric, Prometheus, Grafana, Docker, and Go.

DPA is well suited for complex business processes and enables faster delivery at lower cost than conventional process orchestration and choreography. Most importantly, processes running on the platform are easily adaptable, unlocking rapid innovation. Luther’s DPA platform & process-driven strategy has already resulted in 15-20x ROI for key customers and is expandable to standardise the operations across all complex processes.

Conclusion

    • DPA is a process-driven approach to automation that examines and eliminates inefficiencies of complex business processes holistically, across their entire lifecycle.
    • Complex processes today are chained together with ad-hoc logic using point-to-point API integrations, resulting in operations inefficiencies.
    • These inefficiencies include business and compliance errors, limited process execution visibility across systems, expensive troubleshooting, and the need for reconciliation processes.
    • With DPA, each operationally separate participant in the process is allocated a node. Collectively the nodes form a network and execution layer for Smart Contracts.
    • The Smart Contracts are the guardrails that standardise the interactions across the various systems operated by the participants. They capture the implicit processes between the APIs.
    • Today DPA has successfully been applied across multiple domains, including insurance, legal, and mortgage processing. DPA’s open-core and platform strategy results in lower costs in comparison with conventional approaches, and has already delivered 15-20x ROI for customers.
