Mule 3
Mule works by responding to events (such as the receipt of a message) that are initiated by
external resources. This follows the concept of Event Driven Architecture (EDA).
o At the simplest level, Mule applications accept and process events as messages through
several message processors.
o Message processors are arranged into a flow (or several of them).
Flow Configuration
A Flow is configured in XML using the <flow> element. Each flow has a name attribute, a
message source (unless it’s a private flow), one or more message processors and an optional
exception strategy.
Basic Structure
<flow name="">
- 0..1 MessageSource
- 1..n MessageProcessor(s)
- 0..1 ExceptionStrategy
</flow>
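A minimal sketch of this structure follows; the flow name, listener path, and logger message are hypothetical, and the HTTP configuration reference assumes a global element defined elsewhere.

```xml
<!-- Minimal flow: one message source, two message processors,
     and an optional exception strategy. Names are illustrative. -->
<flow name="exampleFlow">
    <!-- 0..1 message source -->
    <http:listener config-ref="HTTP_Listener_Configuration" path="/orders" doc:name="HTTP"/>
    <!-- 1..n message processors -->
    <logger message="Received: #[payload]" level="INFO" doc:name="Logger"/>
    <set-payload value="#['Order accepted']" doc:name="Set Payload"/>
    <!-- 0..1 exception strategy -->
    <catch-exception-strategy>
        <logger message="Error: #[exception.message]" level="ERROR" doc:name="Log Error"/>
    </catch-exception-strategy>
</flow>
```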
Types of Flows
When its execution is triggered by another flow in an application, a flow exists as one of three types:

1. Subflow — A subflow processes messages synchronously (relative to the flow that triggered its execution) and always inherits both the processing strategy and the exception strategy employed by the triggering flow. While a subflow is running, processing on the triggering flow pauses, then resumes only after the subflow completes its processing and hands the message back to the triggering flow.

2. Synchronous Flow — A synchronous flow, like a subflow, processes messages synchronously (relative to the flow that triggered its execution). While a synchronous flow is running, processing on the triggering flow pauses, then resumes only after the synchronous flow completes its processing and hands the message back to the triggering flow. However, unlike a subflow, this type of flow does not inherit processing or exception strategies from the triggering flow. This type of flow processes messages along a single thread, which makes it ideally suited to transactional processing.

3. Asynchronous Flow — An asynchronous flow processes messages in parallel to the flow that triggered its execution. When a flow passes a message to an asynchronous flow, thus triggering its execution, it simultaneously passes a copy of the message to the next message processor in its own flow. Thus, the two flows – triggering and triggered – execute simultaneously and independently, each finishing on its own. This type of flow does not inherit processing or exception strategies from the triggering flow, and it processes messages along multiple threads.
Type of Flow       | Component                                   | Execution Relative to Triggering Flow | Exception and Processing Strategies
Subflow            | Flow Reference                              | synchronous                           | inherited
Synchronous Flow   | Flow Reference                              | synchronous                           | not inherited
Asynchronous Flow  | Flow Reference wrapped within an Async Scope | asynchronous                          | not inherited
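A sketch of how the three invocation styles are configured (all flow names are hypothetical; the flow-ref element resolves to whichever flow or sub-flow bears the referenced name, and the async scope wraps the reference for the asynchronous case):

```xml
<flow name="mainFlow">
    <!-- Subflow: synchronous, inherits processing and exception strategies -->
    <flow-ref name="mySubFlow" doc:name="Call sub-flow"/>
    <!-- Synchronous flow: synchronous, keeps its own strategies -->
    <flow-ref name="mySyncFlow" doc:name="Call private flow"/>
    <!-- Asynchronous flow: flow-ref wrapped in an async scope -->
    <async doc:name="Async">
        <flow-ref name="myAsyncFlow" doc:name="Call flow asynchronously"/>
    </async>
</flow>

<sub-flow name="mySubFlow">
    <logger message="in sub-flow" level="INFO"/>
</sub-flow>

<flow name="mySyncFlow">
    <logger message="in synchronous flow" level="INFO"/>
</flow>

<flow name="myAsyncFlow">
    <logger message="in asynchronous flow" level="INFO"/>
</flow>
```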
Processing Strategies
Global Element: Description
- queued-thread-per-processor-processing-strategy: Not applicable to most use cases. Writes messages to a queue; every processor in the scope then runs sequentially in a different thread.
- thread-per-processor-processing-strategy: Not applicable to most use cases. Every processor in the scope runs sequentially in a different thread.
- non-blocking: Applicable when using an HTTP Listener at the start of your flow together with one or more HTTP Requesters, provided the flow does not include any components currently unsupported by this strategy. The non-blocking processing strategy uses an evented, non-blocking processing model to process requests. In this model a single thread still handles each incoming request, but non-blocking components return this thread to the listener thread pool; processing can continue only upon obtaining and using a new thread.
- custom processing strategy: A user-written processing strategy. Create the custom strategy through the custom-processing-strategy element and configure it using Spring bean properties. This custom processing strategy must implement the org.mule.api.processor.ProcessingStrategy interface.
o Variables are user-defined metadata about a message. Variables have three scopes:
o Flow variables apply only to the flow in which they exist.
o Session variables apply across all flows within the same application.
o Record variables apply only to records processed as part of a batch.
Variables are temporary pieces of information about a message that are meant to be used
by the application that is processing it, rather than passed along with the message to its
destination.
Message Payload
The message payload is the body of a Mule message. For example, the payload contains the
content of records you retrieve through the Select operation of the Database connector or
the content of a file that you retrieve through a Read operation to the File or FTP connector.
Use set-payload message processor to completely replace the content of the message’s
payload.
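The variable scopes and set-payload described above might be exercised as follows; the variable names, MEL expressions, and values are illustrative only.

```xml
<!-- Sketch: set a flow variable, a session variable, and replace the payload.
     In Mule 3 MEL, flowVars and sessionVars expose the two variable scopes. -->
<flow name="variablesDemoFlow">
    <set-variable variableName="customerId" value="#[message.inboundProperties.'http.query.params'.id]" doc:name="Flow Variable"/>
    <set-session-variable variableName="traceId" value="#[java.util.UUID.randomUUID().toString()]" doc:name="Session Variable"/>
    <set-payload value="#['Customer: ' + flowVars.customerId]" doc:name="Set Payload"/>
</flow>
```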
Message Enricher scope: Mule Message Enricher is one of the scopes in Mule which allows
the current message to be augmented using data from a separate resource, which we call
the Enrichment Resource. The Mule implementation of the Enrichment Resource (a source
of data to augment the current message) can be any message processor.
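As a sketch, the enricher below stores the result of an outbound call in a flow variable instead of overwriting the payload; the endpoint path and target expression are hypothetical.

```xml
<!-- Sketch: enrich the current message with data fetched from a separate
     resource, leaving the original payload intact. -->
<flow name="enricherFlow">
    <enricher target="#[flowVars.city]" doc:name="Message Enricher">
        <http:request config-ref="HTTP_Request_Configuration" path="/cityByZip" method="GET"/>
    </enricher>
    <logger message="City resolved to #[flowVars.city]" level="INFO"/>
</flow>
```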
Message Sources
Mule processes messages, also known as events, which may be transmitted from resources
external to Mule. The message source receives messages from one or more external sources, thus
triggering the execution of a flow (flow instance). Each time it receives another message, the
message source triggers another flow instance. Message sources in Mule are usually Anypoint
Connectors, elements which provide connectivity to a specific external source, either via a standard
protocol (such as HTTP, FTP, SMTP) or a third-party API (such as Salesforce.com, Twitter, or
MongoDB.)
Composite Sources
A special scope known as a Composite Source Scope allows you to encapsulate
two or more connectors that receive the same type of data (for example, email, files,
database maps, or HTML) into a single message processing block. Each embedded
connector listens on its specific channel for incoming messages. Whichever
connector receives a message first becomes the message source for that particular
instance of the flow.
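A sketch of a composite source wrapping two receivers (the configuration references and queue name are hypothetical):

```xml
<!-- Sketch: either an HTTP request or a JMS message triggers the same flow;
     whichever receiver fires first becomes the message source for that instance. -->
<flow name="multiChannelFlow">
    <composite-source doc:name="Composite Source">
        <http:listener config-ref="HTTP_Listener_Configuration" path="/in"/>
        <jms:inbound-endpoint queue="in.queue" connector-ref="jmsConnector"/>
    </composite-source>
    <logger message="Received from either source" level="INFO"/>
</flow>
```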
Message Processors
Typically, message processors are pre-packaged units of functionality that process
messages in various ways. Message processors offer the following advantages:
o Generally, they don’t have to be custom-coded.
o Multiple message processors can be combined into various structures that provide
the exact functionality you need for your application.
Category: Brief Description
- Connectors: They provide a means for Mule applications to communicate with the outside world. Connectors often serve as message sources, but they can also appear elsewhere in a flow, performing operations that require data exchange outside the flow, or defining a final destination of the message.
- Scopes: They enhance, in a wide variety of ways, the functionality of other message processors or functional groups of message processors known as Processing Blocks.
- Components: They allow you to enhance a flow by attaching functionality such as logging, or displaying output. Alternatively, they facilitate integration with existing systems by providing language-specific "shells" that make custom-coded business logic available to a Mule application.
- Transformers: They enhance or alter the message payload, properties, variables, or attachments.
- Filters: Singly and in combination, they determine whether a message can proceed through an application flow based on some condition or test.
- Routers (Flow Controls): They specify how messages get routed among the various message processors within a flow. They can also process messages (that is, aggregate, split, or resequence) before routing them to other message processors.
- Error Handlers: They specify various procedures for handling exceptions under various circumstances.
- Miscellaneous: This special category currently contains just one member: the Custom Business Event processor, which you place between other processors to record Key Performance Indicator (KPI) information, which you monitor through the Mule Console.
Connectors
o Operation-based: When you add an operation-based connector to your flow, you
immediately define a specific operation for that connector to perform. Operation-based
connectors follow an information exchange pattern based on the operation that you
select and are often (but not always) named and based around one or more specific
third-party APIs.
o Endpoint-based : Endpoints pass messages into and out of a Mule flow, usually to
external resources such as databases, Web clients, or email servers, but they can
exchange messages with other Mule flows as well. A global element can be used to
configure an endpoint once and reference it from multiple flows. Endpoint-based
connectors follow either a one-way or request-response exchange pattern and are often
(but not always) named and based around a standard data communication protocol,
such as FTP, JMS, and SMTP.
Inbound Endpoints
An Inbound Endpoint, which resides at the beginning of a flow and acts as
a Message Source, triggers a new flow instance each time it receives a message.
Each incoming message must follow the specific protocol supported by the receiving
endpoint. For example, email can arrive on a POP3 or IMAP inbound endpoint, but
files must use the FTP, File, or SFTP endpoints.
Outbound Endpoints
If an endpoint-based connector is not the first processor (i.e., the message source) in
a flow, it is designated as an outbound endpoint, since it uses the specific transport
channel it supports (such as SMTP, FTP, or JDBC) to dispatch messages to targets
outside the flow, which can range from file systems to email servers to Web clients
and can also include other Mule flows.
Scopes
Scope: Description
- Async: Creates a block of message processors that execute asynchronously while the rest of the flow continues to execute in parallel. For instance, you can populate an Async scope with a sequence of processors that perform logging so that logging does not slow down the rest of the application flow. It can have its own processing strategy. To facilitate this simultaneous branch processing, the async scope sends one copy of the message it has received to the first embedded message processor in its own processing block; at the same time it sends another copy of the message to the next message processor in the main flow.
- Cache: The Cache scope is a Mule feature for storing and reusing frequently called data, saving time and processing load. It caches data produced by part of a flow: wrap a Cache scope around message processors in your flow so that it caches the response events produced within the scope. Advantages: processing repeated requests for the same information, and processing requests for information that involve large, non-consumable message payloads. Different object stores: InMemoryObjectStore, ManagedObjectStore, TextFileObjectStore.
- Composite Source: To accept incoming messages from multiple input channels, place two or more message sources (also known as receivers) into a Composite Source. A message entering the Composite Source on any supported channel triggers the processing flow.
- Foreach: Splits any type of message collection apart into individual messages for processing, and then aggregates them again at the end of the scope.
- Message Enricher: Appends information to a message, often using an expression to determine what part of the payload to evaluate so as to return an appropriate value to append to that payload. For example, the expression can evaluate a ZIP code and then append the associated city and state to the payload. The message processor is executed and the enricher scope uses the result of that execution to enrich the message coming into the scope.
- Poll: Periodically polls an embedded message receiver for new messages. For example, set a Poll to retrieve email at regular intervals by placing a request-response connector such as SMTP within the Poll processing block.
- Sub Flow: A flow that is called by another flow. Sub flows inherit the processing and exception strategies of the calling flow and are always synchronous. This type of scope can be very useful when you need to reuse code at several points within the same flow. Simply place (and configure) Flow Reference components wherever you want the sub flow processing block to execute.
- Transactional: Mule applies the concept of transactions to operations in an application for which the result cannot remain indeterminate. In other words, where a series of steps in a flow must succeed or fail as one unit, Mule uses a transaction to demarcate such a unit.
- Until Successful: Attempts, at a specified interval, to route a message to an embedded message processor until one of the following occurs: the message processor succeeds, the maximum number of retries is reached, or an exception is thrown. Thus, Until Successful can prove useful in sending messages to resources, such as shared printers, which might not always be immediately available. By default, until-successful's processing occurs asynchronously from the main flow.
- Request-Reply: Enables you to embed a pocket of asynchronous processing within a Mule flow. This functionality enables you to receive a response from an asynchronous flow without hardcoding the destination of the response.
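As one concrete example of a scope, the Cache scope (an Enterprise feature in the ee namespace) might wrap an expensive outbound call like this; the flow name, path, and configuration references are hypothetical, and the default in-memory object store is assumed unless another is configured.

```xml
<!-- Sketch: responses produced inside the Cache scope are stored and
     reused for repeated equivalent requests. -->
<flow name="cachedLookupFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/rates"/>
    <ee:cache doc:name="Cache">
        <http:request config-ref="HTTP_Request_Configuration" path="/slowRateService" method="GET"/>
    </ee:cache>
</flow>
```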
o Under some circumstances, splitting a message collection into pieces can cause
certain vital bits of XML — metadata in the header or footer, for example — to be
dropped from the re-aggregated XML.
o When you split certain collection types — Java, for example — into many pieces for
processing, the collection may be re-aggregated into a different collection type —
MuleMessageCollection, for example. (As a result, you may need to add extra flow
steps to transform the processed message collection back into its original collection
type.)
o When you split, process, or aggregate a message collection, you must choose
among several splitter and aggregator types. Sometimes, it proves difficult to
determine which splitter/aggregator combination best suits your message processing
needs.
o Foreach splits collections into elements, then processes them iteratively without
losing any of the message payload.
o After Foreach splits a message collection and processes the individual elements, it
doesn’t re-aggregate those individual elements into a MuleMessageCollection;
rather, it returns the original message. (This results in "Java in, Java out" rather than
"Java in, MuleMessageCollection out.")
o The Foreach scope is versatile; it can iteratively process elements from any type of
collection, including maps, lists, arrays, and MuleMessageCollections.
o The Foreach scope can split and process collections of elements that are not part of
the message payload. For example, Foreach can process message property
collections (metadata) from the message header.
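The Foreach behavior described above can be sketched as follows; the flow name and logger messages are illustrative.

```xml
<!-- Sketch: iterate over a collection payload element by element;
     after the scope, the original (unsplit) payload is returned. -->
<flow name="foreachFlow">
    <foreach doc:name="For Each">
        <logger message="Processing element: #[payload]" level="INFO"/>
    </foreach>
    <logger message="Original collection restored: #[payload]" level="INFO"/>
</flow>
```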
You can’t use transactions in VM and JMS connectors inside a request-reply scope.
Transactions are not compatible with how the request-reply scope works.
The request-reply scope does not send a request until a transaction is committed,
and a transaction in turn is not committed until the entire flow executes, including the
execution of the request-reply scope. This leads to a situation where both processes
block each other.
The runtime can’t fully execute the flow because it’s still waiting for the reply on the
request-reply scope, but this reply never arrives because the request is not sent until
the transaction is committed.
Components
Components fall into three categories: general, script, and web service.
General Components
General components execute whenever a message is received. The logic embedded into General
components cannot be modified. General components allow you to perform general tasks to help
keep your flows organized. They differ from other components in that they do not usually
act on or transform the Mule message.
Component: Description
- Flow Reference: This processor calls another flow. The called flow can be one of two types: a subflow, which inherits the processing strategy and exception handling properties of the calling flow, or a child flow, which sets its own processing strategy and exception handling properties. If the called flow is synchronous, the calling flow waits until the called flow completes execution, then resumes. If the called flow is asynchronous, the calling flow resumes execution immediately.
Script Components
Script components facilitate Software as a Service (SaaS) integration by providing language-
specific "shells" to make custom-coded business logic available in a Mule application. Script
components also allow you to:
o Configure interceptors
o Add Spring beans
o Change the value or reference of a specific property within the associated class
The Java Component allows you to reference a Java class. The other Script components support the
Groovy, JavaScript, Python and Ruby scripting engines.
Components: Description
- REST: Makes a REST web service available to the application flow via Jersey.
- CXF: Makes a web service available to the application flow via CXF.
Transformers
Script Transformers
This type of transformer integrates a script to perform the transformation. One transformer is
provided for each of the four supported scripting languages, and a fifth, generic transformer
can implement a script written in any of the four languages.
Transformer: Description
- JmsMessage to Object (Enterprise Edition): Converts a JMS message into an object by extracting the message payload. Documentation: Common Transformer Configuration Fields.
- Object to String: Converts program code types into readable text strings; used for debugging. Collections are truncated at a maximum of 50 items; for larger payloads, a custom Java transformer must be used.
- Serializable to Byte Array: Converts a Java object to a byte array by serializing the object.
Content Transformers
This group of transformers modifies messages by adding to, deleting from, or converting a
message payload (or a message header).
Transformer: Description
- Expression: Evaluates one or more expressions within the message, then transforms the message according to the results of its evaluation.
SAP Transformers
These transformers change SAP objects (JCo functions or IDoc documents) into their XML
representations, or an XML representation into the corresponding SAP object.
Transformer: Description
- SAP-Object-to-XML (Enterprise Edition): Transforms a SAP object representing a JCo function or IDoc document into its XML representation. Documentation: SAP Connector.
Collectively, these four Message and Variable Transformers replace the single Message Properties
Transformer, which has been deprecated.
- Property: This transformer allows you to specify a property, which is typically applied to the message header. The "life span" of such a property extends from the moment it is created until the message is passed to an outbound endpoint.
- Session Variable: This transformer resembles the Variable transformer, except the session variable set by this transformer persists as long as the associated message remains within the Mule application, even though the message may be processed through multiple flows.
Custom Transformers
For detailed information on configuring standard and custom Transformers with an XML
editor, see Using Transformers.
Filters
Filters: Description
- And, Or, Not: These logic filters express simple logic. When required to express complex logic, these three filters can be used in combination with other filters.
- Custom: References a user-implemented filter class.
- Exception: Filters against an exception of a specified type.
- Expression: Filters against a range of expressions.
- Idempotent Message: Ensures that a flow receives only unique messages.
- Message: Applies specified criteria to a message to determine whether it should be processed.
- Message Property: Applies a regular expression pattern to the message payload to determine whether it should be processed.
- Payload: Evaluates the payload type of a message to determine whether it should be processed.
- Regex: Applies a regular expression pattern to determine whether the message should be processed.
- Schema Validation: Uses the JAXP libraries to validate a message against a schema.
- Wildcard: Matches string messages against a wildcard pattern.
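A sketch combining two of the filters above with an And filter; the flow name and expressions are illustrative.

```xml
<!-- Sketch: only non-empty String payloads reach the logger.
     and-filter wraps payload-type-filter and expression-filter. -->
<flow name="filteredFlow">
    <and-filter doc:name="And">
        <payload-type-filter expectedType="java.lang.String"/>
        <expression-filter expression="#[payload != null &amp;&amp; !payload.isEmpty()]"/>
    </and-filter>
    <logger message="Accepted: #[payload]" level="INFO"/>
</flow>
```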
Routers
Message Processor: Description
- All (Deprecated): Broadcasts the same message to multiple targets. Routes are invoked sequentially. All messages (if any) returned by the targets are aggregated together and form the response from this processor.
- APIkit Router: Based on an API RAML file, it routes arriving calls to the corresponding flow depending on the resource and method. See APIkit documentation.
- Async: Runs a chain of message processors in a separate thread.
- Choice: Evaluates a message against specified criteria, then sends it to the first message processor that matches those criteria.
- Collection Aggregator: Checks the group tag (known as a Correlation ID) attached to each message in a group to create a collection of messages which share the same Correlation ID.
- Collection Splitter: Accepts a collection of messages (or parts of messages), splits them into individual messages, then sends each new message, in sequence, to the next message processor in a flow.
- Custom Aggregator: Lets you write your own Java code to determine how messages are constructed and sent.
- Custom Processor: A custom-written message processor.
- First Successful: Iterates through its list of child message processors, routing a received message to each of them in order until one processes the message successfully. If none succeed, an exception is thrown.
- Idempotent Message Filter: Checks the unique message ID of the incoming message to ensure that only unique messages are received by the flow.
- Idempotent Secure Hash Message Filter: Calculates the hash of the message itself using a message digest algorithm to ensure that only unique messages are received by the flow. This approach provides a value with an infinitesimally small chance of a collision and can be used to filter message duplicates.
- Message Chunk Aggregator: Checks the group tag (Correlation ID) of each message in a collection, selects all the messages whose group tag matches the specified value, then combines those messages into a single message which is then sent to the next message processor in an application flow. This is particularly useful for re-assembling the segments of a long message that has been received as multiple messages, each one consisting of a segment of fixed length created and sent by the Message Chunk Splitter.
- Message Chunk Splitter: Sections a message into segments of a specified length, then sends each segment, in sequence, to the next message processor in a flow. This is particularly useful when the message recipient cannot accept messages longer than a specified length.
- Message Filter: Filters messages using a filter.
- Processor Chain: A linear chain of message processors which process a message in order.
- Recipient List: Sends a message to multiple connectors.
- Request Reply: Receives a message on one channel, allows the back-end process to be forked to invoke other flows asynchronously, and accepts the asynchronous result on another channel.
- Resequencer: Accepts a collection of messages, then uses the Sequence ID of each message to reorder those messages. It then sends the messages, in order of their new sequence, to the next message processor in an application flow.
- Round Robin: Iterates through a list of two or more message processors, sending successive messages to the next message processor on the list. When it reaches the end of the list, it jumps to the start of the list and resumes the iteration.
- Until Successful: Repeatedly attempts to process a message until successful.
- Scatter Gather: Sends a request message to multiple targets concurrently, collects the responses from all routes, and aggregates them into a single message.
- SOAP Router: Based on a WSDL file, it routes arriving calls to the corresponding flow depending on the resource and method. See APIkit for SOAP documentation.
- Splitter: Evaluates an expression which determines how it sections a message into two or more parts, then sends each of these message parts, in sequence, to the next message processor in an application flow.
- WireTap: Sends a message to an extra message processor as well as to the next message processor in the chain.
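As an example of one of the most common routers, a Choice router might be configured as follows; the MEL expressions and referenced flow names are hypothetical.

```xml
<!-- Sketch of a Choice router: the first matching when-route wins;
     otherwise the otherwise branch runs. -->
<choice doc:name="Choice">
    <when expression="#[payload.type == 'order']">
        <flow-ref name="processOrderFlow"/>
    </when>
    <when expression="#[payload.type == 'refund']">
        <flow-ref name="processRefundFlow"/>
    </when>
    <otherwise>
        <logger message="Unknown message type" level="WARN"/>
    </otherwise>
</choice>
```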
Validations Module
The Validations module provides an easy way to verify that the content of a message in your flow
matches a given set of criteria. The main advantage this has over using Filters is traceability, as filters
all raise identical exceptions. Validators, on the other hand, raise a ValidationException with a
meaningful message attached. You can optionally customize this message and even the type of
exception you want it to throw.
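A sketch of two validators from the module; the flow name, payload field names, and custom messages are hypothetical.

```xml
<!-- Sketch: each validator raises a ValidationException with a readable
     message when its check fails, rather than a generic filter exception. -->
<flow name="validateCustomerFlow">
    <validation:is-not-empty value="#[payload.name]" message="Customer name is required"/>
    <validation:is-email email="#[payload.email]" message="Invalid email address"/>
    <logger message="Validation passed" level="INFO"/>
</flow>
```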
Error Handling
Faults that occur within Mule are referred to as exceptions; when an activity in your
Mule instance fails, Mule throws an exception. To manage these exceptions, Mule
allows you to configure exception strategies.
From a high level perspective, errors that occur in Mule fall into one of two
categories: System Exceptions, and Messaging Exceptions.
System Exceptions
Mule invokes a System Exception Strategy when an exception is thrown at
the system level (that is, when no message is involved, exceptions are handled by
system exception strategies). For example, system exception strategies handle
exceptions that occur during application start-up or when a connection to an
external system fails.
Messaging Exceptions
Mule invokes a Messaging Exception Strategy whenever an exception is thrown
within a flow (i.e., whenever a message is involved, exceptions are handled
by messaging exception strategies).
When a message being processed through a Mule flow throws an exception, normal
flow execution stops and processing transfers to the message processor sequence
within the exception strategy.
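A sketch of a messaging exception strategy attached to a flow; the flow name, path, and referenced flow are hypothetical.

```xml
<!-- Sketch: on any messaging exception, the flow's remaining processors
     are skipped and the catch-exception-strategy block runs instead. -->
<flow name="resilientFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/work"/>
    <flow-ref name="riskyProcessingFlow"/>
    <catch-exception-strategy doc:name="Catch Exception Strategy">
        <logger message="Failed: #[exception.message]" level="ERROR"/>
        <set-payload value="#['An error occurred']"/>
    </catch-exception-strategy>
</flow>
```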
o A batch job is the top-level element in an application in which Mule processes a message
payload as a batch of records. The term batch job is inclusive of all four phases of processing:
Input, Load and Dispatch, Process, and On Complete.
o A batch job instance is an occurrence in a Mule application whenever a Mule flow executes a
batch job. Mule creates the batch job instance in the Load and Dispatch phase. Every batch job
instance is identified internally using a unique String known as batch job instance id.
Are there any message processors that you cannot use in batch processing?
o The only element you cannot use in batch processing is a request-response inbound
connector.
Phase Configuration
1 Input optional
2 Load and Dispatch implicit, not exposed in a Mule application
3 Process required
4 On Complete optional
Input
The first phase, Input, is an optional part of the batch job configuration and is designed
to trigger batch jobs via an inbound connector, and/or accommodate any
transformations or adjustments to a message payload before Mule begins processing it as a
batch.
During this phase, Mule performs no splitting or aggregation, creates no records, nor queues
anything for processing; Mule is not yet processing the message as a collection of records, it
only receives input and prepares the message payload for processing. In this phase, you
use message processors to act upon the message the same way you would in any other
context within a Mule application. The batch:input child element appears first inside
a batch:job element; indeed, it cannot exist anywhere else within the batch job – it can only
be first.
Load and Dispatch
In the second phase, Load and Dispatch, which is implicit and requires no configuration:
1. Mule sends the message payload through a collection splitter. This first step triggers
the creation of a new batch job instance.
2. Mule creates a persistent queue and associates it to the new batch job instance.
A batch job instance is an occurrence in a Mule application resulting from the
execution of a batch job in a Mule flow; it exists for as long as Mule processes each
record in a batch.
3. For each item generated by the splitter, Mule creates a record and stores it in the
queue. (This is an "all or nothing" activity – Mule either successfully generates and
queues a record for every item, or the whole message fails during this phase.)
4. Mule presents the batch job instance, with all its queued-up records, to the first batch
step for processing.
Process
In the third phase, Process, Mule begins asynchronous processing of the records in
the batch. Within this required phase, each record moves through the message
processors in the first batch step, then is sent back to the original queue while it
waits to be processed by the second batch step and so on until every record has
passed through every batch step. Only one queue exists and records are picked out
of it for each batch step, processed, and then sent back to it; each record keeps
track of what stages it has been processed through while it sits on this queue. Note
that a batch job instance does not wait for all its queued records to finish processing
in one batch step before pushing any of them to the next batch step. Queues are
persistent.
Mule persists a list of all records as they succeed or fail to process through each
batch step. If a record should fail to be processed by a message processor in a
batch step, Mule can simply continue processing the batch, skipping over the failed
record in each subsequent batch step. (Refer to the Handling Failures During Batch
Processing section for more detail.)
At the end of this phase, the batch job instance completes and, therefore, ceases to
exist.
Beyond simple processing of records, there are several things you can do with
records within batch steps:
o You can set record variables on records and pass them from step to step.
o You can apply filters by adding accept expressions within each batch step to
prevent the step from processing certain records; for example, you can set a filter to
prevent a step from processing any records which failed processing in the preceding
step.
o You can commit records in groups, sending them as bulk upserts to external
sources or services.
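These options might be sketched as follows; the step names, accept expression, and the Salesforce upsert operation inside the commit are hypothetical, and the batch:commit size of 100 is arbitrary.

```xml
<!-- Sketch: Step2 skips records that failed in earlier steps via its
     accept-policy; Step3 filters with an accept-expression and upserts
     records in groups of 100 through batch:commit. -->
<batch:step name="Step2" accept-policy="NO_FAILURES">
    <logger message="Record OK so far: #[payload]" level="INFO"/>
</batch:step>
<batch:step name="Step3" accept-expression="#[payload != null]">
    <batch:commit size="100" doc:name="Batch Commit">
        <sfdc:upsert/>
    </batch:commit>
</batch:step>
```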
Note that details in the code snippet are abbreviated so as to highlight batch phases,
jobs and steps. See Complete Code Example for more detail.
<batch:job name="Batch3">
    <batch:input>
        <poll doc:name="Poll">
            <sfdc:authorize/>
        </poll>
        <set-variable/>
    </batch:input>
    <batch:process-records>
        <batch:step name="Step1">
            <batch:record-variable-transformer/>
            <data-mapper:transform/>
        </batch:step>
        <batch:step name="Step2">
            <logger/>
            <http:request/>
        </batch:step>
    </batch:process-records>
</batch:job>
On Complete
During the fourth phase, On Complete, you can optionally configure Mule to create
a report or summary of the records it processed for the particular batch job instance.
This phase exists to give system administrators and developers some insight into
which records failed so as to address any issues that might exist with the input data.
While batch:input can only exist as the first child element within
the batch:job element, batch:on-complete can only exist as the final child element.
Note that details in the code snippet are abbreviated so as to highlight batch phases,
jobs and steps. See Complete Code Example for more detail.
<batch:job name="Batch3">
    <batch:input>
        <poll doc:name="Poll">
            <sfdc:authorize/>
        </poll>
        <set-variable/>
    </batch:input>
    <batch:process-records>
        <batch:step name="Step1">
            <batch:record-variable-transformer/>
            <data-mapper:transform/>
        </batch:step>
        <batch:step name="Step2">
            <logger/>
            <http:request/>
        </batch:step>
    </batch:process-records>
    <batch:on-complete>
        <logger/>
    </batch:on-complete>
</batch:job>
After Mule executes the entire batch job, the output becomes a batch job result
object (BatchJobResult). Because Mule processes a batch job as an asynchronous,
one-way flow, the results of batch processing do not feed back into the flow which may
have triggered it, nor do the results return as a response to a caller (indeed, any
message source which feeds data into a batch job MUST be one-way, not request-
response). Instead, you have two options for working with the output:
o Create a report in the On Complete phase, using MEL expressions to capture the
number of failed records and successfully processed records, and in which step any
errors might have occurred.
o Reference the batch job result object elsewhere in the Mule application to capture
and use batch metadata, such as the number of records which failed to process in a
particular batch job instance.
If you leave the On Complete phase empty (i.e. you do not set any message
processors within the phase) and do not reference the batch job result object
elsewhere in your application, the batch job simply completes, whether it failed
or succeeded. Good practice dictates, therefore, that you configure some mechanism
for reporting on failed or successful records so as to facilitate further action where
required. Refer to Batch Processing Reference for a list of available MEL
expressions pertaining to batch processing.
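For example, the On Complete phase can log a summary report using the batch MEL expressions, since the payload in that phase is the BatchJobResult object (the logger message wording below is illustrative):

```xml
<batch:on-complete>
    <!-- payload here is the BatchJobResult for this batch job instance -->
    <logger level="INFO"
            message="Loaded: #[payload.loadedRecords], Successful: #[payload.successfulRecords], Failed: #[payload.failedRecords]"/>
</batch:on-complete>
```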
Transactional Processing
o Transactional processing handles a complex event (such as the processing of an individual
message by a Mule application) as a distinct, individual event that either succeeds entirely or fails
entirely, and never returns an intermediate or indeterminate outcome.
o Even if only one of the many message processing events in a Mule flow fails, the whole flow fails.
The application can then “rollback” (i.e. undo) all the completed message processing steps so
that, essentially, it’s as though no processing has occurred at all on the message. Sometimes, in
addition to rolling back all the steps in the original, failed processing instance, the application can
recover the original message and reprocess it from the beginning. Since all traces of the
previous, failed attempt have been erased, a single message ultimately produces only a single
set of results.
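In Mule 3 XML, this behaviour can be configured on transactional endpoints such as JMS; the flow, connector and queue names below are assumptions for illustration:

```xml
<!-- Sketch: a flow whose processing is wrapped in a JMS transaction -->
<flow name="transactionalFlow">
    <jms:inbound-endpoint queue="in" connector-ref="jmsConnector">
        <jms:transaction action="ALWAYS_BEGIN"/>
    </jms:inbound-endpoint>
    <!-- If any processor below fails, the transaction rolls back and the
         original message is returned to the queue for reprocessing -->
    <jms:outbound-endpoint queue="out" connector-ref="jmsConnector">
        <jms:transaction action="ALWAYS_JOIN"/>
    </jms:outbound-endpoint>
</flow>
```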
DataWeave
The DataWeave language is a simple, powerful tool used to query and transform
data inside of Mule. It can be used, for example, to transform messages between
data formats such as XML, JSON and CSV.
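A minimal DataWeave 1.0 transform as embedded in Mule 3 XML, mapping an input payload to JSON (the field names are illustrative assumptions):

```xml
<dw:transform-message doc:name="Transform Message">
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
    fullName: payload.firstName ++ " " ++ payload.lastName
}]]></dw:set-payload>
</dw:transform-message>
```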
API-led Connectivity
API-led connectivity provides an approach for connecting and exposing assets. With this
approach, rather than connecting things point-to-point, every asset becomes a managed API: a
modern API that is discoverable through self-service without losing control.
The APIs used in an API-led approach to connectivity fall into three categories:
System APIs – these usually access the core systems of record and provide a means of
insulating the user from the complexity of, or any changes to, the underlying systems. Once built,
many users can access data without any need to learn the underlying systems, and can reuse
these APIs in multiple projects.
Process APIs – These APIs interact with and shape data within a single system or across
systems (breaking down data silos) and are created without dependence on the source
systems from which that data originates, or on the target channels through which that data is
delivered.
Experience APIs – Experience APIs are the means by which data can be reconfigured so that it
is most easily consumed by its intended audience, all from a common data source, rather than
setting up separate point-to-point integrations for each channel. An Experience API is usually
created with API-first design principles, with the specific user experience in mind.
1. Experience APIs: Experience APIs are utilised for Mobile Apps and Web Apps. These are used to
avoid setting up separate point-to-point integrations for each channel; instead they create a common
data source, where data can be reconfigured based on the channel looking to access it,
without making any change in the original database servers. In simple terms, they enable
displaying the same data in multiple formats based on who is asking for it.
2. System APIs: Legacy systems, SaaS apps, mainframes and FTP servers are some of the core
underlying systems of any IT architecture. System APIs hide the complexity of an IT infrastructure
from the users. These types of APIs are the ones enabling loose coupling by providing a platform to
access systems of record, exposing the data in each record in a canonical format. This avoids any
direct dependence of consumers on the underlying systems.
3. Process APIs: Process APIs are implemented when a business is looking to scale up the current IT
infrastructure, either in terms of onboarding new systems as a result of expansion into new
geographies or integrating different legacy systems of an already existing vast IT ecosystem. Process
APIs enable developers to create independent data source points as well as independent target
channels to deliver the data. In simple terms, Process APIs take in the data coming from the System
layer, without any need to interfere with the legacy systems; apply business logic to it;
transform it as required; and orchestrate the data as demanded by the Experience layer, thus in
turn satisfying the needs of each user both geographically and demographically.
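The layering described above can be sketched as an experience-layer flow that simply delegates to a process API and re-shapes its canonical response for one channel; all flow names, configs and paths below are assumptions:

```xml
<!-- Experience layer: mobile-facing endpoint delegating to a process API -->
<flow name="mobileCustomerExperienceFlow">
    <http:listener config-ref="HTTP_Listener_Configuration"
                   path="/mobile/customers" doc:name="HTTP"/>
    <!-- call the process-layer API; host and path are illustrative -->
    <http:request config-ref="Process_API_Config"
                  path="/customers" method="GET"/>
    <!-- re-shape the canonical response for the mobile channel -->
    <dw:transform-message doc:name="Shape for mobile"/>
</flow>
```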