01.03 - Solution Strategy
This chapter lists all common design principles valid for all OpenWMS.org microservices. Each microservice implementation may add specific rules and principles but overall rules are captured here.
Microservice Architecture
| Description | Originally coming from a more technically organized application layout, in 2016 the step was taken towards a microservice architecture. Each microservice focuses on its own functional use cases, business functions and business value. The whole OpenWMS.org system encompasses around 15 different services that serve different purposes and that can be composed differently in end-customer projects. |
|---|---|
| Justification | OSGi was a great technology to build modular applications. One of the most important requirements on OpenWMS.org is the ability to extend the system in a flexible way and to get changes into production without downtime. The best successive architecture style after OSGi is the microservice style: it helps us to deploy changes easily and without downtime, and to implement new features quickly. |
| Addressing | Modularity, Flexibility, Extendability, Changeability |
Database Table Separation
| Description | Each microservice has its own set of database tables, either relational database tables or NoSQL collections/tables. Tables are differentiated by a naming pattern: each table has a prefix that points to the owning microservice. All tables can be deployed into a single database, or into multiple databases or database schemas; this is a question of distribution and of what needs to be deployed where. Relationships between database tables of different microservices are forbidden. |
|---|---|
| Justification | It is absolutely fine to use one database for all the tables used in a system. It may also make sense to separate between database schemas, for example to separate WMS tables from TMS or BPMN tables. With this approach we do not get in conflict with the strict data separation enforced by the microservice architecture style and still keep the flexibility of different deployment forms. |
| Addressing | Flexibility, Performance |
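The prefix-based ownership rule above can be sketched in a few lines. This is an illustrative sketch only; the service names and prefixes (e.g. `WMS_REC_`) are hypothetical and not the project's actual naming scheme.

```python
# Hypothetical sketch of per-service table prefixes; the prefixes and
# service names below are illustrative, not OpenWMS.org's real ones.
SERVICE_PREFIXES = {
    "receiving": "WMS_REC_",
    "inventory": "WMS_INV_",
    "transportation": "TMS_TRP_",
}

def table_name(service: str, entity: str) -> str:
    """Build a table name owned by exactly one microservice."""
    return SERVICE_PREFIXES[service] + entity.upper()

def owning_service(table: str) -> str:
    """Resolve the owning microservice from a table name prefix."""
    for service, prefix in SERVICE_PREFIXES.items():
        if table.startswith(prefix):
            return service
    raise ValueError(f"No owning service for table {table!r}")
```

Because ownership is encoded in the name, all tables can live in one database, in separate schemas, or in separate databases without ambiguity about which service may touch them.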
Event-Driven Architecture (EDA)
Event-Driven Architecture (EDA) is a software design pattern where system components communicate and react to events (changes in state or significant actions) rather than making direct synchronous requests. Beside events (that express what happened in a system) there might also be commands, to express or request an action from a system component.
Nowadays there are two major architecture categories:

- Traditional messaging with an Enterprise Message Broker (e.g. RabbitMQ, AmazonMQ, Azure Service Bus)
- Log-based messaging with a (Distributed) Event-Streaming Platform (e.g. Apache Kafka, Amazon MSK)

The most obvious difference is that in traditional messaging systems the message is removed from the broker as soon as it is consumed, whereas in event-streaming platforms messages can be replayed.
Architectural Styles in EDA
Basically three high-level architectural styles exist, as explained in the following. The data sent between producer and consumer can be subdivided into two types: Thin events and Fat events. The former only carries a unique data identifier and, if required, some metadata (e.g. type, date), whereas the latter transfers the full data representation as part of the event.
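The two event types can be sketched as simple data structures. This is an illustrative sketch; the class and field names are hypothetical, not part of any OpenWMS.org API.

```python
# Illustrative sketch of Thin vs. Fat events (all names hypothetical).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ThinEvent:
    # Carries only the identifier plus minimal metadata; the consumer
    # must call the producer afterwards (e.g. via REST) for the data.
    entity_id: str
    event_type: str
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class FatEvent:
    # Carries the full data representation; no follow-up call needed.
    entity_id: str
    event_type: str
    payload: dict
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

The trade-off: Thin events keep messages small but couple the consumer to the producer's request interface; Fat events decouple the consumer but make messages larger and duplicate data.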
Queue & Pub/Sub
In the Queue and Pub/Sub architectural styles it is generally preferred to deliver Thin events. Consumers must query the producer to get the actual data instead of receiving the data as part of the event.
Queue (Point-to-Point Messaging)

A queue delivers each message to only one consumer.

| Characteristics | |
|---|---|
| Example Use Cases | |
Pub/Sub

In Pub/Sub, a message is sent to all subscribed consumers; every subscriber gets its own copy of the message.

| Characteristics | |
|---|---|
| Example Use Cases | |
Both styles have the following characteristics:

- Event contains minimal data (ID, type, date)
- Requires synchronous follow-up calls (requires existing request interfaces, e.g. a REST API)
- Lightweight messages
- Coupling between services due to callbacks
| Queue | Pub/Sub |
|---|---|
| Fat events hurt less here because: <br> Cons: | Only Thin events proposed, because of: |
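The difference in delivery semantics between the two styles can be shown with a minimal in-memory sketch. This is not a real broker, just an illustration of who receives what; all names are hypothetical.

```python
# Minimal in-memory sketch of the two delivery semantics (no real broker).
from collections import deque

class Queue:
    """Point-to-point: each message is delivered to exactly one consumer."""
    def __init__(self):
        self._messages = deque()

    def send(self, msg):
        self._messages.append(msg)

    def receive(self):
        # The message is removed on consumption, like in a classic broker.
        return self._messages.popleft() if self._messages else None

class Topic:
    """Pub/Sub: every subscriber gets its own copy of each message."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self):
        inbox = deque()
        self._subscribers.append(inbox)
        return inbox

    def publish(self, msg):
        for inbox in self._subscribers:
            inbox.append(msg)
```

With the `Queue`, a second `receive()` after the message was taken returns nothing; with the `Topic`, publishing once fills every subscriber's inbox independently.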
Event Streaming
Events are stored in ordered event logs that can be replayed by consumers repeatedly. Consumers mutate their datastores based on these events. In contrast to the first architectural style, the event contains all required data (Fat event), so consumers do NOT need to call the producer subsequently.
| Characteristics | |
|---|---|
| Example Use Cases | |
The downsides of log-based messaging systems, compared to traditional message brokers, are:

- Complex to set up, operate and maintain
- Querying a large amount of data on the event log is not efficient compared to a local datastore or cache
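The core mechanics of an event log, append-only storage, consumer-tracked offsets, and replay, can be sketched as follows. This is a simplified single-partition illustration with hypothetical names, not how Kafka is implemented.

```python
# Sketch of a log-based stream: an append-only log that consumers read
# via their own offset, so events can be replayed at any time.
class EventLog:
    def __init__(self):
        self._log = []                   # ordered, append-only

    def append(self, event):
        self._log.append(event)
        return len(self._log) - 1        # offset of the stored event

    def read_from(self, offset):
        # Replay is simply reading the log again from an older offset.
        return self._log[offset:]

class Consumer:
    def __init__(self, log):
        self._log = log
        self.offset = 0
        self.store = []                  # local datastore mutated from events

    def poll(self):
        for event in self._log.read_from(self.offset):
            self.store.append(event)     # mutate local state from the event
            self.offset += 1
```

Unlike a traditional broker, consuming does not remove anything: a new consumer starting at offset 0 rebuilds the same state from the same log.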
Event Sourcing
Event Sourcing goes even a step further. System state is derived from the event history. The events are the source of truth and the consumers' datastores are not; those can be eliminated or serve as local, temporary stores only. In this type of architecture the event log is seen as the “central nervous system” of the application (or even the organization) that stores all data changes over time in the event log.
| Characteristics | |
|---|---|
| Example Use Cases | |
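Deriving state from the event history amounts to folding over the log. The sketch below uses hypothetical warehouse-flavored event types to illustrate the idea; it is not an OpenWMS.org API.

```python
# Sketch of Event Sourcing: the event history is the source of truth,
# and current state is derived by folding over it (names hypothetical).
def apply(state, event):
    """Apply a single event to the state and return the new state."""
    sku, qty = event["sku"], event["qty"]
    if event["type"] == "StockReceived":
        state[sku] = state.get(sku, 0) + qty
    elif event["type"] == "StockShipped":
        state[sku] = state.get(sku, 0) - qty
    return state

def replay(events):
    """Rebuild the current state purely from the event log."""
    state = {}
    for event in events:
        state = apply(state, event)
    return state
```

Because `replay` is deterministic, any local datastore is just a disposable projection: it can be deleted and rebuilt from the log at any time, which is exactly why the log can serve as the "central nervous system" of the application.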
Supported Services on Azure
Technical summary: Service Bus moves work. Event Hubs moves data.
| | Azure Service Bus (Message Broker: Queue & Pub/Sub) | Azure Event Hubs (Event Streaming Platform) |
|---|---|---|
| Designed for | | |
| When to use | | |
The one does not replace the other. Here is a direct feature comparison:
| Feature | Azure Service Bus | Azure Event Hubs |
|---|---|---|
| Primary Purpose | Enterprise messaging | Big data event ingestion |
| Category | Message Broker | Event Streaming Platform |
| Messaging Model | Queue + Topic/Subscription | Partitioned event stream (log) |
| Typical Message Size | Small/medium business messages | High-volume telemetry/events |
| Throughput | Thousands/sec | Millions/sec |
| Latency | Low | Very low |
| Retention | Message removed after consumption | Time-based retention (1–90 days or more with Capture) |
| Replay Events | ❌ No | ✅ Yes |
| Ordering | FIFO via sessions | Ordering per partition |
| Delivery Guarantee | At-least-once, exactly-once workflows possible | At-least-once |
| Transactions | ✅ Supported | ❌ Limited |
| Dead Letter Queue | ✅ Built-in | ❌ Not native |
| Message Locking | ✅ Yes | ❌ No |
| Backpressure Handling | Built-in | Consumer-managed |
| Scaling Model | Broker-managed | Partition-based horizontal scale |
| Kafka Protocol Support | ❌ No | ✅ Yes (native Kafka API compatible) |