
01.06.01 - On-Premise Deployment View


Most often, a warehouse management system and the material flow control are installed in a datacenter close to the warehouse and the hardware, either on a single server or on multiple servers. This on-premise deployment has the advantage of increased maintainability, operability and throughput.

Simple Single Box Deployment

The simplest way to deploy OpenWMS.org is the Single Box Deployment: all software components are installed on one single physical or virtual server. All processes run and communicate in-memory and do not require external network access. In the simplest setup, each microservice runs as a single instance, installed as a Unix daemon or an MS Windows service. Scalability and elasticity are not a concern in a typical warehouse project, so there is no need to scale out processes on demand.
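As a sketch of how one microservice could be installed as a Unix daemon, a minimal systemd unit might look like the following. The service name, user, and JAR path are assumptions for illustration, not taken from the project:

```
# /etc/systemd/system/openwms-tms.service -- hypothetical unit for one microservice
[Unit]
Description=OpenWMS.org TMS microservice (example)
After=network.target rabbitmq-server.service

[Service]
# User, working directory and JAR name are placeholders
User=openwms
ExecStart=/usr/bin/java -jar /opt/openwms/tms-service.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Such a unit would be activated with `systemctl enable --now openwms-tms`; `Restart=on-failure` gives a basic level of process supervision even without a container scheduler.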

Figure 7. Simplest deployment on a single server

The services could also be run as Docker containers with Docker Compose to increase operability and reliability, but this is not a requirement.
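A Compose setup for the single box could be sketched as below. The image and service names for the OpenWMS.org microservices are assumptions; only the RabbitMQ and PostgreSQL images are standard Docker Hub images:

```
# docker-compose.yml -- minimal single-box sketch; microservice image names are placeholders
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: openwms
      POSTGRES_PASSWORD: changeme   # example only, use a secret in practice
  tms-service:                      # hypothetical OpenWMS.org microservice
    image: openwms/tms-service:latest
    depends_on:
      - rabbitmq
      - db
```

Started with `docker compose up -d`, this keeps all processes on one machine while adding container-level restart and isolation.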

Motivation
  • Simple project setup
  • One OpenWMS.org system for one warehouse project, with no need to combine it with other OpenWMS.org instances
  • Independent IT systems operated by the warehouse management team
  • No need to scale out; the load and request frequency are known for the present and the future
Quality and/or Performance Features
  • Low latency, because all components communicate in-memory on one machine
  • High level of operability
  • With multiple warehouse projects per site, maintainability decreases

Mapping of Building Blocks to Infrastructure

Component             Responsibility
Server                Either one physical or virtual server instance for the whole warehouse project
RabbitMQ Server       A RabbitMQ installation, running as an OS service
Virtual Host          Inside the RabbitMQ server, a virtual host instance created for the warehouse project
Database Server       A database server installation
Database Instance     An instance dedicated to the warehouse project that contains all schemas and tables (the use of schemas is optional)
Microservice Network  A logical group of microservices. Has no
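The per-project virtual host from the table above can be created with standard rabbitmqctl commands. The vhost, user, and password below are placeholders:

```
# Create a dedicated RabbitMQ virtual host for the warehouse project (names are examples)
rabbitmqctl add_vhost warehouse1
rabbitmqctl add_user openwms s3cret
# Grant the service user configure/write/read permissions on that vhost only
rabbitmqctl set_permissions -p warehouse1 openwms ".*" ".*" ".*"
```

Scoping permissions to the virtual host keeps several warehouse projects on one broker cleanly separated.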

Multi Box Deployment

Similar to the one-box deployment, OpenWMS.org can also be split up and deployed on multiple machines. For this scenario we propose running the microservices in Docker containers and letting the container scheduling infrastructure distribute the instances as needed, with Docker Swarm as the container scheduling runtime. If customers already have another scheduler in place, such as Kubernetes or OpenShift, this works the same way. The main point is that OpenWMS.org does not need to run on Kubernetes or any PaaS solution; the basic requirements of this scenario are met with Docker Swarm.

Figure 8. Deployment distributed on multiple servers

The benefit of a container scheduler in a distributed environment is tremendous: we do not need to take care of low-level infrastructure details ourselves and can rely on proven scheduling technologies.
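The Swarm setup itself needs only a handful of standard Docker commands; the stack name and Compose file are examples:

```
# On the first server: initialize the Swarm (this node becomes a manager)
docker swarm init

# On the second server: join the cluster using the token printed by the init command
docker swarm join --token <worker-token> <manager-ip>:2377

# Back on the manager: deploy the services from a Compose file as a stack;
# Swarm distributes and restarts the containers across the nodes
docker stack deploy -c docker-compose.yml openwms
```

From here on, `docker service ls` and `docker service scale` manage instance counts without touching the individual machines.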

Motivation
  • A container scheduler is already in place
  • A robust and more reliable setup is required, with process restarts and monitoring
  • Load can be distributed across multiple servers
  • Processes can run in multiple instances
  • One part of the system (e.g. TMS) is independent of the availability of other parts
Quality and/or Performance Features
  • Higher latency, due to network communication between servers
  • High level of reliability and failure tolerance
  • High level of operability
  • Advantages when the project covers multiple warehouses
  • A distributed system is more complex, which decreases maintainability

Mapping of Building Blocks to Infrastructure

In addition to the components of the Single Box Deployment:

Component  Responsibility
Server 1   One managed Docker Swarm node (could also be a Kubernetes node)
Server 2   A second managed Docker Swarm node (could also be a Kubernetes node)

Both Docker Swarm nodes use several ports for cluster and container management. The database and the RabbitMQ broker are made transparently available to both nodes and to the microservices on each node.
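The ports in question are the ones documented for Docker Swarm mode; opened here with ufw as an example firewall (any firewall tool works the same way):

```
# Docker Swarm cluster ports, per the Docker swarm-mode documentation
ufw allow 2377/tcp   # cluster management communication (manager)
ufw allow 7946/tcp   # node-to-node communication
ufw allow 7946/udp
ufw allow 4789/udp   # overlay network (VXLAN) traffic between containers
```

These rules only need to permit traffic between the Swarm nodes themselves, not from the outside.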
