Ordering Stack is an e-commerce and POS platform for restaurant chains.
It consists of backend services and frontend applications. There are services for authentication, products and menus, venues, payments, orders and others, each providing business functionality for a particular business domain of the system. Each one exposes its own REST API. To access any of these services, clients must first acquire an access token from the auth service. With this token, clients can call API endpoints, providing the authentication credentials in the request header.
Besides calling the API, clients also receive asynchronous messages from the backend via WebSockets. These messages contain the state of orders and other notifications.
The general idea of working with the backend is to call API methods and receive state updates via WebSocket. Client applications must be prepared to work asynchronously.
- non-blocking & asynchronous
  - user commands and state updates travel through two independent channels: HTTP requests and WebSockets
- it works like a multiplayer online game (MMOG)
  - all state is kept on the server side
  - users send commands to the server without waiting for the actual action to happen (they just receive confirmation that the command was queued successfully)
  - updated state is delivered asynchronously
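The non-blocking flow above can be sketched as a tiny in-memory simulation. All names here (`OrderServiceSim`, `Command`, etc.) are illustrative stand-ins, not the actual Ordering Stack API:

```typescript
// Illustrative sketch of the command / ack / state-update pattern described above.
// OrderServiceSim and its types are hypothetical, not the real Ordering Stack API.
type Command = { orderId: string; action: string };
type OrderState = { orderId: string; status: string };
type StateListener = (state: OrderState) => void;

class OrderServiceSim {
  private listeners: StateListener[] = [];

  // Plays the role of a WebSocket subscription: updates arrive asynchronously.
  onStateUpdate(listener: StateListener): void {
    this.listeners.push(listener);
  }

  // Plays the role of an HTTP POST: returns immediately with a "queued"
  // confirmation; the actual processing happens later.
  submit(cmd: Command): { queued: boolean } {
    setTimeout(() => this.process(cmd), 0);
    return { queued: true };
  }

  private process(cmd: Command): void {
    const state: OrderState = { orderId: cmd.orderId, status: `${cmd.action}:done` };
    this.listeners.forEach((l) => l(state)); // push the new state to subscribers
  }
}
```

A client submits a command, immediately gets the "queued" acknowledgment, and only later receives the resulting order state through its subscription.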
A description of using the APIs and WebSockets can be found here: https://docs.orderingstack.com/api-examples/
Currently all our client applications are implemented as React.js applications.
How the backend is done
During discussions on the Ordering Stack architecture, we wanted to follow DDD (Domain-Driven Design) principles and enclose individual business areas in separate (micro)services. Each domain service should also be loosely coupled with other services, not calling their APIs too often, especially when fetching data from outside its bounded context.
Another goal was to implement an asynchronous order flow, with commands coming in through the ordering API and notifications sent back via WebSockets. We had to choreograph multiple services as subsequent stages of order processing were completed. For that we used an event streaming platform.
It could be visualized like this:
Please note that the message broker (event streaming) platform is the central backbone of the whole system. Firstly, it helps exchange data between services, which avoids excessive direct interaction between them. Each microservice can maintain its own copy of data, including data owned by other services, and update it as the appropriate events arrive. Secondly, it makes it possible to choreograph services: they react to incoming events and publish further events that activate other services. This approach makes the whole solution much more agile, performant (non-blocking), fault tolerant and robust. As an example of the benefits, we were able to easily add support for webhooks in the order processing flow.
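The choreography idea can be illustrated with a minimal in-memory event bus. Event types and service roles here are made up for the example; the real system uses Kafka topics:

```typescript
// Illustrative sketch of event choreography: services react to events and
// publish further events, with no direct service-to-service calls.
type Event = { type: string; orderId: string };
type Handler = (e: Event) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(type: string, h: Handler): void {
    this.handlers.set(type, [...(this.handlers.get(type) ?? []), h]);
  }

  publish(e: Event): void {
    (this.handlers.get(e.type) ?? []).forEach((h) => h(e));
  }
}

const bus = new EventBus();
const log: string[] = [];

// A "menu" service validates the order, then publishes a follow-up event...
bus.subscribe("ORDER_PLACED", (e) => {
  log.push(`menu: validated ${e.orderId}`);
  bus.publish({ type: "ORDER_VALIDATED", orderId: e.orderId });
});
// ...which activates a "loyalty" service, without either service knowing the other.
bus.subscribe("ORDER_VALIDATED", (e) => log.push(`loyalty: points for ${e.orderId}`));

bus.publish({ type: "ORDER_PLACED", orderId: "order-42" });
```

Adding a new reaction to the flow (such as the webhook support mentioned above) only requires a new subscriber; no existing service has to change.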
From an implementation perspective we used the following technologies:
- microservices are built with Java (Spring Boot) or Node.js,
- MongoDB is used when a service needs storage (services cannot share databases),
- microservices use service discovery (Eureka) to find other services,
- each microservice runs in a Docker container (Docker Compose is used for container orchestration),
- Apache Kafka is used as the event streaming platform.
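Put together, a Docker Compose file for such a setup might look roughly like the following sketch. The image names and ports for our own services are hypothetical, shown only to connect the pieces listed above:

```yaml
# Illustrative sketch only - service names and images are hypothetical,
# not the actual Ordering Stack deployment.
services:
  kafka:
    image: apache/kafka:3.7.0           # event streaming backbone
  eureka:
    image: example/eureka-server:latest # hypothetical service discovery image
  order-service:
    image: example/order-service:latest # hypothetical domain microservice
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
      EUREKA_URL: http://eureka:8761/eureka
    depends_on: [kafka, eureka]
  order-mongo:
    image: mongo:7                      # per-service database, never shared
```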
Ordering Stack is a cloud SaaS platform and can be used by many customers (tenants). Each microservice in the system supports multi-tenancy. Tenant identifiers are always encoded in the access token. If a service has storage, the data for each tenant is kept in separate collections. Every tenant also has its own topics for order processing in the event streaming platform (Kafka).
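The tenant scoping described above can be sketched as follows. The claim name (`tenant`) and the collection/topic naming conventions are assumptions for illustration, not the actual Ordering Stack scheme:

```typescript
// Sketch of tenant-scoped resource naming. The "tenant" claim name and the
// naming conventions below are illustrative assumptions.

// Read the tenant id from the (unverified) payload of a JWT access token.
function tenantFromToken(jwt: string): string {
  const payload = JSON.parse(Buffer.from(jwt.split(".")[1], "base64url").toString());
  return payload.tenant;
}

// Hypothetical convention: a separate MongoDB collection per tenant...
function tenantCollection(tenantId: string, entity: string): string {
  return `${tenantId}_${entity}`;
}

// ...and a separate Kafka topic per tenant for order processing.
function tenantTopic(tenantId: string): string {
  return `orders-${tenantId}`;
}
```

In production the token signature would of course be verified before the tenant claim is trusted; the decoding here only shows where the identifier lives.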
Food ordering has its own characteristic usage patterns across the week and across the day. Typically there are peaks of orders during lunch time and in the evenings. Monday is usually weak, while Friday, Saturday and Sunday are the strongest days. Additionally, weather conditions can strongly influence orders during a day: heavy rain or snow can significantly increase the numbers. Last but not least, marketing events like precisely timed advertisements or special days like “International Pizza Day” can trigger a huge surge of new orders.
Considering all of the above, the ordering platform must be prepared for heavy load. Therefore, when designing Ordering Stack we put performance and scalability high on our list of priorities. We wanted to avoid any menu and complex product calculation during request processing, any blocking responses caused by basket discount recalculation, and so on. But how can we do this when some logic can be placed outside of the core system, like webhooks, which can be assigned freely by the tenant in its configuration? We have no control over the performance of such elements! The answer is asynchronicity. All order-related actions, like adding a product to the basket, are sent to the backend as commands via REST requests. They are put into the Kafka stream for further processing by multiple microservices in different domains like menu verification, order processing, discounts, loyalty, etc., and when a new order state is ready, it is sent back to the ordering application via WebSocket. Additionally, all external calls use precise timeouts and circuit breaker patterns for better reliability.
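A minimal circuit breaker for calls to such uncontrolled external elements can be sketched like this. It is a simplified illustration of the pattern, not the implementation we run (on the Java side one would typically reach for a library such as Resilience4j):

```typescript
// Minimal circuit breaker sketch: after `threshold` consecutive failures the
// breaker opens and further calls fail fast instead of waiting on a slow
// external dependency (e.g. a tenant-configured webhook).
class CircuitBreaker {
  private failures = 0;

  constructor(private threshold: number) {}

  get open(): boolean {
    return this.failures >= this.threshold;
  }

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.open) throw new Error("circuit open: failing fast");
    try {
      const result = await fn();
      this.failures = 0; // a success resets the failure counter
      return result;
    } catch (e) {
      this.failures++;
      throw e;
    }
  }
}
```

A production breaker would also time out individual calls and re-close after a cool-down period; the sketch only shows the fail-fast core of the pattern.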
It is important to mention that all microservices attached to the Kafka stream are stateless and can be scaled horizontally in line with Kafka's architecture. Kafka guarantees the order of event processing even with multiple processing instances. This is achieved by putting all events connected to one order into the same topic partition, and each partition can only be processed by a single consumer thread (this is by design).
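The partition assignment idea can be shown with a toy key-based partitioner. Real Kafka uses a murmur2 hash of the record key; the simple hash below only demonstrates the principle that every event carrying the same order id deterministically lands on the same partition:

```typescript
// Toy version of key-based partitioning: hash the order id and take it modulo
// the partition count. Kafka's real default partitioner uses murmur2, but the
// guarantee is the same - one key always maps to one partition, so all events
// for an order are consumed in sequence by a single consumer.
function partitionFor(orderId: string, numPartitions: number): number {
  let hash = 0;
  for (const ch of orderId) hash = (hash * 31 + ch.charCodeAt(0)) | 0;
  return Math.abs(hash) % numPartitions;
}
```

Because the mapping is deterministic, "add product" and "pay" events for the same order can never race each other across consumer instances.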
All these techniques make Ordering Stack a high-performance, high-availability system, capable of handling even huge peaks.
Test and production environments
We use battle-proven hosting and cloud providers such as 3S and Microsoft Azure for hosting our solution. Currently all servers are located in the European Union.
Most of the system artefacts are containerized with Docker; we use GitLab for CI/CD and other services for monitoring all our environments.