Reactive Design Patterns — Traditional Web Server Application Design

Traditional design

To understand why Reactive systems are better than the traditional alternatives, it is useful to examine a traditional implementation of an image service. Even though it has a cache, a connection pool, and even a fallback image for when things go wrong, it can fail badly when the system is stressed. Understanding how and why it fails requires looking beyond the single-thread illusion. Once you understand the failures, you will see that even within the confines of a traditional framework, you can improve the image service with a simplified version of the Managed Queue pattern (covered in chapter 16).

public interface ImageRepository {
  Image find(String key);
  void save(String key, Image image);
}

public ImageRepository cacheRepo;
public ImageRepository poolManagedRepo;

public Image getImage(String key) {
  // Check the cache first.
  Image image = cacheRepo.find(key);
  if (image != null) {
    return image;
  } else {
    // Cache miss: query the pool-managed database repository.
    image = poolManagedRepo.find(key);
    if (image != null) {
      cacheRepo.save(key, image);
      return image;
    } else {
      return fallback;
    }
  }
}
Understanding the traditional approach

On a cache hit, the request thread can provide a response immediately. On a cache miss, the internal implementation of the repository needs to obtain a connection from the pool. The database query itself may be performed on the request thread, or the connection pool may use a separate thread pool. Either way, the request thread is obliged to wait for the database query to complete or time out before it can fulfill the request.

When you are tuning the performance of a system such as this, one of the key parameters is the ratio of request threads to connection-pool entries. There is not much point in making the connection pool larger than the request-thread pool. If the two are the same size and all the request threads are waiting on database queries, the system may find itself temporarily with nothing to do other than wait for the database to respond. That would be unfortunate if the next several requests could have been served from the cache: instead of being handled immediately, they will have to wait for an unrelated database query to complete so that a request thread becomes available. On the other hand, setting the connection pool too small makes it a bottleneck, and the system risks being limited by request threads stuck waiting for a connection.

Analyzing latency with a shared resource

The simplistic implementation can be analyzed by first examining one extreme: an effectively infinite number of request threads sharing a fixed number of database connections. Assume each database query takes a consistent time W to complete, and for now ignore the cache. You want to know how many database connections L will be in use for a given load, which is represented as λ. A formula called Little’s Law gives the answer.

L = λ * W

Little’s Law is valid for the long-term averages of the three quantities, independent of the actual timing with which requests arrive or the order in which they are processed. If the database takes on average 30 ms to respond, and the system is receiving 500 requests per second, you can apply Little’s Law.

L = 500 requests/second * 0.030 seconds/request = 15

The average number of connections being used will be 15, so you will need at least that many connections to keep up with the load.
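The arithmetic is easy to check in code. The following sketch simply evaluates Little’s Law; the class and method names are illustrative, not from the book:

```java
// Little's Law: average concurrent connections L = arrival rate λ × service time W
public class LittlesLaw {

    // lambdaPerSecond: average request arrival rate (requests/second)
    // serviceTimeSeconds: average time each request holds a connection
    static double connectionsInUse(double lambdaPerSecond, double serviceTimeSeconds) {
        return lambdaPerSecond * serviceTimeSeconds;
    }

    public static void main(String[] args) {
        // 500 requests/second at 30 ms per query → 15 connections in use on average
        System.out.println(connectionsInUse(500, 0.030)); // prints 15.0
    }
}
```

Because the law holds for long-term averages regardless of arrival order, the same one-line calculation applies to any stable rate and service time.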

Limiting maximum latency with a queue

The initial implementation blocked and waited for a database connection to become available; it returned null only if the requested image was not found in the database. A simple change adds some protection: if a database connection is not available, return null right away. This frees the request thread to return the fallback image rather than stalling and consuming a large amount of resources.
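One way to sketch that change is to guard the pool with counted permits and fail fast when none is free. This is an illustration under assumed names, not the book's code: `Semaphore` stands in for whatever admission control the real connection pool provides, and `queryDatabase` is a hypothetical placeholder for the actual lookup.

```java
import java.util.concurrent.Semaphore;

public class FailFastRepo {
    // One permit per database connection; tryAcquire() never blocks.
    private final Semaphore connections;

    public FailFastRepo(int poolSize) {
        this.connections = new Semaphore(poolSize);
    }

    // Returns null immediately when no connection is free, so the caller
    // can serve the fallback image instead of tying up a request thread.
    public String find(String key) {
        if (!connections.tryAcquire()) {
            return null; // pool exhausted: fail fast
        }
        try {
            return queryDatabase(key); // placeholder for the real query
        } finally {
            connections.release();
        }
    }

    private String queryDatabase(String key) {
        return "image-bytes-for-" + key; // stand-in for a real lookup
    }
}
```

The caller's logic is unchanged: a null result means "use the fallback image," whether the cause was a missing image or an exhausted pool.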

Given what you know about how that 50 ms average is achieved, you also would know not to set a timeout of less than 120 ms. If that time were not acceptable, the simple solution would be to use a smaller queue. A developer who knows only that the average is less than 50 ms might assume it is a Gaussian distribution and be tempted to set a timeout at perhaps 80 or 100 ms. Indeed, the assumptions that went into this analysis are vulnerable to the same error, because the assumption that the database provides a consistent 30 ms response time would be questionable in a real-world implementation. Real databases have caches of their own.

Setting a timeout has the effect of choosing a boundary at which the system will be considered to have failed: either the system succeeded or it failed. Viewed from that perspective, the average response time is less important than the maximum response time. Because systems typically respond more slowly when under heavy load, a timeout based on the average will result in a higher percentage of failures under load and will waste resources precisely when they are needed most. The average response time often has little bearing on choosing the maximum limits.
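Even in a traditional framework, a simplified Managed Queue can be approximated with a bounded work queue that rejects submissions when full, bounding the worst-case wait for every accepted request. A minimal sketch using `java.util.concurrent`; the pool and queue sizes here are illustrative, not tuned values from the book:

```java
import java.util.concurrent.*;

public class BoundedQueueExample {
    // A worker pool with a bounded queue in front of it. When the queue is
    // full, submit() throws RejectedExecutionException immediately instead
    // of blocking, which caps how long any accepted request can wait.
    static ThreadPoolExecutor pool(int workers, int queueCapacity) {
        return new ThreadPoolExecutor(
                workers, workers, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.AbortPolicy()); // reject, don't block
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = pool(1, 1);
        CountDownLatch gate = new CountDownLatch(1);
        pool.submit(() -> { gate.await(); return null; }); // occupies the worker
        pool.submit(() -> null);                           // fills the queue
        try {
            pool.submit(() -> null);                       // queue is full
        } catch (RejectedExecutionException overloaded) {
            System.out.println("rejected: serve the fallback image");
        }
        gate.countDown();
        pool.shutdown();
    }
}
```

With one worker and a queue of depth N, an accepted request waits for at most N queued queries ahead of it, so the queue capacity directly determines the maximum latency before the timeout boundary is reached.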



