You might want to migrate from a standalone Redis to a Redis cluster for a variety of reasons. Once you have decided to migrate and have a Redis cluster set up, the next question is how to migrate the existing data. There are multiple ways of doing this.
This involves moving from a Redis client to a Redis cluster client and also making sure that all the changes are backward compatible. One way of doing this is having a toggle that writes to both Redis and the Redis cluster while reading data only from the standalone…
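The excerpt cuts off above, but a minimal sketch of such a dual-write toggle, assuming the go-redis client (the addresses and the dualWriter type are illustrative, not the article's code), might look like this:

```go
package main

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// dualWriter writes to both the standalone instance and the cluster,
// but reads only from the standalone instance during the migration.
type dualWriter struct {
	standalone *redis.Client
	cluster    *redis.ClusterClient
}

func (d *dualWriter) Set(ctx context.Context, key, value string, ttl time.Duration) error {
	// The standalone Redis is still the source of truth, so write there first.
	if err := d.standalone.Set(ctx, key, value, ttl).Err(); err != nil {
		return err
	}
	// Mirror the write to the cluster so it warms up with live data.
	return d.cluster.Set(ctx, key, value, ttl).Err()
}

func (d *dualWriter) Get(ctx context.Context, key string) (string, error) {
	// Reads stay on the standalone instance until the toggle is flipped.
	return d.standalone.Get(ctx, key).Result()
}

func main() {
	w := &dualWriter{
		standalone: redis.NewClient(&redis.Options{Addr: "localhost:6379"}),
		cluster: redis.NewClusterClient(&redis.ClusterOptions{
			Addrs: []string{"localhost:7000", "localhost:7001", "localhost:7002"},
		}),
	}
	_ = w.Set(context.Background(), "greeting", "hello", time.Hour)
}
```

Once the cluster holds all the data (for example after backfilling older keys), reads can be switched over and the standalone instance retired.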
This article is part of a series that I have started working on. In this series, I’ll be briefly discussing engineering concepts.
2/n
Consider a server hosting multiple domains: example.com, example2.com, and example3.com. Now consider a client that makes a request to example2.com. When the request reaches the server, how will the server decide which certificate it needs to serve? The server does not know the requested domain yet: the certificate has to be presented during the TLS handshake, before any HTTP request (and its Host header) is sent, and at that point the server only sees an IP address and port, which are the same for all the domains. The domain can only be found out…
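The excerpt is truncated here, but the standard mechanism for this is SNI (Server Name Indication), where the client includes the hostname in the TLS ClientHello. As an illustration, Go's crypto/tls exposes this through GetCertificate; the certificate file names below are placeholders:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Load one certificate per hosted domain (file names are hypothetical).
	certs := map[string]tls.Certificate{}
	for _, domain := range []string{"example.com", "example2.com", "example3.com"} {
		cert, err := tls.LoadX509KeyPair(domain+".crt", domain+".key")
		if err != nil {
			panic(err)
		}
		certs[domain] = cert
	}

	server := &http.Server{
		Addr: ":443",
		TLSConfig: &tls.Config{
			// The SNI extension carries the hostname in the ClientHello,
			// so the server can pick the right certificate before any
			// HTTP request is ever sent.
			GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
				if cert, ok := certs[hello.ServerName]; ok {
					return &cert, nil
				}
				return nil, fmt.Errorf("no certificate for %q", hello.ServerName)
			},
		},
	}
	// Certificates are supplied via GetCertificate, so the file
	// arguments here can stay empty.
	_ = server.ListenAndServeTLS("", "")
}
```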
This article is part of a series that I have started working on. In this series, I’ll be briefly discussing engineering concepts.
1/n
Connection pooling means re-using a set of established connections instead of creating a new connection for every request.
Let’s see what happens when we use the same connection for multiple requests.
These requests need to be made to the same domain.
Run the following command:
curl https://github.com/404…
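As a complementary sketch of the same idea in code (not from the article), Go's net/http Transport keeps idle connections alive and re-uses them for subsequent requests to the same host; the URL and pool sizes below are illustrative:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// A single http.Client keeps idle connections open and re-uses them
	// for subsequent requests to the same host.
	client := &http.Client{
		Transport: &http.Transport{
			MaxIdleConns:        10,
			MaxIdleConnsPerHost: 10,
			IdleConnTimeout:     30 * time.Second,
		},
	}

	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://github.com/404")
		if err != nil {
			panic(err)
		}
		// The connection only goes back to the pool once the body has been
		// fully read and closed.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}
}
```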
Databases come with a number of isolation levels, each providing certain guarantees. For example, snapshot isolation protects us against phantom reads. However, isolation levels do not protect us against everything. Let's discuss a few real-world scenarios where concurrency can cause bugs, and their possible solutions.
READ COMMITTED is the default isolation level in Postgres. You can read about isolation levels here.
Case 1: Suppose you are on BookMyShow trying to book tickets for your next movie. You and Mr. X select the same seats for a show and proceed to checkout. Ideally, only one of…
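The excerpt cuts off before the fix, but one common solution (not necessarily the one the article proposes) is to lock the seat row for the duration of the checkout transaction. A sketch with Go's database/sql and a hypothetical seats table:

```go
// Package booking sketches a pessimistic-locking seat reservation; it assumes
// a Postgres driver is registered via database/sql.
package booking

import (
	"context"
	"database/sql"
	"errors"
)

// BookSeat reserves a seat only if it is still free. SELECT ... FOR UPDATE
// locks the row, so two concurrent checkouts cannot both see it as free.
func BookSeat(ctx context.Context, db *sql.DB, seatID, userID int64) error {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelReadCommitted})
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if the transaction was committed

	var booked bool
	// The second transaction blocks here until the first commits, then sees
	// booked = true and gives up instead of double-selling the seat.
	err = tx.QueryRowContext(ctx,
		`SELECT booked FROM seats WHERE id = $1 FOR UPDATE`, seatID).Scan(&booked)
	if err != nil {
		return err
	}
	if booked {
		return errors.New("seat already taken")
	}

	if _, err := tx.ExecContext(ctx,
		`UPDATE seats SET booked = true, user_id = $1 WHERE id = $2`,
		userID, seatID); err != nil {
		return err
	}
	return tx.Commit()
}
```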
Load balancing is one of the primary features of a proxy. The layer at which a load balancer operates determines the capabilities available to it. These capabilities and the internals of L4/L7 load balancers are the focus of this article.
An L4 load balancer, as the name suggests, works at Layer 4 (and Layer 3) of the OSI model. When a client makes a request, it creates a TCP connection with the load balancer. The load balancer then uses the same TCP connection the client created with it to connect with one of the upstream servers.
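This is not the article's implementation, but a toy L4 proxy in Go makes the idea concrete: it accepts the client's TCP connection, picks an upstream, and relays raw bytes in both directions without ever looking at the application payload. The addresses and the round-robin policy are illustrative:

```go
package main

import (
	"io"
	"log"
	"net"
	"sync/atomic"
)

// Upstream addresses are illustrative.
var upstreams = []string{"10.0.0.1:8080", "10.0.0.2:8080"}
var next uint64

func main() {
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			continue
		}
		go proxy(client)
	}
}

// proxy picks an upstream round-robin and relays raw bytes both ways.
// It never parses HTTP: every routing decision happens at the TCP level.
func proxy(client net.Conn) {
	defer client.Close()
	target := upstreams[atomic.AddUint64(&next, 1)%uint64(len(upstreams))]
	upstream, err := net.Dial("tcp", target)
	if err != nil {
		return
	}
	defer upstream.Close()

	go io.Copy(upstream, client) // client -> upstream
	io.Copy(client, upstream)    // upstream -> client
}
```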
At Gojek, we use Kafka to solve problems at scale. So, every bit of optimisation is important to us. Producers in Kafka come with many configurations, and knowing when to use them can improve the performance of your Kafka pipeline.
In this post, let’s analyse some of these configurations that you might not be using, but should.
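The excerpt does not say which configurations the post covers, but as an illustration, here is a sketch using the sarama Go client with a few producer settings that roughly correspond to acks, compression.type, batch.size, and linger.ms in the Java client; the broker address and topic are placeholders:

```go
package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()

	// Wait for all in-sync replicas to acknowledge: slower, but safer.
	cfg.Producer.RequiredAcks = sarama.WaitForAll
	// Compress batches on the wire; snappy is a common CPU/size trade-off.
	cfg.Producer.Compression = sarama.CompressionSnappy
	// Batch messages instead of sending each one immediately
	// (roughly batch.size and linger.ms in the Java client).
	cfg.Producer.Flush.Bytes = 64 * 1024
	cfg.Producer.Flush.Frequency = 10 * time.Millisecond
	// SyncProducer requires successes to be reported back.
	cfg.Producer.Return.Successes = true

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "orders", // placeholder topic
		Value: sarama.StringEncoder("hello kafka"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("stored at partition=%d offset=%d", partition, offset)
}
```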
Going through some system design videos, I realized that I had been using the term proxy incorrectly all this time. So let's quickly discuss it so that you don't make the same mistake.
The position of a proxy is what makes it either a forward or a reverse proxy. The easiest way to remember the difference between the two is:
A forward proxy is used by clients, whereas a reverse proxy is used by servers.
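As a quick sketch of the server-side flavour (not from the article), Go's net/http/httputil ships a reverse proxy; the upstream address below is a placeholder:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// A reverse proxy sits in front of servers: clients talk to the proxy,
	// which forwards each request to an upstream they never see directly.
	upstream, err := url.Parse("http://localhost:8080") // placeholder upstream
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	log.Fatal(http.ListenAndServe(":80", proxy))
}
```

A forward proxy, by contrast, is configured on the client side (for example via a browser proxy setting or the HTTP_PROXY environment variable) and makes requests on the client's behalf.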
Applications nowadays handle a tremendous number of requests. We deploy multiple instances of the same application, fronting them with load balancers. Every request that comes in usually does something with a database: it either reads or updates the database's state. If our databases handled only one request (read or write) at a time, we would never be able to serve our users. Concurrency solves this by handling multiple requests at the same time.
This post deals with the need for concurrency control, and how to go about it.
Before diving into how databases handle concurrency, let us first…
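The post's own approach is not shown in this excerpt, so purely as one illustration of application-level concurrency control, here is a sketch of optimistic locking with a version column using Go's database/sql; the accounts table and column names are hypothetical:

```go
// Package accounts sketches optimistic concurrency control; it assumes a
// database/sql driver (e.g. a Postgres driver) is registered elsewhere.
package accounts

import (
	"context"
	"database/sql"
	"errors"
)

// UpdateBalance applies an optimistic-concurrency update: the row carries a
// version number, and the UPDATE only succeeds if nobody changed the row
// since we read it.
func UpdateBalance(ctx context.Context, db *sql.DB, accountID, delta int64) error {
	var balance, version int64
	err := db.QueryRowContext(ctx,
		`SELECT balance, version FROM accounts WHERE id = $1`, accountID).
		Scan(&balance, &version)
	if err != nil {
		return err
	}

	res, err := db.ExecContext(ctx,
		`UPDATE accounts SET balance = $1, version = version + 1
		 WHERE id = $2 AND version = $3`,
		balance+delta, accountID, version)
	if err != nil {
		return err
	}
	if n, _ := res.RowsAffected(); n == 0 {
		// Another transaction updated the row first; the caller can retry.
		return errors.New("concurrent update detected, retry")
	}
	return nil
}
```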
For the past few months, my team has been working on a gRPC service. A few weeks back, a new requirement came in: we needed a web portal to display some information. Since we already had a gRPC backend, the server side was sorted, but for the front end we had a few important choices to make.
1. Vue or React (we chose React)
2. REST or gRPC from the web portal
If you don't know what gRPC is, you can read about it here. Here are a few reasons that made us choose gRPC over REST.
Product Engineer GO-JEK | GSoC 2018 @openMF | Mobile | Backend | mohak1712 everywhere