‘Broadcasting’, ‘Cache’, ‘Lock’ in Distributed Systems

 
(Image source: http://beyondplm.com)
 

If you have big, multi-module, distributed projects, you need solutions for communication between services, a distributed cache, and so on. At the same time, you need to coordinate many cluster-wide processes. For example, when coupons are redeemed simultaneously, you need a cluster-wide lock to limit each user to a single transaction.

 

It is possible to cover all of this with a single solution offering features such as an in-memory distributed cache, locks, broadcasting, and distributed task execution. And there is such a solution developed here in Turkey: Hazelcast!

 

Hazelcast can be embedded in your services so that each service instance becomes a cluster node; you don't have to run it in a client-server model. That said, the client-server model is also supported.

 

They both have their pros and cons.

 
 

For example, when Hazelcast is embedded in your service, integration is quite easy, and there are no separate Hazelcast servers to manage. When your service starts, it joins the cluster as a node according to your configuration, and replication and similar operations happen automatically.
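As a minimal sketch of embedded mode (the class name is made up, and cluster discovery here relies on Hazelcast's default configuration), starting an instance inside your own JVM is enough to make the service a cluster node:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class OrderService {
    public static void main(String[] args) {
        // Starting an instance in the service's own JVM makes this process a
        // cluster node; with the default config, nodes discover each other
        // over the network and handle partitioning/replication automatically.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        System.out.println("Cluster members: " + hz.getCluster().getMembers());
    }
}
```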

 

At this point, there is one major problem to take into account. If you use the in-memory grid intensively in this setup, you start consuming your services' own resources. Your services have plenty of other work to do, but since they also act as nodes in the Hazelcast cluster, that work gets interrupted, and resources such as memory and CPU become unstable.

 

If you prefer the client-server model, your services connect to the cluster as clients. For this, you need to set up and manage several dedicated Hazelcast server node instances.
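A rough sketch of the client side (the addresses and class name are placeholders; this assumes the Hazelcast 3.x Java client API):

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class CacheClient {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        // Placeholder addresses of the standalone Hazelcast server nodes.
        config.getNetworkConfig().addAddress("10.0.0.11:5701", "10.0.0.12:5701");

        // The service joins as a lightweight client, not a data-holding node,
        // so grid load stays on the dedicated Hazelcast servers.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        System.out.println("Connected as: " + client.getName());
    }
}
```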

 

I prefer the client-server model, so that each kind of instance does its principal job. Although it is a little more complicated to manage, it runs smoothly except when the servers themselves go down, and even then it is not difficult to operate.

 

Separate resources, unique jobs

 

Instead of scaling the services in your application, you can scale the Hazelcast node instances, which are smaller and cheaper.

For broadcasting, you only have to publish a message on a specific topic to all the nodes and clients in the cluster and say, "Hey, I have this message. Please take action!" Any of your services listening on that topic receive the message and take the action you want.

For caching, suppose you need a cache that you can update from one place while reading the latest data from anywhere. You also want it to be distributed so that the load is spread, and replicated so that you don't lose data (even if it is only short-lived cache data). Here you can easily configure and use Hazelcast's in-memory data grid structures, such as Map and Set.
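To make both ideas concrete, here is a small sketch, assuming the Hazelcast 3.x API; the topic and map names are made up for illustration:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.ITopic;

public class BroadcastAndCache {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Broadcasting: every node or client listening on this topic
        // receives each published message and can take action.
        ITopic<String> topic = hz.getTopic("config-refresh");
        topic.addMessageListener(msg ->
                System.out.println("Received: " + msg.getMessageObject()));
        topic.publish("Hey, I have this message. Please take action!");

        // Distributed cache: IMap partitions entries across the cluster, so
        // an update from one place is visible everywhere, and configurable
        // backups protect the data if a node is lost.
        IMap<String, String> cache = hz.getMap("product-cache");
        cache.put("product:42", "latest-price");
        System.out.println(cache.get("product:42"));
    }
}
```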

What you can do for locking

 

There are several ways to implement a lock mechanism, of course. You can, for example, lock at the database level. But when the transaction volume is high, you should look for a solution that communicates fast and keeps performance up. You could write such a system yourself, but that is quite costly: you would have to handle many issues, such as heartbeats for connected clients holding locks, and deadlocks.

 

It is possible to use a system like Redis. However, it does not inspire full confidence: because it uses asynchronous replication in its cluster and master-slave mechanisms, critical lock operations can be lost, which effectively creates a single point of failure. For this, Redis offers Redlock as a solution.

 
 

Redlock is an algorithm, and it can be implemented in different ways. The performance of its Java libraries is not at the desired level, and they have bugs. If you want to learn more about Redlock, it is worth researching on your own.

 

A lock solution with Hazelcast is quite easy!

 

If we are not planning a very lock-heavy system, Hazelcast's ready-made ILock mechanism is simple, useful, and functional. But this structure has a handicap if we perform too many lock operations: a single node is responsible for all the locks! Under heavy load, that node can become so busy that it stops responding. When it can't respond, it is dropped from the cluster, and another node spends a long time taking over its job. The extra resource usage means the node that took over can't sustain it either and eventually disconnects as well; the entire cluster can split apart in a snowball effect like this. Worse, the dropped node can automatically rejoin the cluster, and the whole vicious circle starts again.
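A minimal sketch of the ready-made lock, assuming the Hazelcast 3.x API (ILock was later deprecated in favor of FencedLock in newer versions); the lock name is made up:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

import java.util.concurrent.TimeUnit;

public class SimpleClusterLock {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ILock lock = hz.getLock("coupon-lock"); // made-up lock name

        // tryLock with a timeout avoids blocking forever if the owner hangs.
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                // Critical section: only one caller in the whole cluster runs this.
            } finally {
                lock.unlock();
            }
        }
    }
}
```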

 

In this case, IMap, Hazelcast's distributed in-memory key-value cache, comes to the rescue. IMap can lock per key, and because different keys are owned by different nodes, the lock load is distributed and scales. Of course, don't forget to configure replication for the map in your Hazelcast cluster (backups, quorum, and so on).
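A sketch of per-key locking with IMap (again assuming the 3.x API; the map name and key are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class PerKeyLock {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> coupons = hz.getMap("coupons"); // illustrative name

        String code = "SUMMER-10";
        // Only this key is locked, on the partition that owns it, so lock
        // load is spread across the cluster instead of piling onto one node.
        coupons.lock(code);
        try {
            Integer remaining = coupons.get(code);
            if (remaining != null && remaining > 0) {
                coupons.put(code, remaining - 1); // redeem one use under the lock
            }
        } finally {
            coupons.unlock(code);
        }
    }
}
```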

 

Happy coding!


Author: Ersin Yetişen

Date Published: Sep 21, 2018