Parallel, Distributed and Concurrent Systems

Mubasheer Siddiqui
8 min read · Jan 5, 2021

This blog is written and published by TY-38:

Harsh Mandviya

Md Mubasheeruddin Siddiqui

Jayna Medtia

Harshal Nana Patil

Prashant Kumar


If you listen to anybody discussing computers or programming, there are three words you'll constantly hear: parallel, concurrent, and distributed. At first glance they seem to mean much the same thing, yet they actually name three distinct ideas, and the distinctions are significant.


Parallel computing is a form of computation in which many calculations, or the execution of processes, are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown of late because of the physical constraints preventing further frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Parallel computing is closely related to concurrent computing; they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency (for example, bit-level parallelism), and concurrency without parallelism (for example, multitasking by time-sharing on a single-core CPU).
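The distinction can be sketched in Python (a hypothetical illustration, not part of the original post): `threading` gives concurrency, since CPython's global interpreter lock makes CPU-bound threads interleave on one core, while `multiprocessing` gives genuine parallelism across OS processes.

```python
import threading
import multiprocessing

def count_down(n):
    # CPU-bound busy work; returns 0 when the countdown finishes.
    while n > 0:
        n -= 1
    return n

if __name__ == "__main__":
    # Concurrency without parallelism: two threads share one
    # interpreter and, under CPython's GIL, interleave rather
    # than truly running at once for CPU-bound work.
    threads = [threading.Thread(target=count_down, args=(1_000_000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Parallelism: two separate OS processes can genuinely run
    # at the same time on a multi-core machine.
    procs = [multiprocessing.Process(target=count_down, args=(1_000_000,))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("done")
```

On a single-core machine the second half still works; it simply degrades to time-sharing, which is exactly the concurrency-without-parallelism case described above.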

IBM’s Blue Gene/P massively parallel supercomputer.

In parallel computing, a computational task is typically broken down into several, often many, very similar subtasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.

Parallel computers can be roughly classified by the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks. In some cases parallelism is transparent to the programmer, as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are harder to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically among the greatest obstacles to getting good parallel program performance.

Example of Parallel Computing:

  • Weather forecasting software is mostly parallel code. Doing the computational fluid dynamics work to produce accurate forecasts requires a colossal amount of computation, and sharing it among many processors makes it run at a (more) reasonable rate.
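The divide-and-combine pattern described above can be sketched with Python's `multiprocessing.Pool`. The `simulate_cell` function here is a hypothetical stand-in for one grid cell's fluid-dynamics update, not real forecasting code:

```python
from multiprocessing import Pool

def simulate_cell(cell_id):
    # Stand-in for the heavy per-cell computation; a real model
    # would integrate physical equations for this grid cell.
    return cell_id * cell_id

if __name__ == "__main__":
    # Split the grid into independent subtasks, farm them out to
    # a pool of worker processes, and combine the results once
    # all subtasks complete.
    with Pool(processes=4) as pool:
        results = pool.map(simulate_cell, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the subtasks are independent, `pool.map` can run them on as many cores as are available, which is the whole point of parallelizing the forecast.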

Distributed computing

Distributed computing is a field of computer science that studies distributed systems. A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with one another in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems range from SOA-based systems to massively multiplayer online games to peer-to-peer applications.

A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues. A goal and challenge pursued by some computer scientists and practitioners in distributed systems is location transparency; however, this goal has fallen out of favour in industry, as distributed systems are different from conventional non-distributed systems, and the differences, such as network partitions, partial system failures, and partial upgrades, cannot simply be papered over by attempts at transparency. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
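A minimal sketch of the message-passing style, using `multiprocessing` queues as a stand-in for a real message queue (the `worker` function and the doubling "task" are invented for illustration; nodes share no memory and interact only through messages):

```python
from multiprocessing import Process, Queue

def worker(task_q, result_q):
    # Each node receives tasks and replies purely by message
    # passing; no state is shared between the processes.
    while True:
        task = task_q.get()
        if task is None:          # sentinel message: shut down
            break
        result_q.put(task * 2)    # hypothetical "solve one task"

if __name__ == "__main__":
    task_q, result_q = Queue(), Queue()
    workers = [Process(target=worker, args=(task_q, result_q))
               for _ in range(2)]
    for w in workers:
        w.start()
    for t in range(4):            # divide the problem into tasks
        task_q.put(t)
    for _ in workers:
        task_q.put(None)          # one shutdown message per worker
    results = sorted(result_q.get() for _ in range(4))
    for w in workers:
        w.join()
    print(results)  # [0, 2, 4, 6]
```

In a real distributed system the queues would be replaced by HTTP calls, RPC, or a broker, but the shape is the same: divide the problem into tasks, send them as messages, and collect the replies.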

Reasons for using distributed systems and distributed computing may include:

The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location.

There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example, it can be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. A distributed system can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.

Example of Distributed Computing:

  • An example of a distributed system would be a piece of software like Writely, a word processor that runs inside the web browser. In Writely, you can edit a document in your browser, and you can share the editing with many people, so you can have three or four browsers all editing the same document. In terms of the system, the browsers each run small client applications that talk to the server and to each other by messaging, and they contain none of the code for actually changing the document. The server has no code for things like rendering the UI, but it communicates with the clients to receive and process edit commands, and to deliver updates so that all of the UIs render the same thing. The whole design of the system is built around the idea that there are these many parts, each running on different machines.


In computer science, concurrency refers to the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order without affecting the final outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution on multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially-ordered components or units.
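A tiny sketch of what "order-independent" means (the `combine` function is a made-up example, not from any library): when the combining step is associative and commutative, every execution order of the units yields the same final result.

```python
from itertools import permutations

def combine(parts):
    # Order-independent combination: addition is associative and
    # commutative, so any execution order of the units gives the
    # same final result.
    total = 0
    for p in parts:
        total += p
    return total

parts = [1, 2, 3, 4]
# Evaluate the parts in every possible order; the outcome never varies.
outcomes = {combine(list(order)) for order in permutations(parts)}
print(outcomes)  # {10}
```

It is exactly this property that lets a scheduler run the units concurrently, in whatever order the hardware happens to pick, without changing the answer.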

Various mathematical models have been developed for general concurrent computation, including Petri nets, process calculi, the parallel random-access machine model, the Actor model and the Reo Coordination Language.

Reo circuit: Alternator

As Leslie Lamport (2015) notes, "While concurrent program execution had been considered for years, the computer science of concurrency began with Edsger Dijkstra's seminal 1965 paper that introduced the mutual exclusion problem. The ensuing decades have seen a huge growth of interest in concurrency, particularly in distributed systems. Looking back at the origins of the field, what stands out is the fundamental role played by Edsger Dijkstra".

Because computations in a concurrent system can interact with each other while they are executing, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy, leading to issues such as deadlock and resource starvation.
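The classic race condition can be shown in a few lines (a hypothetical sketch; the function names are invented). `counter += 1` is a read-modify-write, so interleaved threads can lose updates unless a lock serializes the critical section:

```python
import threading

counter = 0
lock = threading.Lock()

def increment_unsafely(n):
    # Read-modify-write with no synchronization; interleavings
    # between threads can lose updates, so the total may fall short.
    global counter
    for _ in range(n):
        counter += 1

def increment_safely(n):
    # Holding the lock makes the read-modify-write atomic.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(fn, n=100_000):
    # Race four threads against the shared counter.
    global counter
    counter = 0
    threads = [threading.Thread(target=fn, args=(n,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(increment_safely))  # always 400000
```

`run(increment_unsafely)` may or may not come up short depending on the interpreter and scheduling, which is precisely the indeterminacy the paragraph above describes; the locked version is deterministic.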

Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.

Concurrent programming encompasses the programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communication pattern. The base goals of concurrent programming include correctness, performance and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not terminate unexpectedly. Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.

Because they use shared resources, concurrent systems in general require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues with model checking because it causes explosion in the state space and can even cause models to have an infinite number of states. Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their time slices, either to the system or to another process.
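The "explicitly yield" style can be sketched with Python generators acting as coroutines, scheduled by a tiny round-robin loop (the `task` and `round_robin` names are invented for this illustration). Because every task yields at well-defined points, the interleaving is fully deterministic:

```python
from collections import deque

def task(name, steps):
    # A coroutine: it explicitly yields control after each step,
    # so the scheduler, not a preemptive arbiter, decides when
    # the next task runs.
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(*tasks):
    # A minimal cooperative scheduler: run each task until it
    # yields, rotate to the next, and drop tasks as they finish.
    queue = deque(tasks)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))
            queue.append(t)
        except StopIteration:
            pass
    return trace

print(round_robin(task("A", 2), task("B", 2)))
# ['A:0', 'B:0', 'A:1', 'B:1']
```

Unlike preemptive threads, rerunning this program always produces the same interleaving, which is what makes deterministic concurrency attractive for reasoning and testing.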

Example of Concurrency:

  • Database systems are often built for concurrency. The idea is that there's a huge database somewhere, and it's being pounded with lots and lots of queries. When one user starts a query, the system doesn't shut down and stop doing anything else until that query is done; it permits multiple users to be running queries all at the same time. Moreover, most databases even guarantee that if one user is performing an update query, other users performing concurrent queries while that update is in progress will get consistent results representing the state of the database either before or after the update, but not a mix of the two.
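That before-or-after guarantee can be illustrated with a toy in-memory "record" protected by a lock (a hypothetical sketch, not real database code): the update touches two fields inside one critical section, so a reader can never observe a half-applied mixture.

```python
import threading

class Account:
    # Toy illustration of atomic updates: the invariant
    # debits == credits holds at every observable point.
    def __init__(self):
        self._lock = threading.Lock()
        self._debits = 0
        self._credits = 0

    def transfer(self, amount):
        # Both fields change inside one critical section, so a
        # concurrent reader sees the state either before or after
        # the whole update, never in between.
        with self._lock:
            self._debits += amount
            self._credits += amount

    def snapshot(self):
        # Readers take the same lock, so they get a consistent view.
        with self._lock:
            return (self._debits, self._credits)

acct = Account()
acct.transfer(100)
print(acct.snapshot())  # (100, 100)
```

Real databases achieve the same effect at far larger scale with transactions and concurrency-control protocols such as two-phase locking or multiversion concurrency control, rather than a single global lock.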