Before developers could write async/await, before threading libraries existed, before even operating systems had sophisticated thread schedulers—computer scientists faced a fundamental challenge: How do we express the concept that multiple operations should execute simultaneously?
This question wasn't merely about implementation. It was about notation—finding a precise, unambiguous way to describe concurrent execution that both humans and compilers could understand. The solution that emerged, developed in the 1960s and refined through the 1970s, would shape the very way we think about parallelism today: parallel block notation.
By the end of this page, you will understand the formal syntax and semantics of parallel block notation (parbegin/parend), its historical development, how it differs from sequential composition, and why this seemingly simple notation represents one of the most important abstractions in concurrent programming.
In traditional sequential programming, the order of operations is implicit in the textual arrangement of statements. When you write:
statement_A;
statement_B;
statement_C;
The meaning is unambiguous: execute A, then B, then C, in that exact order. This sequential composition is so fundamental that we rarely think about it explicitly—it's simply how programs work.
But what if we want to express something different? What if A, B, and C are independent operations—perhaps reading from different files, making different network requests, or computing different parts of a result—and we want them to execute concurrently?
Sequential programming languages provided no mechanism to express 'these operations may execute in any order or simultaneously.' The very syntax of semicolons and newlines carried implicit sequential semantics. Expressing parallelism required new notation—a way to break free from the tyranny of linear text.
The Deeper Semantic Challenge:
The problem wasn't just syntactic; it was semantic. When we say two operations execute 'in parallel,' we need to specify: When does each operation begin? When is the composite construct considered complete? Can one operation observe the other's intermediate effects on shared state?
These questions demanded a formal notation with well-defined semantics—and that's precisely what parallel block notation provides.
The development of parallel block notation was not the work of a single inventor but rather evolved through contributions from multiple computer scientists addressing concurrent programming challenges in the 1960s and 1970s.
Key Contributors and Milestones:
| Year | Contributor | Contribution | Significance |
|---|---|---|---|
| 1965 | Edsger W. Dijkstra | Introduced structured approach to concurrent programming | Formalized the concept of cooperating sequential processes |
| 1966 | J. B. Dennis & E. C. Van Horn | Published 'Programming Semantics for Multiprogrammed Computations' | Established rigorous semantics for concurrent execution |
| 1968 | Dijkstra | Published 'Cooperating Sequential Processes' | Introduced parbegin/parend notation explicitly |
| 1972 | C.A.R. Hoare | Developed CSP concepts | Extended structured parallelism with communication primitives |
| 1975 | Per Brinch Hansen | Concurrent Pascal language | First practical implementation of structured parallelism |
Dijkstra's Foundational Contribution:
Edsger Dijkstra, already renowned for his work on algorithms (shortest path) and structured programming, recognized that concurrent programs posed unique challenges. His seminal 1968 manuscript 'Cooperating Sequential Processes' laid the groundwork for understanding concurrent execution as the composition of multiple sequential processes.
In this work, Dijkstra introduced the parbegin and parend keywords (short for 'parallel begin' and 'parallel end') as a structured notation for expressing that enclosed statements should execute concurrently. This notation was deliberately designed to mirror the familiar begin and end of structured sequential programming, emphasizing that parallelism could be tamed through the same structured approach that had revolutionized sequential programming.
Dijkstra's parallel block notation was heavily influenced by his advocacy for structured programming. Just as 'goto' statements created unreadable spaghetti code in sequential programs, unstructured parallelism (threads spawned without clear scope) created unmaintainable concurrent programs. parbegin/parend imposed structure: clear entry, clear exit, predictable semantics.
The parbegin/parend construct has a deceptively simple syntax that belies its powerful semantics. Let us examine it with the rigor it deserves.
```
// Basic Syntax
parbegin
    S1;    // Statement 1
    S2;    // Statement 2
    ...
    Sn;    // Statement n
parend

// Semantics:
// 1. All statements S1, S2, ..., Sn begin execution simultaneously
// 2. The parbegin block completes when ALL statements complete
// 3. Execution of statements after parend begins only after all Si complete
```
Formal Semantics:
Let's define the semantics more precisely using operational semantics notation:
Syntax:
P ::= parbegin S₁ ; S₂ ; ... ; Sₙ parend
Semantics:
If we denote the execution of statement S as ⟨S, σ⟩ → σ' (S transforms state σ to σ'), then:
⟨parbegin S₁; S₂; ...; Sₙ parend, σ⟩ → σ'
where σ' is the result of executing all Sᵢ concurrently, and the final state σ' is reached when all Sᵢ have terminated.
The parend keyword does more than mark the end of a block—it enforces synchronization. This implicit synchronization is what makes parbegin/parend 'structured': you cannot accidentally proceed with one branch of execution while another is still running. This contrasts sharply with fork-style parallelism where synchronization must be explicitly programmed.
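To make the barrier concrete, here is a minimal sketch in Go of how parbegin/parend semantics could be emulated with goroutines and a WaitGroup; the parbegin helper and the branch functions are illustrative names, not a standard API.

```go
package main

import (
	"fmt"
	"sync"
)

// parbegin mimics the construct: every branch starts concurrently, and the
// call returns only when all branches have finished (the parend barrier).
func parbegin(branches ...func()) {
	var wg sync.WaitGroup
	wg.Add(len(branches))
	for _, branch := range branches {
		go func(b func()) {
			defer wg.Done()
			b()
		}(branch)
	}
	wg.Wait() // parend: continue only after every branch terminates
}

func main() {
	parbegin(
		func() { fmt.Println("S1") },
		func() { fmt.Println("S2") },
		func() { fmt.Println("S3") },
	)
	fmt.Println("after parend") // always printed last
}
```

The branches may print in any order, but "after parend" can never appear before all three have finished, which is exactly the structured guarantee the notation provides.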
To fully appreciate parallel block notation, we must contrast it explicitly with the sequential composition we take for granted in everyday programming.
```
// Sequential Composition
begin
    read_file_A();
    read_file_B();
    read_file_C();
end

// Total time: T(A) + T(B) + T(C)
// Order: A → B → C (deterministic)
// State: Each sees effects of previous
```

```
// Parallel Composition
parbegin
    read_file_A();
    read_file_B();
    read_file_C();
parend

// Total time: max(T(A), T(B), T(C))
// Order: Arbitrary (non-deterministic)
// State: No visibility of others' effects
```

| Aspect | Sequential (begin/end) | Parallel (parbegin/parend) |
|---|---|---|
| Execution Order | Deterministic, linear | Non-deterministic, concurrent |
| Time Complexity | Sum of all operations | Maximum of all operations |
| State Visibility | Each operation sees effects of predecessors | Operations don't see each other's intermediate states |
| Synchronization | Implicit (order == synchronization) | Explicit at parend (barrier) |
| Error Propagation | First error halts remaining | Errors in branches are isolated until parend |
| Composability | Trivially nestable | Nestable with proper scoping |
| Debugging | Predictable, reproducible | Non-deterministic, harder to reproduce issues |
The Performance Implication:
Consider three I/O-bound operations, each taking 100ms: executed sequentially they take roughly 300ms (the sum), while executed in a parallel block they take roughly 100ms (the maximum).
This isn't just a minor optimization—it's a fundamental improvement in how we model computation. The parallel model recognizes that independent operations don't need to wait for each other, potentially reducing total time to a small fraction of the sequential equivalent.
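As a rough illustration, the following Go sketch simulates three 100ms I/O-bound operations and times both compositions; the readFile function and the sleep durations are stand-ins for real I/O, not part of the original notation.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// readFile simulates an I/O-bound operation taking roughly 100ms.
func readFile(name string) {
	time.Sleep(100 * time.Millisecond)
	fmt.Println("finished", name)
}

func main() {
	// Sequential composition: total time is roughly the sum (~300ms).
	start := time.Now()
	readFile("A")
	readFile("B")
	readFile("C")
	fmt.Println("sequential:", time.Since(start))

	// Parallel composition: total time is roughly the maximum (~100ms).
	start = time.Now()
	var wg sync.WaitGroup
	for _, name := range []string{"A", "B", "C"} {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			readFile(n)
		}(name)
	}
	wg.Wait() // the parend barrier
	fmt.Println("parallel:", time.Since(start))
}
```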
The Correctness Implication:
However, parallelism introduces complexity. If the operations modify shared state, the final result can depend on how their reads and writes interleave, producing race conditions and non-deterministic outcomes.
Parallel block notation makes the tradeoff explicit: you gain potential speedup but sacrifice deterministic execution order. This is why Dijkstra and others emphasized that parallelism should only be used when statements are truly independent—when their relative ordering doesn't affect correctness.
The parbegin/parend construct is most powerful—and most safe—when used with independent statements. But what exactly constitutes independence in concurrent execution?
Bernstein's Conditions (1966):
Arthur J. Bernstein formalized the conditions under which two statements can safely execute in parallel. For statements S₁ and S₂, let R(Sᵢ) denote the set of variables Sᵢ reads and W(Sᵢ) the set of variables Sᵢ writes.
Statements S₁ and S₂ are independent (can be safely parallelized) if and only if all three intersections are empty:
W(S₁) ∩ W(S₂) = ∅ (no write-write conflict)
W(S₁) ∩ R(S₂) = ∅ (S₂ never reads what S₁ writes)
R(S₁) ∩ W(S₂) = ∅ (S₁ never reads what S₂ writes)
```
// SAFE: Statements are independent
parbegin
    a := x + y;   // W={a}, R={x,y}
    b := z * 2;   // W={b}, R={z}
parend
// W(S1) ∩ W(S2) = {a} ∩ {b} = ∅  ✓
// W(S1) ∩ R(S2) = {a} ∩ {z} = ∅  ✓
// R(S1) ∩ W(S2) = {x,y} ∩ {b} = ∅  ✓

// UNSAFE: Bernstein's conditions violated
parbegin
    x := x + 1;   // W={x}, R={x}
    y := x * 2;   // W={y}, R={x}
parend
// R(S2) ∩ W(S1) = {x} ∩ {x} = {x} ≠ ∅  ✗
// Result: y may get old or new value of x (race condition)

// UNSAFE: Write-Write conflict
parbegin
    x := 1;   // W={x}, R=∅
    x := 2;   // W={x}, R=∅
parend
// W(S1) ∩ W(S2) = {x} ∩ {x} = {x} ≠ ∅  ✗
// Result: x could be 1 or 2 (non-deterministic)
```
Modern compilers perform dependency analysis automatically to identify parallelizable loops and operations. This same analysis—checking Bernstein's conditions—underpins auto-parallelization in compilers like GCC and LLVM. Understanding these conditions helps you write code that compilers can optimize.
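A small Go sketch of that check: the stringSet type and the independent function are illustrative helpers, but the three intersection tests are exactly Bernstein's conditions as listed above.

```go
package main

import "fmt"

// stringSet is a small helper type for read/write sets.
type stringSet map[string]bool

// intersects reports whether two sets share any element.
func intersects(a, b stringSet) bool {
	for v := range a {
		if b[v] {
			return true
		}
	}
	return false
}

// independent reports whether two statements satisfy Bernstein's conditions:
// W1 ∩ W2 = ∅, W1 ∩ R2 = ∅, and R1 ∩ W2 = ∅.
func independent(r1, w1, r2, w2 stringSet) bool {
	return !intersects(w1, w2) && !intersects(w1, r2) && !intersects(r1, w2)
}

func main() {
	// a := x + y  and  b := z * 2  → independent
	fmt.Println(independent(
		stringSet{"x": true, "y": true}, stringSet{"a": true},
		stringSet{"z": true}, stringSet{"b": true},
	)) // true

	// x := x + 1  and  y := x * 2  → not independent (R2 ∩ W1 = {x})
	fmt.Println(independent(
		stringSet{"x": true}, stringSet{"x": true},
		stringSet{"x": true}, stringSet{"y": true},
	)) // false
}
```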
One of the key strengths of parallel block notation is its composability. Just as sequential blocks can be nested within each other, parallel blocks can be nested and composed, enabling sophisticated concurrent program structures.
```
// Nested parallelism: parallel blocks within parallel blocks
parbegin
    // Branch 1: Sequential within parallel
    begin
        fetch_data_A();
        process_data_A();
    end

    // Branch 2: Nested parallel block
    parbegin
        compute_X();
        compute_Y();
    parend

    // Branch 3: Simple statement
    compute_Z();
parend

// Execution pattern:
// - Main parbegin creates 3 parallel branches
// - Branch 1 executes fetch then process (sequentially)
// - Branch 2 executes compute_X and compute_Y in parallel
// - Branch 3 executes compute_Z independently
// - Main parend waits for ALL branches (including nested parallels)
```
Semantic Implications of Nesting:
When parallel blocks nest, the semantics remain clear:
Scope: Each parbegin creates a new parallel scope. The enclosing parend waits for everything within that scope.
Synchronization Hierarchy: Inner parend completes before its branch is considered done for the outer parend.
Maximum Concurrency: In the example above, up to four operations can run simultaneously: one of fetch_data_A/process_data_A (Branch 1 runs them in sequence), compute_X, compute_Y, and compute_Z.
Barrier Semantics: Each parend is a barrier, but only for its directly enclosed statements.
The composability of parbegin/parend supports modular reasoning. You can reason about each parallel branch independently (given Bernstein's conditions are satisfied), then compose your understanding. This 'divide and conquer' approach to reasoning is essential for managing the complexity of concurrent programs.
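The sketch below mirrors the nested example in Go, assuming a small runAll helper (an illustrative name, not a library function) that plays the role of one parbegin...parend scope; nesting it inside a branch reproduces the synchronization hierarchy described above.

```go
package main

import (
	"fmt"
	"sync"
)

// runAll starts each branch concurrently and waits for all of them,
// acting as a single parbegin...parend scope.
func runAll(branches ...func()) {
	var wg sync.WaitGroup
	wg.Add(len(branches))
	for _, b := range branches {
		go func(f func()) {
			defer wg.Done()
			f()
		}(b)
	}
	wg.Wait()
}

func main() {
	runAll( // outer parbegin
		func() { // Branch 1: sequential begin...end
			fmt.Println("fetch_data_A")
			fmt.Println("process_data_A")
		},
		func() { // Branch 2: nested parbegin...parend
			runAll(
				func() { fmt.Println("compute_X") },
				func() { fmt.Println("compute_Y") },
			)
		},
		func() { fmt.Println("compute_Z") }, // Branch 3
	) // outer parend: waits for every branch, including the nested scope
	fmt.Println("all branches complete")
}
```

The inner runAll returns before Branch 2 is counted as done, so the outer barrier transitively waits on the nested scope, matching the synchronization hierarchy described above.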
While you may never write parbegin and parend in production code, the concepts embodied in this notation have profoundly influenced modern concurrent programming constructs. Let's trace the lineage:
| Language/Framework | Equivalent Construct | Relationship to parbegin/parend |
|---|---|---|
| Go (Golang) | sync.WaitGroup with goroutines | WaitGroup.Wait() is the parend barrier; each goroutine is a parallel branch |
| Java | ForkJoinPool.invokeAll() | invokeAll blocks until all tasks complete—direct barrier semantics |
| C# / .NET | Task.WhenAll() / Parallel.Invoke() | Parallel.Invoke executes all actions and waits for completion |
| Python | concurrent.futures.wait() | Executor.map or wait(futures) provides barrier synchronization |
| OpenMP | #pragma omp parallel sections | Direct descendant with explicit section markers |
| Rust | rayon::join() / rayon::scope() | Scoped parallelism with implicit barrier at scope exit |
| Cilk/Cilk++ | cilk_spawn / cilk_sync | Explicit spawn and sync directly map to parbegin/parend |
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// This is conceptually equivalent to:
	// parbegin
	//     task1()
	//     task2()
	//     task3()
	// parend

	var wg sync.WaitGroup
	wg.Add(3) // Three parallel branches

	go func() {
		defer wg.Done()
		fmt.Println("Task 1 executing")
	}()

	go func() {
		defer wg.Done()
		fmt.Println("Task 2 executing")
	}()

	go func() {
		defer wg.Done()
		fmt.Println("Task 3 executing")
	}()

	wg.Wait() // This is 'parend' - barrier synchronization
	fmt.Println("All tasks complete")
}
```
Despite being developed over 50 years ago, the core abstraction of parbegin/parend—structured parallel scope with barrier synchronization—remains the foundation of concurrent programming across virtually all modern languages. When you use WaitGroup in Go or Task.WhenAll in C#, you're using Dijkstra's notation in modern clothes.
We have explored the foundational notation that shapes how we express concurrent execution: the parbegin/parend syntax and its barrier semantics, the contrast with sequential composition, Bernstein's conditions for safe parallelization, nested composition, and the modern constructs that inherit these ideas.
What's Next:
Now that we understand the fundamental notation for expressing parallelism, we'll explore structured parallelism more deeply—examining how parbegin/parend relates to other parallel constructs and why the 'structured' aspect is crucial for building maintainable concurrent systems.
You now understand parallel block notation—its syntax, semantics, historical development, and modern manifestations. This foundational knowledge will inform everything we learn about structured concurrent programming.