As parallel block notation spread through academic literature and programming language design, variations emerged. While Dijkstra's original parbegin/parend notation became widely known, an equally important alternative appeared: cobegin/coend.
The 'co' prefix stands for 'concurrent' or 'cooperative,' emphasizing that enclosed statements execute together as cooperating units. Despite the different names, cobegin/coend expresses precisely the same semantics as parbegin/parend—it's syntactic variation, not semantic difference.
Understanding both notations is essential because academic papers, textbooks, and different language designs use them interchangeably. You'll encounter both, and recognizing their equivalence demonstrates mastery of the underlying concepts.
By the end of this page, you will understand the cobegin/coend notation, its historical origins, its semantic equivalence to parbegin/parend, syntactic variations in the literature, and when you might encounter each notation in academic and professional contexts.
The cobegin/coend notation emerged in the early 1970s, roughly contemporaneous with parbegin/parend. Its adoption was driven by several factors:
1. Emphasis on Cooperation:
The prefix 'co-' (from Latin 'com-' meaning 'together with') emphasized that concurrent statements cooperate to achieve a common goal. This framing was particularly influential in the context of cooperating sequential processes, where individual processes work together while remaining largely independent.
The terminology aligned with Dijkstra's own phrasing in 'Cooperating Sequential Processes' and reinforced the idea that parallelism should be viewed as cooperation rather than competition.
2. Linguistic Consistency:
In languages where sequential blocks were delimited by begin/end, the parallel equivalent cobegin/coend created a natural linguistic parallel:
- begin/end for sequential composition
- cobegin/coend for concurrent composition

This symmetry made the notation more intuitive for developers familiar with Algol-style languages.
3. Textbook Adoption:
Several influential operating systems textbooks adopted cobegin/coend, cementing its place in computer science education. Notably, Ben-Ari's 'Principles of Concurrent and Distributed Programming' used this notation extensively, as did Silberschatz, Galvin, and Gagne's 'Operating System Concepts' in discussions of process synchronization.
| Notation | Key Sources | Context |
|---|---|---|
| parbegin/parend | Dijkstra's original works, some European academic papers | Theoretical foundations, formal verification |
| cobegin/coend | American textbooks, many OS courses, Concurrent Pascal | Education, practical language implementations |
| par { }/ || | Occam, modern parallel languages | Language implementations with block syntax |
The choice between parbegin/parend and cobegin/coend is largely stylistic. Academic papers from European institutions often favor parbegin/parend, while American textbooks often use cobegin/coend. Neither is 'more correct'—they express identical semantics.
The cobegin/coend construct has precisely the same semantics as parbegin/parend. The only difference is the choice of keywords. Let's examine the notation formally:
```
// Basic syntax
cobegin
    S1;   // Statement 1
    S2;   // Statement 2
    ...
    Sn;   // Statement n
coend

// Semantics (identical to parbegin/parend):
// 1. All statements S1, S2, ..., Sn begin execution concurrently
// 2. The cobegin block completes when ALL statements complete
// 3. Execution proceeds past coend only after all Si terminate

// This is EXACTLY equivalent to:
parbegin
    S1;
    S2;
    ...
    Sn;
parend
```

Formal Equivalence:
We can formally state the equivalence:
cobegin S1; S2; ...; Sn coend ≡ parbegin S1; S2; ...; Sn parend
Where ≡ denotes semantic equivalence: for any initial state σ, both constructs produce the same set of possible final states (accounting for non-deterministic interleaving).
The operational semantics follow identical rules: both constructs fork all branches, interleave their steps non-deterministically, and join at the closing keyword.
In any formal system, you can freely substitute parbegin/parend for cobegin/coend or vice versa. If you're reading a paper that uses one notation but you're more comfortable with the other, simply perform a mental substitution—the meaning is identical.
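These shared semantics can be sketched in executable form. The following Python helper (the name `cobegin` and the use of threads are illustrative, not a standard API) starts every statement concurrently and returns only after all have finished, mirroring the implicit barrier at coend/parend:

```python
import threading

def cobegin(*statements):
    """Run all statements concurrently; return only when ALL complete.

    Mirrors cobegin S1; ...; Sn coend (equivalently parbegin/parend):
    every branch starts before any join, and the call acts as a
    barrier -- control passes this point only after all branches end.
    """
    threads = [threading.Thread(target=s) for s in statements]
    for t in threads:
        t.start()   # all branches begin execution concurrently
    for t in threads:
        t.join()    # implicit barrier: wait for every branch

# Usage: two branches write into a shared dict; both writes are
# guaranteed visible once cobegin returns.
results = {}
cobegin(
    lambda: results.__setitem__("a", 1),
    lambda: results.__setitem__("b", 2),
)
print(results == {"a": 1, "b": 2})  # True: both branches completed
```

Renaming the helper to `parbegin` would change nothing about its behavior, which is the equivalence in miniature.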
Beyond the basic cobegin/coend and parbegin/parend forms, academic literature and programming languages have introduced numerous syntactic variations. Understanding these variations helps you recognize parallel block notation across different contexts.
| Notation | Example | Source/Context |
|---|---|---|
| parbegin/parend | parbegin S1; S2; parend | Dijkstra's original works |
| cobegin/coend | cobegin S1; S2; coend | American OS textbooks |
| par { } | par { S1 || S2 } | Occam, some modern languages |
| || (double bar) | S1 || S2 || S3 | Process algebras (CCS, CSP) |
| & (ampersand) | S1 & S2 & S3 | Some functional languages |
| parallel | parallel { S1; S2; } | Some high-level languages |
| spawn/sync | spawn S1; spawn S2; sync; | Cilk and derivatives |
| async/await all | await all { async S1; async S2; } | Modern async frameworks |
```
// All of these express the same parallel computation:

// Notation 1: parbegin/parend (Dijkstra)
parbegin
    x := compute_a();
    y := compute_b();
parend

// Notation 2: cobegin/coend (Textbooks)
cobegin
    x := compute_a();
    y := compute_b();
coend

// Notation 3: Process algebra style (CCS/CSP)
(x := compute_a()) || (y := compute_b())

// Notation 4: Occam-style
PAR
    x := compute_a()
    y := compute_b()

// Notation 5: Modern async/structured concurrency
async with scope:
    x := await compute_a()
    y := await compute_b()
# Implicit barrier at scope exit
```

The Parallel Composition Operator (||):
The double-bar operator || deserves special attention. In process algebras like CSP (Communicating Sequential Processes) and CCS (Calculus of Communicating Systems), the parallel composition operator is a fundamental primitive:
P || Q
This reads as 'process P in parallel with process Q.' The key difference from parbegin/parend is that || is a binary operator, composing exactly two processes. To express three or more parallel processes:
(P || Q) || R or P || (Q || R)
The associativity of || means parentheses don't matter for the set of behaviors, but the notation is inherently binary. parbegin/parend, in contrast, directly supports arbitrary arity (any number of statements).
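The binary-versus-n-ary distinction can be sketched directly. Here `par2` is a hypothetical binary composition in the style of P || Q: composing three branches requires nesting, but by associativity either nesting produces the same joint behavior:

```python
import threading

def par2(p, q):
    """Binary parallel composition, like P || Q in a process algebra.
    Returns a zero-argument callable so compositions can be nested."""
    def composed():
        tp = threading.Thread(target=p)
        tq = threading.Thread(target=q)
        tp.start()
        tq.start()
        tp.join()   # barrier: both sub-processes must finish
        tq.join()
    return composed

done = []
P = lambda: done.append("P")
Q = lambda: done.append("Q")
R = lambda: done.append("R")

# (P || Q) || R  versus  P || (Q || R): different nestings, same behavior.
par2(par2(P, Q), R)()
left_assoc = sorted(done)
done.clear()
par2(P, par2(Q, R))()
right_assoc = sorted(done)
print(left_assoc == right_assoc == ["P", "Q", "R"])  # True
```

An n-ary block like cobegin would correspond to a variadic version of this helper, with no nesting required.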
The choice of notation often reveals the author's perspective. Block notation (parbegin/parend, cobegin/coend) emphasizes structured programming and scope. Operator notation (||, &) emphasizes algebraic composition and formal reasoning. Both perspectives are valuable; modern concurrency benefits from both.
Academic papers and textbooks commonly use cobegin/coend (or parbegin/parend) in pseudocode to describe concurrent algorithms. Understanding the conventions helps you read and write such pseudocode effectively.
Common Conventions:
Semicolons or Separators: Statements within cobegin/coend are typically separated by semicolons or placed on separate lines
Numbering: For reference in proofs, statements may be numbered:
cobegin
S1: x := 1;
S2: y := 2;
S3: z := x + y;
coend
Process Labels: When modeling processes, labels identify each branch:
cobegin
process Producer: produce();
process Consumer: consume();
coend
Implicit Infinite Loops: In process descriptions, each branch often runs forever:
cobegin
Producer: while true do produce() end;
Consumer: while true do consume() end;
coend // Never reached in this case
```
// Example 1: Producer-Consumer (Classic Academic Style)
shared buffer: Queue;
shared count: Integer := 0;
shared mutex: Semaphore := 1;

cobegin
    Producer: while true do
        produce_item(item);
        P(mutex);
        buffer.enqueue(item);
        count := count + 1;
        V(mutex);
    end;

    Consumer: while true do
        P(mutex);
        if count > 0 then
            item := buffer.dequeue();
            count := count - 1;
            V(mutex);
            consume_item(item);
        else
            V(mutex);
        end;
    end;
coend

// Example 2: Parallel Summation (Finite Computation)
function parallel_sum(A: array[1..n] of Integer): Integer
    local left_sum, right_sum: Integer;
    cobegin
        left_sum := sum(A[1..n/2]);
        right_sum := sum(A[n/2+1..n]);
    coend
    return left_sum + right_sum;
end function;
```

Academic pseudocode often assumes statement-level atomicity for clarity. In real systems, composite operations like 'buffer.enqueue(item)' are NOT atomic. Always verify atomicity assumptions when translating pseudocode to real implementations.
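To see the atomicity caveat in practice, here is a runnable Python translation of the producer-consumer pseudocode, bounded to a fixed number of items so it terminates. A `threading.Lock` stands in for the binary semaphore `mutex`, with `acquire`/`release` (via `with`) playing the roles of P and V:

```python
import threading
from collections import deque

buffer = deque()
count = 0
mutex = threading.Lock()   # binary semaphore: acquire() is P, release() is V
N_ITEMS = 100
consumed = []

def producer():
    global count
    for i in range(N_ITEMS):
        item = i                 # produce_item(item)
        with mutex:              # P(mutex) ... V(mutex)
            buffer.append(item)  # buffer.enqueue(item)
            count += 1

def consumer():
    global count
    while len(consumed) < N_ITEMS:
        item = None
        with mutex:              # P(mutex) ... V(mutex)
            if count > 0:
                item = buffer.popleft()  # buffer.dequeue()
                count -= 1
        if item is not None:
            consumed.append(item)        # consume_item(item)

# cobegin Producer; Consumer coend
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()   # coend: barrier, both branches done
t2.join()

print(consumed == list(range(N_ITEMS)))  # True: every item consumed, in order
```

Note that the lock is held across the enqueue and the counter update together; dropping it would reintroduce exactly the non-atomicity the warning above describes.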
While cobegin/coend remains primarily a notation for academic discourse, some real programming languages have implemented direct equivalents. Let's examine notable examples:
1. Concurrent Pascal (Brinch Hansen, 1975):
Per Brinch Hansen's Concurrent Pascal was one of the first languages to implement structured parallelism directly. The language included:
cobegin
process1;
process2;
process3;
coend
Concurrent Pascal enforced structured concurrency—processes could only be created within cobegin blocks, and the program waited for all processes to complete.
2. Occam (1983):
Designed for the Transputer architecture, Occam included the PAR construct:
PAR
channel1 ! data
channel2 ? result
compute(x)
This is semantically equivalent to cobegin/coend, with simultaneous start and barrier synchronization at the end. Occam also distinguished between PAR (for parallel composition) and SEQ (for sequential composition), making the choice between parallelism and sequence explicit.
```
-- Occam's PAR construct
-- Semantically equivalent to cobegin/coend

PAR                -- All three processes run in parallel
  SEQ
    x := compute.a()
    channel ! x
  SEQ
    y := compute.b()
    channel ! y
  SEQ
    z := compute.c()
    channel ! z
-- Implicit barrier: all three complete here
```

3. Chapel (Cray, 2004-present):
The Chapel language, developed at Cray for high-performance computing, includes a direct cobegin statement:
cobegin {
on Locales[0] do compute_partition(0);
on Locales[1] do compute_partition(1);
}
Chapel's cobegin supports distributed computing—different branches can run on different nodes of a supercomputer while maintaining the same structured semantics.
4. ALGOL 68:
ALGOL 68 included parallel clauses:
PAR BEGIN
task1;
task2;
task3
END
This influenced many subsequent language designs and demonstrated that parallel block notation could be integrated into a full programming language.
While explicit cobegin/coend keywords are rare in mainstream languages, the concept lives on in constructs like Go's goroutines with WaitGroup, Java's ForkJoinPool.invokeAll, and Kotlin's coroutineScope. The syntax differs, but the semantics—structured parallel execution with barrier synchronization—remain.
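Python's standard library shows the same survival of the idea. A `ThreadPoolExecutor` used as a context manager behaves like a cobegin/coend block: exiting the `with` block waits for every submitted task, acting as the coend barrier. A minimal sketch, with `compute_a`/`compute_b` as stand-in workloads:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_a():
    return sum(range(1000))   # illustrative workload

def compute_b():
    return sum(range(2000))

# "cobegin": both tasks start concurrently inside the block;
# "coend": the with-block exit waits for all submitted tasks,
# so both results are guaranteed available afterwards.
with ThreadPoolExecutor() as pool:
    fa = pool.submit(compute_a)
    fb = pool.submit(compute_b)

x, y = fa.result(), fb.result()
print(x == 499500 and y == 1999000)  # True
```

The structured guarantee is the same one cobegin/coend provides: no task outlives the block that created it.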
A subtle but important aspect of cobegin/coend notation is how individual statements (parallel branches) are delimited. This varies across notations and has implications for compound statements.
Semicolon Separation:
In most uses of cobegin/coend, statements are separated by semicolons:
cobegin
S1;
S2;
S3;
coend
Each semicolon-terminated statement becomes a parallel branch. This raises the question: what if a branch needs multiple sequential statements?
```
// Problem: Multiple statements in a branch
cobegin
    x := read();    // Branch 1?
    process(x);     // Branch 2? Or part of Branch 1?
    y := read();    // Branch 3?
    process(y);     // Branch 4? Or part of Branch 3?
coend

// Solution 1: begin/end blocks
cobegin
    begin  // Branch 1: compound statement
        x := read();
        process(x);
    end;
    begin  // Branch 2: compound statement
        y := read();
        process(y);
    end;
coend

// Solution 2: Explicit branch labels (some notations)
cobegin
    branch1: x := read(); process(x);
    branch2: y := read(); process(y);
coend

// Solution 3: Parallel-only at top level (implicit sequential within)
// Each line is a branch; statements on the same line execute left-to-right
cobegin
    { x := read(); process(x); }   // Branch 1
    { y := read(); process(y); }   // Branch 2
coend
```

The begin/end Convention:
The most common convention uses nested begin/end blocks to create compound branches:
cobegin
begin
// Multiple statements for Branch 1
allocate_resource();
use_resource();
release_resource();
end;
begin
// Multiple statements for Branch 2
allocate_other_resource();
use_other_resource();
release_other_resource();
end;
coend
This leverages the existing sequential composition notation (begin/end) within the parallel block. Each begin/end block constitutes a single parallel branch that happens to contain multiple sequential statements internally.
Modern languages with parallel constructs typically use braces or indentation to delimit branches, avoiding the ambiguity of semicolon separation. Each brace-enclosed or indented block is unambiguously one parallel branch.
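In such languages the begin/end convention maps naturally onto functions: each compound branch becomes one function whose body is the sequential statements, and the parallel block runs one thread per function. A sketch (all names illustrative; the shared `log` records execution order):

```python
import threading

log = []
lock = threading.Lock()

def branch_1():
    # begin ... end: three sequential statements form ONE parallel branch
    with lock:
        log.append("allocate_resource")
    with lock:
        log.append("use_resource")
    with lock:
        log.append("release_resource")

def branch_2():
    with lock:
        log.append("allocate_other_resource")
    with lock:
        log.append("use_other_resource")
    with lock:
        log.append("release_other_resource")

# cobegin branch_1; branch_2 coend
threads = [threading.Thread(target=b) for b in (branch_1, branch_2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Within each branch the order is sequential, even though the two
# branches may interleave arbitrarily with each other.
idx = [log.index(s) for s in
       ("allocate_resource", "use_resource", "release_resource")]
print(idx == sorted(idx))  # True: branch 1's statements kept their order
```

The function boundary is doing exactly the job of begin/end: grouping statements so the parallel construct sees one branch, not three.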
One of the primary reasons cobegin/coend notation persists in academic literature is its suitability for formal verification. The structured nature of the construct enables rigorous reasoning about concurrent programs.
Hoare Logic for Parallel Composition:
In sequential Hoare logic, we reason about programs using triples {P} S {Q} meaning 'if precondition P holds before statement S, then postcondition Q holds after.' For cobegin/coend, we need a parallel composition rule.
Owicki-Gries Method (1976):
Susan Owicki and David Gries developed a proof method for parallel programs that extends Hoare logic:
{P1} S1 {Q1} {P2} S2 {Q2} ... {Pn} Sn {Qn}
────────────────────────────────────────────────────
{P1 ∧ P2 ∧ ... ∧ Pn} cobegin S1; S2; ...; Sn coend {Q1 ∧ Q2 ∧ ... ∧ Qn}
This rule says: if each Si transforms Pi to Qi, then the parallel composition transforms the conjunction of preconditions to the conjunction of postconditions.
But there's a crucial caveat: the individual proofs must be interference-free—no statement in one branch may invalidate the proof of another branch.
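This interference caveat can be demonstrated deterministically by breaking each `x := x + 1` into its separate read and write steps. The snippet below is a simulation (no real threads), so the bad schedule is forced rather than left to chance:

```python
# Simulate the non-atomic interleaving of two x := x + 1 branches:
# each branch reads x into a local temporary, then writes back tmp + 1.
x = 0

tmp_s1 = x          # S1 reads x  (= 0)
tmp_s2 = x          # S2 reads x  (= 0) -- before S1 has written!
x = tmp_s1 + 1      # S1 writes 1
x = tmp_s2 + 1      # S2 writes 1, overwriting S1's update
lost_result = x     # 1, not the intended 2: a lost update

# With an atomic schedule (each read-modify-write completes before
# the other begins), either ordering of S1 and S2 yields 2:
x = 0
tmp_s1 = x
x = tmp_s1 + 1      # S1 runs to completion
tmp_s2 = x
x = tmp_s2 + 1      # then S2
atomic_result = x   # 2

print(lost_result, atomic_result)  # 1 2
```

The interference-freedom check in the Owicki-Gries method is precisely what rules out proofs that silently assume the second, atomic schedule.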
```
// Example: Interference-Free Parallel Program
// We want to prove: {x = 0} cobegin S1; S2 coend {x = 2}

// S1: x := x + 1
// S2: x := x + 1

// Individual proofs:
// {x = 0 ∨ x = 1} x := x + 1 {x = 1 ∨ x = 2}   (for S1)
// {x = 0 ∨ x = 1} x := x + 1 {x = 1 ∨ x = 2}   (for S2)

// Interference check:
// After S1 assigns x := x + 1, does S2's precondition still hold?
// If x was 0, now x = 1. Can S2 execute with x = 1? Yes.
// If x was 1, now x = 2. Can S2 execute with x = 2? We need x = 0 ∨ x = 1...
//
// This reveals the need for weaker preconditions or synchronization!

// Correct proof requires considering all interleavings:
// Interleaving 1: x := x + 1 (S1 reads 0, writes 1)
//                 x := x + 1 (S2 reads 1, writes 2) → x = 2 ✓
// Interleaving 2: x := x + 1 (S2 reads 0, writes 1)
//                 x := x + 1 (S1 reads 1, writes 2) → x = 2 ✓
// BUT if reads and writes interleave (non-atomic):
// S1 reads x (= 0), S2 reads x (= 0), S1 writes 1, S2 writes 1 → x = 1 ✗
```

Why Structured Parallelism Helps Verification:
Bounded Scope: The set of concurrently executing statements is statically visible (everything between cobegin and coend)
Clear Boundaries: We know exactly where parallel execution starts and ends
Compositionality: If we verify each branch independently and check interference-freedom, we can conclude correctness of the whole
No Dynamic Creation: Unlike thread pools or spawning loops, the parallelism is static and finite
Termination: If each branch terminates, the construct terminates—no need to analyze thread lifecycles
Model checkers like SPIN and UPPAAL work well with cobegin/coend-style parallelism because the state space is bounded. The number of parallel branches is fixed, and the tool can enumerate all possible interleavings. Unstructured parallelism with dynamic thread creation creates unbounded state spaces that are harder to verify.
We've explored the cobegin/coend notation—its origins, its equivalence to parbegin/parend, its variations, and its role in both practical languages and formal verification.
What's Next:
With both notations understood, we'll examine synchronization points in detail—how parend/coend acts as an implicit barrier, and how explicit synchronization can be introduced within parallel blocks for finer-grained coordination.
You now understand the cobegin/coend notation and its relationship to parbegin/parend. You can read academic papers and textbooks using either notation, recognize syntactic variations, and appreciate why this structured approach enables formal reasoning about concurrent programs.