In a Resource Allocation Graph, request edges are the arrows of desire—they represent processes reaching out for resources they need but cannot yet obtain. Every request edge tells a story: a process has reached a point in its execution where it cannot proceed without a specific resource, and it is now waiting, suspended, until that resource becomes available.
Understanding request edges is essential for deadlock analysis because they represent the forward pressure in the system. While assignment edges show what is currently held, request edges reveal what processes aspire to hold. When these aspirations form a cycle, the system may be heading toward—or already in—a deadlock.
This page explores the complete lifecycle of request edges: when they are created, what conditions maintain them, how they are resolved or withdrawn, and how they participate in deadlock detection algorithms.
By the end of this page, you will understand: the precise semantics of request edges, when and how they are created in the kernel, the distinction between pending and blocked requests, edge removal conditions, and how request edges drive cycle detection algorithms.
A request edge Pᵢ → Rⱼ in a Resource Allocation Graph carries precise semantic meaning:
Formal Definition:
A request edge (Pᵢ, Rⱼ) ∈ E indicates that process Pᵢ has issued a request for at least one instance of resource type Rⱼ, and this request has not yet been satisfied.
Semantic Implications:
- Process state — Pᵢ is typically in a BLOCKED or WAITING state, unable to proceed until Rⱼ is granted
- Request is outstanding — The request has been registered with the resource manager but not fulfilled
- Temporal persistence — The edge exists from the moment of request until either satisfaction or cancellation
- Priority may apply — Among multiple request edges to the same resource, ordering may be determined by arrival time, priority, or other policies
Request Edge vs. Claim Edge:
Some RAG variants include claim edges (Pᵢ ⇢ Rⱼ, drawn as dashed lines) that indicate a process may request a resource in the future. Claim edges are used in deadlock avoidance algorithms. Request edges, by contrast, represent active, outstanding requests.
| Property | Description | Example |
|---|---|---|
| Source | Always a process vertex Pᵢ | P₁ → R₂ |
| Target | Always a resource vertex Rⱼ | P₁ → R₂ |
| Multiplicity | One edge per requested instance | P₁ needs 2 instances → 2 edges |
| Lifetime | From request issue to satisfaction/cancellation | Microseconds to indefinite |
| State implication | Source process is blocked | P₁ cannot execute |
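To make these properties concrete, here is a minimal sketch of one way request edges might be tracked in code. It mirrors the `requested_resources` mapping (resource name to outstanding instance count) that the Python examples later on this page rely on; the `ProcessVertex` class and its method names are illustrative assumptions, not a fixed API.

```python
from collections import defaultdict

class ProcessVertex:
    """A process node; requested_resources maps resource name -> outstanding instance count."""

    def __init__(self, name: str):
        self.name = name
        self.requested_resources = defaultdict(int)  # outstanding request edges
        self.held_resources = defaultdict(int)       # assignment edges (for context)

    def add_request_edge(self, resource_name: str, instances: int = 1) -> None:
        """Record one request edge per unsatisfied instance (multiplicity rule above)."""
        self.requested_resources[resource_name] += instances

    def remove_request_edge(self, resource_name: str, instances: int = 1) -> None:
        """Drop edges when the request is granted, cancelled, or times out."""
        self.requested_resources[resource_name] -= instances
        if self.requested_resources[resource_name] <= 0:
            del self.requested_resources[resource_name]

    def is_blocked(self) -> bool:
        """A process with any outstanding request edge is waiting."""
        return bool(self.requested_resources)

# Example: P1 requests 2 instances of R2 -> two request edges, process is blocked
p1 = ProcessVertex("P1")
p1.add_request_edge("R2", instances=2)
print(dict(p1.requested_resources), p1.is_blocked())   # {'R2': 2} True
```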
Request edges are created when a process attempts to acquire a resource that is not currently available. The creation point is precisely defined in the resource acquisition control path.
Creation Trigger:
A request edge is added when a process issues an acquisition request for a resource, no instance of that resource is currently available, and the process must therefore join the resource's wait queue and block.
Atomicity Requirements:
The creation of a request edge must be atomic with the availability check, the insertion of the process into the resource's wait queue, and the transition of the process into its blocked state; in practice, all of these happen under the resource's lock.
If these operations are not atomic, race conditions can cause the RAG to be inconsistent with actual system state.
```c
/**
 * Request Edge Creation in Resource Acquisition Path
 */
int resource_acquire(struct resource *res, struct task_struct *task)
{
    unsigned long flags;

    spin_lock_irqsave(&res->lock, flags);

    /* Fast path: resource available */
    if (res->available > 0) {
        res->available--;
        /* Add assignment edge instead */
        rag_add_assignment_edge(res, task);
        spin_unlock_irqrestore(&res->lock, flags);
        return 0;
    }

    /* Slow path: must wait - create request edge */

    /* Step 1: Add to wait queue */
    add_to_wait_queue(&res->waiters, task);

    /* Step 2: Create request edge in RAG (atomically) */
    rag_add_request_edge(task, res);

    /* Step 3: Optional - check for deadlock NOW */
    if (rag_would_cause_deadlock(task, res)) {
        /* Remove from wait queue and RAG */
        remove_from_wait_queue(&res->waiters, task);
        rag_remove_request_edge(task, res);
        spin_unlock_irqrestore(&res->lock, flags);
        return -EDEADLK;  /* Would deadlock! */
    }

    /* Step 4: Block the process */
    set_current_state(TASK_UNINTERRUPTIBLE);
    spin_unlock_irqrestore(&res->lock, flags);

    schedule();  /* Context switch away */

    /* When we resume, resource has been granted */
    return 0;
}
```

Between adding the request edge and actually blocking, the system must ensure no wakeups are lost. If the resource becomes available in this window, the process must still be informed. The state-before-check pattern from blocking synchronization applies here as well.
Not all resource requests result in immediate blocking. Understanding the distinction between pending and blocked requests clarifies when request edges truly exist.
Immediate Satisfaction (No Request Edge):
When a process requests a resource and the resource is immediately available (available > 0), the instance count is decremented, an assignment edge is added, and the process continues running. No request edge is ever created.
Blocked Request (Request Edge Created):
When a process requests a resource that is not available (available == 0), the process is placed on the resource's wait queue, a request edge is added to the RAG, and the process blocks until the request is satisfied or cancelled.
Non-Blocking Requests (Try-Lock Pattern):
Some acquisition primitives are non-blocking (e.g., trylock(), trywait()): if the resource is unavailable, they return failure immediately instead of waiting, so they never create request edges.
Since the process never waits, no request edge is added to the RAG. This is why non-blocking primitives cannot contribute to deadlock—they never create the waiting relationships that form cycles.
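As a rough sketch (assuming the same graph interface as the Python examples later on this page, plus an `available` counter on each resource), a non-blocking acquisition has only two outcomes: an assignment edge or an immediate failure. No request edge is created on either path.

```python
def resource_try_acquire(rag, process_name: str, resource_name: str) -> bool:
    """Non-blocking acquisition sketch: never creates a request edge."""
    resource = rag.resources[resource_name]

    if resource.available > 0:
        # Fast path only: consume an instance and record an assignment edge
        resource.available -= 1
        rag.add_assignment_edge(resource_name, process_name)
        return True

    # Resource busy: return immediately. The caller keeps running, so no
    # waiting relationship (request edge) is ever added to the graph.
    return False
```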
Request edges are removed under several conditions. Each removal scenario has distinct semantics and implications for the RAG.
Removal Scenario 1: Request Satisfied
The most common case—the resource becomes available and is granted to the waiting process: the request edge is removed, an assignment edge takes its place, and the process is woken to resume execution.
Removal Scenario 2: Request Cancelled/Interrupted
The waiting process is interrupted (e.g., by a signal) before receiving the resource: the request edge is deleted, the process is removed from the wait queue, and the acquisition call returns an error such as -EINTR.
Removal Scenario 3: Timeout Expired
Timed requests that exceed their deadline: the request edge is deleted, the process is removed from the wait queue, and the call returns -ETIMEDOUT.
```c
/**
 * Request Edge Removal Scenarios
 */

/* Scenario 1: Request satisfied - called by resource releaser */
void resource_grant_to_waiter(struct resource *res)
{
    struct task_struct *waiter;

    spin_lock(&res->lock);

    /* Get first waiter from queue */
    waiter = dequeue_first_waiter(&res->waiters);
    if (!waiter) {
        res->available++;  /* No waiters, just increment */
        spin_unlock(&res->lock);
        return;
    }

    /* Transition: Request edge → Assignment edge */
    rag_remove_request_edge(waiter, res);
    rag_add_assignment_edge(res, waiter);

    /* Wake the waiter */
    wake_up_process(waiter);

    spin_unlock(&res->lock);
}

/* Scenario 2: Request cancelled due to signal */
int resource_acquire_interruptible(struct resource *res)
{
    /* ... setup and add request edge ... */

    while (!resource_available(res)) {
        set_current_state(TASK_INTERRUPTIBLE);
        spin_unlock(&res->lock);

        schedule();

        spin_lock(&res->lock);

        /* Check if we were signaled */
        if (signal_pending(current)) {
            /* Remove request edge - we're giving up */
            rag_remove_request_edge(current, res);
            remove_from_wait_queue(&res->waiters, current);
            spin_unlock(&res->lock);
            return -EINTR;
        }
    }

    /* Request satisfied - edge already transitioned by the releaser */
    spin_unlock(&res->lock);
    return 0;
}

/* Scenario 3: Timeout */
int resource_acquire_timeout(struct resource *res, long timeout_jiffies)
{
    long remaining = timeout_jiffies;

    /* ... setup and add request edge ... */

    while (!resource_available(res) && remaining > 0) {
        set_current_state(TASK_UNINTERRUPTIBLE);
        spin_unlock(&res->lock);

        remaining = schedule_timeout(remaining);

        spin_lock(&res->lock);
    }

    if (remaining <= 0 && !resource_available(res)) {
        /* Timeout expired - remove request edge */
        rag_remove_request_edge(current, res);
        remove_from_wait_queue(&res->waiters, current);
        spin_unlock(&res->lock);
        return -ETIMEDOUT;
    }

    /* Success */
    spin_unlock(&res->lock);
    return 0;
}
```

| Scenario | RAG Change | Process State | Return Value |
|---|---|---|---|
| Grant | Request → Assignment edge | BLOCKED → RUNNABLE | Success (0) |
| Signal interrupt | Request edge deleted | BLOCKED → RUNNABLE | -EINTR |
| Timeout | Request edge deleted | BLOCKED → RUNNABLE | -ETIMEDOUT |
| Process killed | Request edge deleted | Terminated | N/A |
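The table above can also be expressed against the graph model used later on this page. The sketch below assumes a `remove_request_edge` helper that mirrors the `add_request_edge` call from the page's own examples; the kernel-style return codes come from Python's standard `errno` module.

```python
import errno

def resolve_request(rag, process_name: str, resource_name: str, outcome: str) -> int:
    """Sketch of how each removal scenario changes the graph."""
    # Every scenario starts the same way: the outstanding request edge disappears.
    rag.remove_request_edge(process_name, resource_name)   # assumed helper

    if outcome == "grant":
        # Request edge -> assignment edge: the process now owns an instance.
        rag.add_assignment_edge(resource_name, process_name)
        return 0
    if outcome == "signal":
        return -errno.EINTR        # wait interrupted; nothing is owned
    if outcome == "timeout":
        return -errno.ETIMEDOUT    # deadline passed; nothing is owned
    raise ValueError(f"unknown outcome: {outcome}")
```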
A single process may have multiple request edges, and understanding these patterns is critical for deadlock analysis.
Case 1: Multiple Resources Requested
A process may need several different resources simultaneously, leaving multiple request edges outstanding at once (for example, P₁ → R₁ and P₁ → R₂).
This occurs when a process issues requests before any are satisfied, or when using constructs like WaitForMultipleObjects() on Windows.
Case 2: Multiple Instances of Same Resource
A process may need multiple instances of the same resource type; each unsatisfied instance contributes its own request edge.
Visually, if P₁ requests two instances, two arrows connect P₁ to the same resource rectangle.
Case 3: Sequential vs Simultaneous
Sequential requests don't create multiple edges simultaneously—each previous request is satisfied (converted to assignment) before the next request is made.
Simultaneous (atomic) requests for multiple resources may create multiple request edges at once.
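A hedged sketch of the two styles, using the same graph interface as the example below (the `available` counter and the `remove_request_edge` helper are assumptions for illustration): the simultaneous path may leave several request edges outstanding at once, while the sequential path never has more than one.

```python
def request_simultaneously(rag, process_name: str, resource_names: list) -> None:
    """Atomic-style request: every unavailable resource gets a request edge
    in the same step, so several request edges may coexist (Case 1 above)."""
    for resource_name in resource_names:
        resource = rag.resources[resource_name]
        if resource.available > 0:
            resource.available -= 1
            rag.add_assignment_edge(resource_name, process_name)
        else:
            rag.add_request_edge(process_name, resource_name)

def request_sequentially(rag, process_name: str, resource_names: list) -> None:
    """Sequential style: each request is satisfied (converted to an assignment
    edge) before the next one is issued, so at most one request edge exists."""
    for resource_name in resource_names:
        rag.add_request_edge(process_name, resource_name)
        # ... in a real system the caller would block here until granted ...
        rag.remove_request_edge(process_name, resource_name)   # assumed helper
        rag.add_assignment_edge(resource_name, process_name)
```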
"""Multiple Request Edge Patterns""" def analyze_process_requests(rag, process_name: str) -> dict: """ Analyze all outstanding requests from a process. Returns details about the request edge pattern. """ process = rag.processes.get(process_name) if not process: raise ValueError(f"Unknown process: {process_name}") requests = process.requested_resources # Count distinct resources distinct_resources = len([r for r, count in requests.items() if count > 0]) # Count total request edges total_edges = sum(count for count in requests.values()) # Count multi-instance requests multi_instance = [(r, c) for r, c in requests.items() if c > 1] return { "process": process_name, "distinct_resources_requested": distinct_resources, "total_request_edges": total_edges, "resources": dict(requests), "multi_instance_requests": multi_instance, "is_blocked": process.is_blocked(), "pattern": classify_request_pattern(distinct_resources, multi_instance) } def classify_request_pattern(distinct: int, multi: list) -> str: """Classify the request pattern.""" if distinct == 0: return "NO_REQUESTS" elif distinct == 1 and not multi: return "SINGLE_RESOURCE_SINGLE_INSTANCE" elif distinct == 1 and multi: return "SINGLE_RESOURCE_MULTI_INSTANCE" elif distinct > 1 and not multi: return "MULTI_RESOURCE_SINGLE_INSTANCE_EACH" else: return "COMPLEX_MULTI_RESOURCE_MULTI_INSTANCE" # Example: Deadlock involving multiple request edgesdef example_multiple_requests(): rag = ResourceAllocationGraph() # Setup rag.add_process("P1") rag.add_process("P2") rag.add_resource("R1", instances=1) rag.add_resource("R2", instances=1) # P1 holds R1, wants R2 rag.add_assignment_edge("R1", "P1") rag.add_request_edge("P1", "R2") # P2 holds R2, wants R1 rag.add_assignment_edge("R2", "P2") rag.add_request_edge("P2", "R1") # Analyze each process print(analyze_process_requests(rag, "P1")) print(analyze_process_requests(rag, "P2")) # Both have single request edges, forming a cycle → DEADLOCKRequest edges are the critical component in deadlock detection because they represent the dependencies that can form cycles.
The Cycle Detection Insight:
A cycle in a RAG indicates potential deadlock. But how does the cycle traverse both edge types?
Every cycle alternates between the two: a request edge leads from a blocked process to the resource it wants, and an assignment edge leads from that resource to the process currently holding it. This alternating pattern (request → assignment → request → ...) is inherent to the bipartite structure.
Wait-For Graph Derivation:
For single-instance resources, we can derive a simpler wait-for graph containing only processes: an edge Pᵢ → Pⱼ exists if and only if Pᵢ has a request edge to some resource Rₖ that is currently assigned to Pⱼ.
This condenses the two-edge hops into single edges, making cycle detection more direct.
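The derivation is mechanical, as the sketch below illustrates: for every request edge, look up who currently holds the requested resource and collapse the two hops into a single process-to-process edge. It reuses the `requested_resources` and `allocated_instances` attributes that the detection code below also relies on.

```python
def build_wait_for_graph(rag) -> dict:
    """
    Collapse request edge (P_i -> R_k) plus assignment edge (R_k -> P_j)
    into a single wait-for edge P_i -> P_j (single-instance resources).
    """
    wait_for = {name: set() for name in rag.processes}

    for process_name, process in rag.processes.items():
        for resource_name in process.requested_resources:     # request edges
            resource = rag.resources[resource_name]
            for holder in resource.allocated_instances:       # assignment edges
                if holder != process_name:
                    wait_for[process_name].add(holder)

    return wait_for

# Example: the two-process deadlock from earlier collapses to
#   {"P1": {"P2"}, "P2": {"P1"}}  -- a direct cycle between processes.
```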
```python
def detect_cycle_via_request_edges(rag) -> list:
    """
    Detect cycles in the RAG by following request and assignment edges.
    Returns the list of processes involved in the cycle, or an empty list.
    """
    visited = set()
    rec_stack = set()
    path = []
    cycle = []

    def get_blocking_processes(process_name: str) -> list:
        """Get processes blocking this process via held resources."""
        blocking = []
        process = rag.processes[process_name]
        for resource_name in process.requested_resources:
            # Who holds this resource?
            resource = rag.resources[resource_name]
            for holder in resource.allocated_instances:
                if holder != process_name:
                    blocking.append(holder)
        return blocking

    def dfs(process: str) -> bool:
        visited.add(process)
        rec_stack.add(process)
        path.append(process)

        for blocking_process in get_blocking_processes(process):
            if blocking_process not in visited:
                if dfs(blocking_process):
                    return True
            elif blocking_process in rec_stack:
                # Found cycle - extract just the cyclic portion of the path
                cycle_start = path.index(blocking_process)
                cycle.extend(path[cycle_start:])
                return True

        path.pop()
        rec_stack.remove(process)
        return False

    for process in rag.processes:
        if process not in visited:
            if dfs(process):
                return cycle

    return []  # No cycle found
```

In deadlock detection, request edges are where you start. Each request edge represents a dependency: 'I need what someone else has.' Following these dependencies through the assignment edges reveals the circular wait pattern that defines deadlock.
You now understand request edge semantics, lifecycle, and their central role in deadlock detection. Next, we'll explore assignment edges—the 'have' side of resource relationships.