A stochastic state evolution process models systems that transition between discrete states according to probabilistic rules. The defining characteristic of this process is the memoryless property (also known as the Markov property): the probability of transitioning to any future state depends solely on the current state, not on the sequence of states that preceded it.
This mathematical framework is represented by a transition probability matrix P of dimensions n × n, where n is the total number of possible states. Each element P[i][j] is the probability of moving from state i to state j. Since these are probabilities, each row must form a valid probability distribution: every entry is nonnegative, and the entries of each row sum to 1 (Σⱼ P[i][j] = 1 for every state i).
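The row condition can be checked mechanically. As a small sketch (the helper name `is_row_stochastic` is illustrative, not part of the problem), one might validate a matrix like this:

```python
import numpy as np

def is_row_stochastic(P, tol=1e-9):
    """Return True if every row of P is a valid probability distribution:
    all entries nonnegative and each row summing to 1 (within tolerance)."""
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))
```

Floating-point row sums rarely equal 1.0 exactly, hence the tolerance.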
The Evolution Process:
Given an initial state s₀, the system evolves through time by repeating one step per time unit: look up the row of P corresponding to the current state, sample the next state from the probability distribution in that row, and move to the sampled state.
The output is the complete state trajectory: a sequence of state indices from the initial state through all subsequent states visited during the simulation.
Your Task:
Implement a function that simulates this stochastic process. Given a transition probability matrix, an initial state index, and a number of steps to simulate, generate and return the complete sequence of states visited (including the initial state).
Important: For deterministic testing, use numpy's random number generator. The test cases are generated with a fixed seed, so your implementation should produce consistent results when using the same seed.
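A minimal sketch of such a simulator is shown below. The function name is illustrative (the problem does not fix a signature), and it uses numpy's global generator via `np.random.choice`; whether the tests seed the global generator with `np.random.seed` or use a `Generator` object is not specified here, so exact reproduction of seeded test outputs depends on that detail.

```python
import numpy as np

def simulate_markov_chain(transition_matrix, initial_state, num_steps):
    """Simulate the chain and return the full trajectory,
    including the initial state (length = num_steps + 1)."""
    n = transition_matrix.shape[0]
    trajectory = [initial_state]
    state = initial_state
    for _ in range(num_steps):
        # Row `state` of the matrix is the distribution over next states.
        state = int(np.random.choice(n, p=transition_matrix[state]))
        trajectory.append(state)
    return trajectory
```

Note that `np.random.choice(n, p=row)` samples an index in `0..n-1` with the probabilities given by the current state's row, which is exactly the transition rule described above.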
transition_matrix = np.array([[0.8, 0.2], [0.3, 0.7]])
initial_state = 0
num_steps = 3
Expected output: [0, 0, 1, 1]
Starting from state 0, the system evolves through 3 time steps:
• Step 0 → 1: Current state is 0. Row 0 of the matrix [0.8, 0.2] governs the transition. There's an 80% chance of staying in state 0 and 20% chance of moving to state 1. Based on the random selection (seeded), the system stays in state 0.
• Step 1 → 2: Still in state 0. Using the same probabilities [0.8, 0.2], the random selection chooses state 1 this time (the 20% probability event occurred).
• Step 2 → 3: Now in state 1. Row 1 of the matrix [0.3, 0.7] applies. There's a 30% chance of going to state 0 and 70% chance of staying in state 1. The system stays in state 1.
The complete trajectory is [0, 0, 1, 1], representing the initial state followed by states visited after each of the 3 transitions.
transition_matrix = np.array([[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]])
initial_state = 1
num_steps = 4
Expected output: [1, 1, 2, 2, 2]
This is a 3-state system starting from state 1:
• Initial: State 1
• After step 1: From state 1, using probabilities [0.2, 0.5, 0.3], the system stays in state 1
• After step 2: From state 1, the system transitions to state 2
• After step 3: From state 2, using probabilities [0.3, 0.2, 0.5], the system stays in state 2
• After step 4: From state 2, the system remains in state 2
The trajectory [1, 1, 2, 2, 2] has length 5 (initial state + 4 steps).
transition_matrix = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
initial_state = 0
num_steps = 5
Expected output: [0, 1, 2, 0, 1, 2]
This is a deterministic cyclic system where each transition happens with probability 1:
• State 0 always transitions to State 1 (P[0][1] = 1.0)
• State 1 always transitions to State 2 (P[1][2] = 1.0)
• State 2 always transitions to State 0 (P[2][0] = 1.0)
Starting from state 0, the system follows a predictable cycle: 0 → 1 → 2 → 0 → 1 → 2
This demonstrates that stochastic processes can include deterministic behavior as a special case when probabilities are 0 or 1. The period of this cycle is 3, and the pattern repeats indefinitely.
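Because every probability in this matrix is 0 or 1, the sampler has no real choice to make, and the trajectory comes out the same no matter how the generator is seeded. The loop below is a minimal sketch of the sampling step applied to this cyclic matrix, not the required implementation:

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
state = 0
trajectory = [state]
for _ in range(5):
    # Each row has a single entry equal to 1, so this draw is
    # deterministic regardless of the random seed.
    state = int(np.random.choice(3, p=P[state]))
    trajectory.append(state)
print(trajectory)  # [0, 1, 2, 0, 1, 2]
```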
Constraints