In neural network design, activation functions introduce non-linearity into the model, enabling it to learn complex patterns. However, some architectures benefit from using bounded activation functions that constrain neuron outputs within a specific range, preventing gradient explosion and ensuring stable training dynamics.
The Bounded Linear Activation (also known as the Clipped Linear Unit) is a piecewise linear function that acts as a hard limiter on input values: it passes inputs that lie inside the bounds through unchanged and clamps inputs outside them to the nearest bound.
Mathematically, for an input $x$ with bounds $[\text{min\_val}, \text{max\_val}]$:
$$f(x) = \begin{cases} \text{min\_val} & \text{if } x < \text{min\_val} \\ x & \text{if } \text{min\_val} \leq x \leq \text{max\_val} \\ \text{max\_val} & \text{if } x > \text{max\_val} \end{cases}$$
This creates a linear region between the bounds with flat regions (zero gradient) outside. With default bounds of -1.0 and 1.0, the function effectively squashes all inputs into the symmetric interval [-1, 1].
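The piecewise definition above reduces to a single clamp. A minimal sketch (the function name and default bounds follow the description; this is one possible implementation, not a reference solution):

```python
def bounded_linear(x, min_val=-1.0, max_val=1.0):
    """Bounded linear activation: identity inside [min_val, max_val],
    clamped to the nearest bound outside it."""
    # min(x, max_val) caps the upper side; max(min_val, ...) caps the lower side
    return max(min_val, min(x, max_val))
```

For example, `bounded_linear(0.5)` returns `0.5` unchanged, while `bounded_linear(3.0)` is capped at `1.0`.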
Key Properties:
- The output is always bounded to [min_val, max_val].
- The function is an identity mapping within the linear region.
- The gradient is zero outside the bounds, so saturated inputs do not propagate gradients.
- It is piecewise linear and cheap to compute.
Your Task: Write a Python function that implements the bounded linear activation. Given an input value and optional minimum/maximum bounds, return the activated value clipped to the specified range.
Input: x = 0.5 → Output: 0.5
With default bounds [-1.0, 1.0], the input value 0.5 lies within the acceptable range. Since -1.0 ≤ 0.5 ≤ 1.0, the value passes through the linear region unchanged. The function acts as an identity mapping for this input, returning 0.5.
Input: x = -2.5 → Output: -1.0
The input value -2.5 is below the minimum bound of -1.0. Since -2.5 < -1.0, the function clamps this value to the lower threshold. The output is -1.0, preventing excessively negative activations from propagating through the network.
Input: x = 3.0 → Output: 1.0
The input value 3.0 exceeds the maximum bound of 1.0. Since 3.0 > 1.0, the function caps this value at the upper threshold. The output is 1.0, ensuring the activation stays within the bounded range and preventing potential overflow or instability.
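In practice, activations are applied element-wise over whole tensors rather than scalars. The three cases above can be checked in one batch with NumPy's `clip`, which implements the same piecewise rule (a sketch assuming NumPy is available; the problem itself only requires the scalar function):

```python
import numpy as np

def bounded_linear(x, min_val=-1.0, max_val=1.0):
    # np.clip applies the clamp element-wise: values below min_val become
    # min_val, values above max_val become max_val, the rest pass through
    return np.clip(x, min_val, max_val)

inputs = np.array([0.5, -2.5, 3.0])
outputs = bounded_linear(inputs)  # clamps each case into [-1.0, 1.0]
```

This matches the three worked examples: 0.5 passes through, -2.5 is raised to -1.0, and 3.0 is capped at 1.0.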
Constraints