In deep learning, activation functions are crucial non-linear transformations that enable neural networks to learn complex patterns. Without them, stacking multiple linear layers would simply collapse into a single linear transformation, severely limiting the network's expressive power.
The rectified linear unit (ReLU) has become one of the most widely adopted activation functions in modern neural network architectures. Its elegant simplicity and computational efficiency have made it the default choice for hidden layers in many state-of-the-art models.
Mathematical Definition:
The rectified linear function is defined as:
$$f(z) = \max(0, z)$$
This can also be expressed piecewise as:
$$f(z) = \begin{cases} z & \text{if } z > 0 \\ 0 & \text{if } z \leq 0 \end{cases}$$
Key Properties:
- Non-linear, yet cheap to compute: only a comparison with zero is needed, with no exponentials.
- Acts as the identity for positive inputs, so gradients do not shrink there, which helps mitigate vanishing gradients in deep networks.
- Outputs exact zeros for negative inputs, producing sparse activations.
- Not differentiable at z = 0; in practice, frameworks use a subgradient (typically 0 or 1) at that point.
Your Task:
Write a function that implements the rectified linear activation function. Given a single floating-point input value, your function should return the input unchanged if it is positive, and return 0 otherwise.
Example 1: z = 0 → output 0
When the input is exactly zero, the rectified linear function returns 0. The function only passes through positive values, and zero is treated as a non-positive input that gets mapped to zero output.

Example 2: z = 1 → output 1
When the input is positive (z = 1 > 0), the rectified linear function acts as an identity function and returns the input unchanged. The value 1 passes through without modification.

Example 3: z = -1 → output 0
For negative inputs, the rectified linear function outputs zero. Since -1 < 0, the function suppresses this activation, returning 0 instead of the negative value. This 'rectification' behavior is what gives the function its name.
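The examples above can be sketched in a few lines of Python. The function name `relu` is illustrative (the exercise does not prescribe a name or language); the body follows the definition f(z) = max(0, z) directly:

```python
def relu(z: float) -> float:
    """Rectified linear activation: returns max(0, z)."""
    # Positive inputs pass through unchanged; zero and negatives map to 0.
    return max(0.0, z)

# Checking against the worked examples:
print(relu(0.0))   # 0.0
print(relu(1.0))   # 1.0
print(relu(-1.0))  # 0.0
```

Using the built-in `max` keeps the implementation branch-free at the Python level; an equivalent formulation is `z if z > 0 else 0.0`.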
Constraints: