In machine learning and statistical modeling, polynomial basis expansion is a powerful feature transformation technique that maps input data into a higher-dimensional space by generating polynomial features. This transformation enables linear models to capture nonlinear relationships in the data, effectively fitting curves to data that would otherwise require complex nonlinear algorithms.
Given a set of scalar data points and a specified polynomial degree d, the polynomial basis expansion transforms each data point x into a feature vector containing its powers from 0 to d:
$$\phi(x) = [x^0, x^1, x^2, \ldots, x^d] = [1, x, x^2, \ldots, x^d]$$
This expanded representation allows models like linear regression to learn polynomial relationships. For example, with degree 2, a simple input x becomes the feature vector [1, x, x²], enabling the model to fit quadratic curves.
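The mapping above can be sketched directly from the definition; the function name `phi` is chosen here to mirror the notation and is not part of the task:

```python
def phi(x: float, degree: int) -> list[float]:
    """Map a scalar x to its polynomial feature vector [x^0, x^1, ..., x^degree]."""
    return [x ** p for p in range(degree + 1)]

# Degree-2 expansion of x = 3.0 gives [1.0, 3.0, 9.0]
print(phi(3.0, 2))
```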
Why This Matters:
Polynomial basis expansion lets a model stay linear in its parameters (and therefore cheap to train with standard linear-regression machinery) while still representing nonlinear, curved relationships in the data.
Your Task: Write a Python function that performs polynomial basis expansion on a list of data points. The function should:
• Accept a list of scalar data points and a non-negative integer degree d
• Return a feature matrix (a list of lists) in which each row contains the powers x⁰ through xᵈ of the corresponding data point
Example 1:
data = [1.0, 2.0]
degree = 2
Output: [[1.0, 1.0, 1.0], [1.0, 2.0, 4.0]]
For each data point, we generate polynomial features from x⁰ to x²:
• For x = 1.0: [1.0⁰, 1.0¹, 1.0²] = [1.0, 1.0, 1.0]
• For x = 2.0: [2.0⁰, 2.0¹, 2.0²] = [1.0, 2.0, 4.0]
The result is a 2×3 feature matrix where each row represents the polynomial expansion of a data point.
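If NumPy is available, this feature matrix is exactly a Vandermonde matrix, and `numpy.vander` can build it in one call; `increasing=True` orders the columns from x⁰ up to xᵈ:

```python
import numpy as np

data = [1.0, 2.0]
degree = 2

# Each row is [x^0, x^1, ..., x^degree]; increasing=True puts x^0 first.
features = np.vander(data, N=degree + 1, increasing=True)
print(features)
# [[1. 1. 1.]
#  [1. 2. 4.]]
```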
Example 2:
data = [3.0]
degree = 3
Output: [[1.0, 3.0, 9.0, 27.0]]
For a single data point x = 3.0 with degree 3, we compute powers from 0 to 3:
• [3.0⁰, 3.0¹, 3.0², 3.0³] = [1.0, 3.0, 9.0, 27.0]
This creates a 4-dimensional feature vector from a single scalar input, which could be used to fit a cubic polynomial.
Example 3:
data = [1.0, 2.0, 3.0]
degree = 0
Output: [[1.0], [1.0], [1.0]]
With degree 0, each data point is transformed to only include x⁰ = 1:
• For x = 1.0: [1.0⁰] = [1.0]
• For x = 2.0: [2.0⁰] = [1.0]
• For x = 3.0: [3.0⁰] = [1.0]
This represents the simplest case where only the bias term is included, equivalent to fitting a horizontal line (constant model).
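Putting the worked examples together, one plausible pure-Python solution (the name `polynomial_features` is an assumption, not mandated by the task) is:

```python
def polynomial_features(data: list[float], degree: int) -> list[list[float]]:
    """Expand each scalar in `data` into the feature vector [x^0, x^1, ..., x^degree]."""
    return [[x ** p for p in range(degree + 1)] for x in data]

# Reproduces the three examples above:
print(polynomial_features([1.0, 2.0], 2))       # [[1.0, 1.0, 1.0], [1.0, 2.0, 4.0]]
print(polynomial_features([3.0], 3))            # [[1.0, 3.0, 9.0, 27.0]]
print(polynomial_features([1.0, 2.0, 3.0], 0))  # [[1.0], [1.0], [1.0]]
```

Because `x ** 0` evaluates to `1.0` for a float `x`, the degree-0 case falls out of the comprehension without special handling.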
Constraints