Exercises

Please fill in the missing code pieces as indicated by the .... The imports are always provided at the top of the code chunks, which should give you a hint about which functions/classes to use.

Exercise 1: Model Selection

Today we are working with the California Housing dataset, which you are already familiar with, as we previously used it while exploring resampling methods. This dataset is based on the 1990 U.S. Census and includes features describing California districts.

  1. Familiarize yourself with the data

    • What kind of features are in the dataset? What is the target?

from sklearn.datasets import fetch_california_housing

data = fetch_california_housing(as_frame=True)

X = ...
y = ...
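
One possible way to fill in these two blanks (a sketch; with as_frame=True the features come back as a DataFrame and the target as a Series):

# Features as a DataFrame, target as a Series
X = data.data
y = data.target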
  2. Baseline model

    • Create a baseline linear regression model using all features and evaluate the model through 5-fold cross validation, using R² as the performance metric

    • Print the individual and average R²

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
import numpy as np

# Regression model
model = ...
scores = ...

# Print the results
print("R² scores from each fold:", scores)
print("Average R² score:", np.mean(scores))
  3. Apply a forward stepwise selection to find a simpler suitable model.

    • Split the data into 80% training data and 20% testing data (print the shapes to confirm the split was successful)

    • Perform a forward stepwise selection with a linear regression model, 5-fold CV, R² score, and parsimonious feature selection (refer to documentation for further information)

    • Print the best CV R² as well as the chosen features

from mlxtend.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = ...

print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)

# Forward Sequential Feature Selector
sfs_forward = ...


print(f">> Forward SFS:")
print(f"   Best CV R²      : {sfs_forward.k_score_:.3f}")
print(f"   Optimal # feats : {len(sfs_forward.k_feature_idx_)}")
print(f"   Feature names   : {sfs_forward.k_feature_names_}")
  4. Evaluate the model on the test set

selected_features = list(sfs_forward.k_feature_names_)

X_train_selected = ...
X_test_selected = ...

# Train and evaluate
...
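
For the final evaluation, one option (a sketch) is to refit a plain linear regression on the selected columns and score it on the held-out test set:

# Restrict the data to the selected features, refit, and report test R²
X_train_selected = X_train[selected_features]
X_test_selected = X_test[selected_features]

final_model = LinearRegression().fit(X_train_selected, y_train)
print("Test R²:", final_model.score(X_test_selected, y_test))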

Exercise 2: LASSO

Please implement a Lasso regression model similar to the Ridge model in the Regularization section.

import pandas as pd
import numpy as np
import statsmodels.api as sm 

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Data related processing
hitters = sm.datasets.get_rdataset("Hitters", "ISLR").data
hitters_subset = hitters[["Salary", "AtBat", "Runs","RBI", "CHits", "CAtBat", "CRuns", "CWalks", "Assists", "Hits", "HmRun", "Years", "Errors", "Walks"]].copy()

# TODO: Drop highly correlated features and rows with missing data
...

# TODO: Get the target (y) and features (X), then split into training and test set
...

# TODO: Scale predictors to mean=0 and std=1
...

# TODO: Implement Lasso 
...
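
A possible completion (a sketch): exactly which columns count as "highly correlated" should follow the Ridge example in the Regularization section; dropping CAtBat and CRuns here is an assumption, and the split sizes and random_state are arbitrary choices.

# Drop two strongly correlated career statistics (assumption) and rows with missing values
hitters_clean = hitters_subset.drop(columns=["CAtBat", "CRuns"]).dropna()

# Target and features, then an 80/20 train/test split
y = hitters_clean["Salary"]
X = hitters_clean.drop(columns="Salary")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Standardize predictors (fit the scaler on the training data only)
scaler = StandardScaler()
X_train_sc = scaler.fit_transform(X_train)
X_test_sc = scaler.transform(X_test)

# LassoCV chooses the regularization strength via cross-validation
lasso = LassoCV(cv=5, random_state=1).fit(X_train_sc, y_train)
print("Chosen alpha:", lasso.alpha_)
print("Test R²:", lasso.score(X_test_sc, y_test))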

Exercise 3: GAMs (1)

Objective: Understand how the number of basis functions (df) and the polynomial degree (degree) affect the flexibility of a spline and the resulting fit in a Generalized Additive Model.

  1. Use the diabetes dataset and focus on the relationship between bmi and target.

  2. We want to test different combinations of parameters. For the dfs, please use 4, 6, 12. For the degree, please use 2 and 3 (quadratic and cubic).

  3. Fit the GAMs for each parameter combination. The resulting models will be plotted automatically for visual comparison.

import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from statsmodels.gam.api import GLMGam, BSplines


# TODO: 1. Get bmi as x and the target as y
data = load_diabetes(as_frame=True)
x = ...
y = ...

# TODO: 2. Define possible parameters
...

# TODO: 3. Plot partial effect for each combination of df and degree
...
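
One possible completion (a sketch): loop over the df/degree grid, build a BSplines basis for bmi, fit a GLMGam that uses only the smooth term, and plot each partial effect. The figure size and the decision to pass no parametric exog are assumptions, not requirements of the exercise.

# bmi as a one-column DataFrame (one column per smooth term) and the target
x = data.data[["bmi"]]
y = data.target

# Parameter combinations to compare
dfs = [4, 6, 12]
degrees = [2, 3]

# Fit one GAM per (df, degree) combination and plot the estimated smooth
fig, axes = plt.subplots(len(degrees), len(dfs), figsize=(15, 8), sharey=True)
for i, deg in enumerate(degrees):
    for j, df_ in enumerate(dfs):
        bs = BSplines(x, df=[df_], degree=[deg])
        res = GLMGam(y, smoother=bs).fit()
        res.plot_partial(0, cpr=True, ax=axes[i, j])
        axes[i, j].set_title(f"df={df_}, degree={deg}")
plt.tight_layout()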

Exercise 4: GAMs (2)

We now use the wage dataset, which contains income information for a group of workers, along with demographic and employment-related features such as age, education, marital status, and job class.

  1. Explore the dataset

    • Which variables are numeric?

    • Which ones are categorical?

  2. Fit a GAM predicting wage from age, year, education, jobclass, and maritl

Note: For categorical features we use a one-hot encoding with pd.get_dummies()

import pandas as pd
from ISLP import load_data
from statsmodels.gam.api import GLMGam, BSplines

# Load data
Wage = load_data('Wage')

# TODO: Get continuous features
smooth_features = ...
X_spline = Wage[smooth_features]

# TODO: Get categorical features — one-hot encode
categoricals = ...
X_cat = pd.get_dummies(Wage[categoricals], drop_first=True)

# TODO: Define target variable
y = ...

# TODO: Create BSpline basis
...

# TODO: Fit GAM and print summary
...
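
A possible way to fill in the blanks (a sketch): the df/degree values for the spline basis and the use of sm.add_constant for the parametric part are assumptions; casting the dummies to float avoids dtype issues in statsmodels.

import statsmodels.api as sm

# Continuous features that get a spline basis
smooth_features = ["age", "year"]
X_spline = Wage[smooth_features]

# Categorical features, one-hot encoded and cast to float
categoricals = ["education", "jobclass", "maritl"]
X_cat = pd.get_dummies(Wage[categoricals], drop_first=True).astype(float)

# Target variable
y = Wage["wage"]

# Spline basis for age and year (df/degree are just one reasonable choice)
bs = BSplines(X_spline, df=[6, 5], degree=[3, 3])

# Parametric part: intercept + dummies; smooth part: the spline basis
gam = GLMGam(y, exog=sm.add_constant(X_cat), smoother=bs)
res = gam.fit()
print(res.summary())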

Exercise 5: KNN

You will implement a K-Nearest Neighbours (KNN) classifier to predict whether a patient's breast tumour is malignant or benign based on several features. The data is already loaded for you, but please have a look at the documentation to quickly refresh your memory about the dataset.

import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# Load the data
data = load_breast_cancer()
X, y = data.data, data.target

# TODO: Create a combined DataFrame for easier inspection and manipulation
df = ...
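
One quick way to build the combined frame (a sketch; the column names come from data.feature_names):

# Combine features and target in one DataFrame for inspection
df = pd.DataFrame(X, columns=data.feature_names)
df["target"] = y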

Please implement the following:

  1. Subset the dataframe to use mean area, mean radius, and mean smoothness as features (X), and target as the target (y)

  2. Scale the predictors to mean 0 and variance 1

  3. Split the data into a training and a testing set (70/30)

  4. Train a kNN classifier with \(k=5\)

  5. Evaluate the performance on the test set. Please use the accuracy_score() as well as the classification_report() function.

# TODO: 1. Select features and target
X = ...
y = ...

# TODO: 2. Scale the features
scaler = ...
X_scaled = ...

# TODO: 3. Split the data into training and testing sets
...
#  TODO: 4. Perform KNN classification
...

# TODO: 5. Get predictions
...

print("Accuracy:", accuracy_score(...))
print("\nClassification Report:\n", classification_report(...))

The classification model from the previous step has two main limitations:

  1. It is trained and evaluated on a single data split

  2. It uses a single \(k\) even though we do not know if it is optimal

Please do the following:

  1. Implement 5-fold cross validation

  2. Train models for \(k\) ranging from 1 to 200 and plot the mean accuracy over all folds

from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="whitegrid")

k_values = range(1, 201)
mean_accuracies = []

# TODO: 5-fold cross-validation for different k values
...

# TODO: Plot
fig, ax = plt.subplots()

sns.lineplot(...)
ax.set(xlabel=..., ylabel=..., title=...);
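
One possible way to fill in the loop and the plot (a sketch; it reuses X_scaled and y from the previous step):

# Mean 5-fold CV accuracy for each k
for k in k_values:
    knn = KNeighborsClassifier(n_neighbors=k)
    mean_accuracies.append(cross_val_score(knn, X_scaled, y, cv=5).mean())

fig, ax = plt.subplots()
sns.lineplot(x=list(k_values), y=mean_accuracies, ax=ax)
ax.set(xlabel="k (number of neighbours)", ylabel="Mean CV accuracy", title="5-fold CV accuracy vs. k");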

Discuss the results. How is the performance in general? Which \(k\) would you choose?

Exercise 6: LDA, QDA & Naïve Bayes

Once again, we will use the Iris dataset for classification analysis. Your task is to compare the performance of LDA, QDA, and Gaussian Naïve Bayes!

  1. Load the iris dataset from sklearn.datasets. We will use only the first two features (sepal length and width)

  2. TODO: Split the data into training and test sets (use stratification!)

  3. TODO: Fit LDA, QDA, and Naïve Bayes classifiers to the training data and print the classification report for all models on the test data

  4. Plot the decision boundaries for all three models

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from matplotlib.colors import ListedColormap

# 1. Load data
iris = load_iris()
X = iris.data[:, :2]
y = iris.target
target_names = iris.target_names

# 2. Split into train/test
X_train, X_test, y_train, y_test = train_test_split(...)
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 3. TODO: Fit a LDA model and print the classification report
lda = ...

print(classification_report(y_test, lda.predict(X_test)))
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# 3. TODO: Fit a QDA model and print the classification report
qda = ...

print(classification_report(y_test, qda.predict(X_test)))
from sklearn.naive_bayes import GaussianNB

# 3. TODO: Fit a Gaussian Naive Bayes model and print the classification report
gnb = ...

print(classification_report(y_test, gnb.predict(X_test)))
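
A possible completion of the split and the three fits (a sketch; the test size and random_state are arbitrary choices):

# Stratified split (class proportions preserved in both sets)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Fit all three classifiers on the training data
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
gnb = GaussianNB().fit(X_train, y_train)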

Once you have trained all three models, you can simply run the following code to plot the decision boundaries:

# 4. Plot the decision boundaries for all 3 classifiers

# Plotting function
def plot_decision_boundary(model, X, y, title, ax):
    h = .02
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)

    cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
    cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

    ax.contourf(xx, yy, Z, cmap=cmap_light, alpha=0.2)
    scatter = ax.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, s=30)
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_title(title)
    ax.set_xlabel('Sepal length')
    ax.set_ylabel('Sepal width')

# Create plots for all 3 classifiers
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
plot_decision_boundary(lda, X_train, y_train, "LDA Decision Boundary", axes[0])
plot_decision_boundary(qda, X_train, y_train, "QDA Decision Boundary", axes[1])
plot_decision_boundary(gnb, X_train, y_train, "Naïve Bayes Decision Boundary", axes[2])
plt.tight_layout()

Exercise 7: Trees

For this exercise we use the diabetes dataset from OpenML (loaded below with fetch_openml).

  1. Inspect the data

    • How many features are there and what are they?

    • What is the target?

  2. Split the data into a train and test set, and make sure the class proportions are preserved in both sets (stratify=y)

  3. Fit the DecisionTreeClassifier(max_depth=3) and report train vs. test accuracy.

  4. Tree inspection (discuss in group)

    • After fitting the model, the tree will be plotted automatically

    • What is the very first split (feature name and threshold)?

    • Which leaf nodes are pure, and which have mixed classes?

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt

# 1) Load and inspect data
diab = fetch_openml("diabetes", version=1, as_frame=True)
X = diab.data
y = diab.target

print(...)

# 2) Split data
X_train, X_test, y_train, y_test = train_test_split(...)

# 3) Fit tree
clf = DecisionTreeClassifier(...)
clf.fit(...)

print("\nTrain accuracy:", accuracy_score(y_train, clf.predict(X_train)))
print("Test accuracy: ", accuracy_score(y_test,  clf.predict(X_test)))

# 4) Plot tree
plt.figure(figsize=(14,7))
plot_tree(clf, feature_names=X.columns, class_names=["neg","pos"], filled=True, rounded=True);
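
One possible way to fill in the blanks above (a sketch; the printed summaries, the 70/30 split, and the random_state are choices, not requirements):

# 1) Quick inspection: shape, feature names, and class balance
print(X.shape)
print(X.columns.tolist())
print(y.value_counts())

# 2) Stratified train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# 3) Shallow decision tree
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)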

Let’s see if we can improve the classification performance with a random forest classifier and hyperparameter tuning!

  1. Set up the classifier + a parameter grid for grid search with 5-fold CV

    • n_estimators: 50, 100, 200

    • max_depth: None, 10, 20

    • min_samples_split: 2, 5, 10

    • max_features: “sqrt”, “log2”, 0.5

  2. Fit the model with the grid search

  3. Print the best hyperparameters

  4. Evaluate the best model on the test set

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, classification_report

# 1) Set up Random Forest + parameter grid
rf = RandomForestClassifier(random_state=0)

param_grid = {
    'n_estimators':      ...,
    'max_depth':         ...,
    'min_samples_split': ...,
    'max_features':      ...
}

# 2) Fit on training data
grid = GridSearchCV(...)
grid.fit(...)

# 3) Print best hyperparameters
print("Best parameters:", grid.best_params_)
print(f"CV accuracy: {grid.best_score_:.3f}")

# 4) Evaluate on the held‐out test set
best_rf = grid.best_estimator_
y_pred = ...

print(f"\nTest accuracy: {accuracy_score(y_test, y_pred):.3f}\n")
print("Classification Report:")
print(classification_report(y_test, y_pred, target_names=['neg','pos']))
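
A possible completion (a sketch; n_jobs=-1 is only a convenience for parallel fitting and is not required):

# 1) Parameter grid from the instructions
param_grid = {
    'n_estimators':      [50, 100, 200],
    'max_depth':         [None, 10, 20],
    'min_samples_split': [2, 5, 10],
    'max_features':      ['sqrt', 'log2', 0.5]
}

# 2) 5-fold grid search on the training data
grid = GridSearchCV(rf, param_grid, cv=5, scoring='accuracy', n_jobs=-1)
grid.fit(X_train, y_train)

# 4) Predictions of the best model on the test set
y_pred = grid.best_estimator_.predict(X_test)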

Exercise 8: SVC

For the SVC exercise we will use the fmri dataset from seaborn, which contains measurements of brain activity (signal) in two brain regions (frontal and parietal) under two event types (stim vs. cue).

import seaborn as sns
df = sns.load_dataset("fmri")
df

We will try to answer a very simple research question:

Can we distinguish between cue and stim events based on the fMRI signal in the parietal and frontal brain regions?

To do this, we need to turn the long‐format data into a classic “feature matrix” (one row = one sample, two columns = our two brain‐region signals) plus a corresponding label vector (cue/stim):

df_wide = df.pivot_table(
    index=["subject","timepoint","event"],
    columns="region",
    values="signal"
).reset_index()
df_wide.columns.name = None

X = df_wide[["frontal","parietal"]] 
y = df_wide["event"].map({"cue":0,"stim":1})

print("\nFeatures:")
print(X)
print("\nTarget:")
print(y)

With the features and target in the correct form, please perform the following tasks:

  1. Split the data into a train and test set

  2. Scale the predictors to mean 0 and std 1

  3. Fit a linear as well as a rbf SVC and discuss the classification reports

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# 1. TODO: Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(...)

# 2. TODO: Scale the features after splitting (important to avoid data leakage)
scaler = StandardScaler()
X_train_sc = scaler.fit_transform(...)
X_test_sc  = scaler.transform(...)

# 3. TODO: Fit the SVC models and compare the classification reports
clf_lin = SVC(...)
clf_lin.fit(...)
y_pred_lin = clf_lin.predict(...)
print("Linear SVC\n", classification_report(...))

clf_rbf = SVC(...)
clf_rbf.fit(...)
y_pred_rbf = clf_rbf.predict(...)
print("RBF SVC\n", classification_report(...))

After fitting both models, you can run the code chunk below to plot the decision boundary:

import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

def plot_svc_decision_function(model, ax=None):
    """Plot the decision boundary for a trained 2D SVC model."""
    # Set up grid
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()
    xx, yy = np.meshgrid(np.linspace(*xlim, 100), np.linspace(*ylim, 100))
    grid = np.c_[xx.ravel(), yy.ravel()]
    decision_values = model.decision_function(grid).reshape(xx.shape)
    ax.contour(xx, yy, decision_values, levels=[0], linestyles=['-'], colors='k')

# Plot
fig, ax = plt.subplots(1,2, figsize=(12, 6))

legend_elements = [
    Line2D([0], [0], marker='o', linestyle='None', markersize=8, label='Cue', markerfacecolor="#0173B2", markeredgecolor='None'),
    Line2D([0], [0], marker='o', linestyle='None', markersize=8, label='Stim', markerfacecolor="#DE8F05", markeredgecolor='None'),
    Line2D([0], [0], color='k', linestyle='-', label='Decision boundary')]

# Linear SVC
sns.scatterplot(x = X_train_sc[:, 0], y = X_train_sc[:, 1], hue = y_train.map({0:"cue",1:"stim"}), palette = ["#0173B2", "#DE8F05"], s = 60, ax = ax[0], legend=None)
ax[0].set(xlabel = "Frontal signal (scaled)", ylabel = "Parietal signal (scaled)", title  = "Linear SVC Decision Boundary")
plot_svc_decision_function(clf_lin, ax=ax[0])
ax[0].legend(handles=legend_elements, loc="upper left", handlelength=1)

# RBF SVC
sns.scatterplot(x = X_train_sc[:, 0], y = X_train_sc[:, 1], hue = y_train.map({0:"cue",1:"stim"}), palette = ["#0173B2", "#DE8F05"], s = 60, ax = ax[1], legend=None)
ax[1].set(xlabel = "Frontal signal (scaled)", ylabel = "Parietal signal (scaled)", title  = "RBF SVC Decision Boundary")
plot_svc_decision_function(clf_rbf, ax=ax[1])
ax[1].legend(handles=legend_elements, loc="upper left", handlelength=1);

Training an SVC on more complex datasets usually requires a parameter search to find the optimal hyperparameters. Please implement a grid search with the following options:

  • Kernel: rbf

  • C: np.logspace(-2,2,5)

  • gamma: np.logspace(-3,1,5)

  • cv: 5-fold

  • scoring: accuracy

Print the optimal parameters and the corresponding accuracies for training and testing.

from sklearn.model_selection import GridSearchCV

param_grid = {
    ...
}
grid = GridSearchCV(...)
grid.fit(...)

print("Best params:", grid.best_params_)
print("CV accuracy:", grid.best_score_)
print("Test accuracy:", grid.score(X_test_sc, y_test))

Exercise 9: Neural Networks

In this exercise, you will use tensorflow to create a single-layer neural network to classify handwritten digits from 0 to 9 from the MNIST dataset.

Hint: Tensorflow is one of the most widely used machine learning libraries. It was initially developed by Google, but is open source and available for everyone. Tensorflow requires Python <=3.12. If you have an environment with Python 3.13, you either need to create a new one or simply use Google Colab for this exercise.

from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt

# Load data and plot examples
(x_train, y_train), (x_test, y_test) = mnist.load_data()

fig, ax = plt.subplots(1,5)
for i in range(5):
    ax[i].imshow(x_train[i], cmap='gray')
    ax[i].set_axis_off()
plt.show()

We can then create the network with the following characteristics:

  • Input: A flattened version of the MNIST image (a vector of size 784)

  • Architecture: A single dense (fully connected) layer with 10 neurons (one for each class)

  • Activation function: softmax (outputs probabilities summing to 1)

  • Output: A probability distribution over digits 0–9; the highest is chosen

  • Learning rule: categorical_crossentropy loss and stochastic gradient descent (SGD) optimiser

  • Evaluation metric: accuracy, measuring the percentage of correctly classified images

Tasks:

  1. Explore the code and try to understand what it does (change things and see how they affect the result!)

  2. Improve the model to achieve a better prediction accuracy (>97%). Potential changes you can make (one possible variant is sketched after the code below):

    • Change the number of epochs or batch size (the number of training examples processed at once before the model weights are updated)

    • Change the learning rate or optimiser (use e.g. Adam, which uses an adaptive learning rate and is faster)

    • Change the model structure, e.g. by adding a hidden layer with 64 or 128 neurons and a ReLU activation function

  3. Compare your model with other students. Who managed to get the highest testing accuracy?

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.optimizers import SGD, Adam

# 1) Load and preprocess data (flatten & scale)
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(-1, 28*28).astype('float32') / 255.0
x_test  = x_test.reshape(-1, 28*28).astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test  = to_categorical(y_test, 10)

# 2) Create the model: One dense (fully connected) with a softmax activation function
model = Sequential([
    Input(shape=(784,)),
    Dense(10, activation='softmax')
])

# 3) Compile & train the model
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(learning_rate=0.01), 
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=1)

# 4) Evaluate the model
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print(f"Test accuracy: {acc:.4f}")

Exercise 10: Principal Component Analysis

For today’s practical session, we will work with the Diabetes dataset built into scikit-learn. This dataset contains medical information from 442 diabetes patients:

  • Features (X): 10 baseline variables (age, sex, BMI, average blood pressure, and six blood serum measures).

  • Target (y): a quantitative measure of disease progression one year after baseline.

You can read more here: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html

Tasks:

  1. Inspect & clean (already implemented)

    • Display summary statistics (df.describe()) for all 10 features.

    • Check for missing values. (Hint: this dataset has none, but verify.)

  2. Standardize

    • Use StandardScaler() to transform each feature to mean 0, variance 1.

  3. PCA & scree plot

    • Fit PCA() to the standardized feature matrix.

    • Plot the explained variance ratio for each principal component (a scree plot).

    • Decide how many components to retain (e.g. cumulative variance ≥ 80%).

  4. Interpret loadings

    • Examine pca.components_.

    • For the first two retained PCs, list the top 3 features by absolute loading.

    • Infer what physiological patterns these components might represent.

  5. Project the data for visualization

    • Compute the PCA projection: X_pca = pca.transform(X_std).

  6. Plot the results (already implemented)

    • Create a 2D scatter of PC1 vs. PC2, coloring points by whether the target is above or below the median progression value.

    • Do patients with more rapid progression cluster differently?

from sklearn.datasets import load_diabetes

# Load the data as a DataFrame
diabetes = load_diabetes(as_frame=True)
df = diabetes.frame
df.rename(columns={'target': 'Disease progression'}, inplace=True)

X = df.drop(columns='Disease progression')
y = df['Disease progression']

# 1. Inspect the data
df.head()
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

import seaborn as sns
sns.set_theme(style="whitegrid")

# 2. Standardize the data
scaler = StandardScaler()
X_std = ...

# 3. Perform the PCA
pca = ...

# 4. Get the explained variance ratio
explained_variance = ...

# 5. Project into PCA space
X_pca = ...

# 6. Plot the explained variance and 2D PCA projection
fig, ax = plt.subplots(1,2, figsize=(15, 5))

ax[0].plot(np.arange(1, len(explained_variance)+1), explained_variance.cumsum(), marker='o')
ax[0].set(xlabel='Number of Components', ylabel='Cumulative Explained Variance', title='Scree Plot')

sns.scatterplot(x=X_pca[:, 0], y=X_pca[:, 1], hue=y, palette='viridis', alpha=0.6, ax=ax[1])
ax[1].set(xlabel='Principal Component 1', ylabel='Principal Component 2');
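
For reference, one way to fill in the blanks above, plus a quick look at the loadings for task 4 (a sketch; the loadings table and the pandas import are additions for illustration, not part of the scaffold):

import pandas as pd

# 2. Standardize the features
X_std = scaler.fit_transform(X)

# 3. Fit PCA on all components
pca = PCA().fit(X_std)

# 4. Explained variance ratio per component
explained_variance = pca.explained_variance_ratio_

# 5. Project the standardized data into PCA space
X_pca = pca.transform(X_std)

# Task 4: top-3 features by absolute loading for the first two PCs
loadings = pd.DataFrame(pca.components_[:2].T, index=X.columns, columns=["PC1", "PC2"])
print(loadings["PC1"].abs().sort_values(ascending=False).head(3))
print(loadings["PC2"].abs().sort_values(ascending=False).head(3))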