Linear Algebra for Machine Learning

Master the Mathematical Backbone of ML Models

Linear algebra is the foundation of many ML algorithms. This notebook walks through core concepts, including vectors, matrices, and the operations that connect them, using NumPy with visual and practical examples.

Import Libraries

import numpy as np
import matplotlib.pyplot as plt        

Scalars, Vectors, Matrices

# Scalar
a = 5

# Vector
v = np.array([1, 2, 3])

# Matrix
M = np.array([[1, 2], [3, 4]])

print("Scalar:", a)
print("Vector:", v)
print("Matrix:\n", M)        

Output

Scalar: 5
Vector: [1 2 3]
Matrix:
 [[1 2]
 [3 4]]        
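
A quick way to tell these objects apart in NumPy is their .shape and .ndim attributes. A minimal sketch using the same vector and matrix as above:

import numpy as np

v = np.array([1, 2, 3])
M = np.array([[1, 2], [3, 4]])

# A vector is 1-dimensional, a matrix is 2-dimensional
print("Vector shape:", v.shape, "ndim:", v.ndim)  # (3,) 1
print("Matrix shape:", M.shape, "ndim:", M.ndim)  # (2, 2) 2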

Vector Operations

# Addition, Scalar Multiplication, Dot Product
v1 = np.array([1, 2])
v2 = np.array([3, 4])

print("Addition:", v1 + v2)
print("Scalar Multiply:", 2 * v1)
print("Dot Product:", np.dot(v1, v2))        

Output

Addition: [4 6]
Scalar Multiply: [2 4]
Dot Product: 11        
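
The dot product also underlies cosine similarity, a standard way to compare feature vectors in ML. A minimal sketch, reusing v1 and v2 from above:

import numpy as np

v1 = np.array([1, 2])
v2 = np.array([3, 4])

# Cosine similarity: dot product divided by the product of the norms
cos_sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print("Cosine Similarity:", cos_sim)  # ~0.98, the vectors point in similar directions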

Matrix Operations

A = np.array([[1, 2], [3, 4]])
B = np.array([[2, 0], [1, 2]])

print("Matrix Addition:\n", A + B)
print("Matrix Multiplication:\n", A @ B)
print("Transpose:\n", A.T)        

Output

Matrix Addition:
 [[3 2]
 [4 6]]
Matrix Multiplication:
 [[ 4  4]
 [10  8]]
Transpose:
 [[1 3]
 [2 4]]        
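
One NumPy subtlety worth noting: A * B is elementwise (Hadamard) multiplication, not matrix multiplication. A quick sketch with the same A and B:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[2, 0], [1, 2]])

# Elementwise (Hadamard) product: multiplies matching entries
print("Elementwise A * B:\n", A * B)     # [[2 0], [3 8]]

# True matrix product: rows of A dotted with columns of B
print("Matrix product A @ B:\n", A @ B)  # [[4 4], [10 8]]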

Identity Matrix & Inverse

I = np.eye(2)
print("Identity Matrix:\n", I)

A_inv = np.linalg.inv(A)
print("Inverse of A:\n", A_inv)

print("A * A_inv:\n", A @ A_inv)  # Should return Identity        

Output

Identity Matrix:
 [[1. 0.]
 [0. 1.]]
Inverse of A:
 [[-2.   1. ]
 [ 1.5 -0.5]]
A * A_inv:
 [[1.0000000e+00 0.0000000e+00]
 [8.8817842e-16 1.0000000e+00]]        
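
Note the tiny off-diagonal entry above: floating-point arithmetic only gets you close to the exact identity. In practice you also rarely form the inverse explicitly; to solve a linear system, np.linalg.solve is faster and more numerically stable. A minimal sketch with a made-up right-hand side b:

import numpy as np

A = np.array([[1, 2], [3, 4]])
b = np.array([5, 6])  # illustrative right-hand side

# Solve A x = b directly instead of computing A_inv @ b
x = np.linalg.solve(A, b)
print("Solution x:", x)  # [-4.   4.5]

# Verify: A @ x should reproduce b (up to floating-point error)
print("Check A @ x == b:", np.allclose(A @ x, b))  # True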

Determinant and Rank

det_A = np.linalg.det(A)
rank_A = np.linalg.matrix_rank(A)

print("Determinant of A:", det_A)
print("Rank of A:", rank_A)        

Output

Determinant of A: -2.0000000000000004
Rank of A: 2        
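
When the determinant is zero, the matrix is singular: its rank drops and it has no inverse. A quick illustration with a hypothetical rank-deficient matrix:

import numpy as np

# The second row is 2x the first, so the rows are linearly dependent
S = np.array([[1, 2], [2, 4]])

print("Determinant:", np.linalg.det(S))   # 0.0
print("Rank:", np.linalg.matrix_rank(S))  # 1, not 2

# np.linalg.inv(S) would raise LinAlgError: Singular matrix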

Eigenvalues and Eigenvectors

e_vals, e_vecs = np.linalg.eig(A)
print("Eigenvalues:\n", e_vals)
print("Eigenvectors:\n", e_vecs)        

Output

Eigenvalues:
 [-0.37228132  5.37228132]
Eigenvectors:
 [[-0.82456484 -0.41597356]
 [ 0.56576746 -0.90937671]]        
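
As a sanity check, each eigenvector v should satisfy the defining equation A @ v == lambda * v. A short sketch verifying both eigenpairs:

import numpy as np

A = np.array([[1, 2], [3, 4]])
e_vals, e_vecs = np.linalg.eig(A)

# The columns of e_vecs are the eigenvectors
for i in range(len(e_vals)):
    v = e_vecs[:, i]
    print(f"A @ v == lambda * v for eigenpair {i}:",
          np.allclose(A @ v, e_vals[i] * v))  # True for both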

Application: Linear Transformation Visualization

# Visualize how a matrix transforms a vector
v = np.array([2, 1])
T = np.array([[2, 0], [0, 1]])  # stretches x by 2, leaves y unchanged

transformed_v = T @ v

plt.quiver(0, 0, v[0], v[1], angles='xy', scale_units='xy', scale=1, color='blue', label='Original')
plt.quiver(0, 0, transformed_v[0], transformed_v[1], angles='xy', scale_units='xy', scale=1, color='red', label='Transformed')
plt.xlim(-1, 5)
plt.ylim(-1, 5)
plt.grid()
plt.legend()
plt.title("Linear Transformation")
plt.show()
[Figure: quiver plot showing the original vector (blue) and its transformed image (red). Image by the Author]
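
For reference, the numbers behind the plot: T stretches the x-axis by a factor of 2 and leaves the y-axis unchanged, so [2, 1] maps to [4, 1]. A quick check:

import numpy as np

T = np.array([[2, 0], [0, 1]])
v = np.array([2, 1])

# The x-component doubles; the y-component is unchanged
print("T @ v:", T @ v)  # [4 1]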

Summary

  • You learned the basics of linear algebra relevant to ML: vectors, matrices, dot products, eigenvalues, and more.
  • ML methods such as PCA, linear regression, and neural networks rely heavily on these tools; see the short PCA sketch below.
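
To make that concrete, here is a minimal sketch of PCA via the eigendecomposition of a covariance matrix. The data X is made up purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[2, 0.5], [0.5, 1]])  # correlated toy data

# 1. Center the data
X_centered = X - X.mean(axis=0)

# 2. Covariance matrix and its eigendecomposition
cov = np.cov(X_centered, rowvar=False)
e_vals, e_vecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices, eigenvalues sorted ascending

# 3. The first principal component is the eigenvector with the largest eigenvalue
pc1 = e_vecs[:, -1]
print("First principal component:", pc1)
print("Explained variance ratio:", e_vals[-1] / e_vals.sum())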

Next up: statistics.ipynb to cover the statistical tools needed in ML.
