Linear Algebra

From BloomWiki

How to read this page: This article maps the topic from beginner to expert across six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain. Learn more about how BloomWiki works.

Linear Algebra is the branch of mathematics concerning linear equations, linear functions, and their representations through matrices and vector spaces. If algebra is the study of single numbers, linear algebra is the study of grids and arrows. It is the "engine room" of modern technology: every time you filter a photo on Instagram, run a Google search, or use an AI like ChatGPT, millions of linear algebra calculations are happening in the background. It allows us to solve thousands of equations simultaneously, making it one of the most practical and widely used branches of mathematics in the 21st century.

Remembering

  • Vector — A quantity having both magnitude and direction; an array of numbers.
  • Matrix — A rectangular grid of numbers, arranged in rows and columns.
  • Scalar — A single number (as opposed to a vector or matrix).
  • Dot Product — An operation that takes two vectors and returns a single number (measuring how much they "point" in the same direction).
  • Matrix Multiplication — A way of combining two matrices to represent a complex transformation.
  • Identity Matrix ($I$) — A square matrix with 1s on the diagonal and 0s elsewhere; the "1" of matrices.
  • Inverse Matrix ($A^{-1}$) — A matrix that, when multiplied by $A$, results in the identity matrix (used to "undo" a transformation).
  • Determinant — A single number describing how much a transformation scales area or volume (if it is 0, the matrix has "collapsed" the space).
  • Eigenvector — A vector whose direction does not change during a transformation.
  • Eigenvalue — The factor by which an eigenvector is scaled during a transformation.
  • Basis — A set of vectors that can be used to "build" any other vector in a space.
  • Dimension — The number of vectors in a basis (e.g., 2D, 3D, or 1000D).
  • Linear Transformation — A function that moves space while keeping lines straight and the origin fixed.
  • System of Linear Equations — A set of two or more linear equations with the same variables.
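A minimal NumPy sketch (illustrative values only, not from the article) that puts several of these definitions side by side:

<syntaxhighlight lang="python">
import numpy as np

v = np.array([3, 2])              # a vector: magnitude and direction
w = np.array([1, 4])
A = np.array([[2, 0],             # a matrix: a rectangular grid of numbers
              [0, 3]])

print(np.dot(v, w))               # dot product: a single number (a scalar), here 11
print(A @ v)                      # a matrix transforming a vector

I = np.eye(2)                     # identity matrix: the "1" of matrices
A_inv = np.linalg.inv(A)          # inverse matrix: undoes A
print(np.allclose(A_inv @ A, I))  # True

print(np.linalg.det(A))           # determinant: how much A scales area, here 6
vals, vecs = np.linalg.eig(A)     # eigenvalues and eigenvectors
print(vals)                       # [2. 3.] -- the scaling factors along the eigenvectors
</syntaxhighlight>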

Understanding

Linear Algebra is about Transforming Space.

1. The Vector View: A vector $[3, 2]$ is a point in space. Linear algebra is the study of how to move, rotate, and stretch these points.

2. The Matrix as a Function: A matrix is a "Machine." You put a vector in, and a new vector comes out.

  • A specific matrix might Rotate everything 90 degrees.
  • Another might Scale everything to be 2x bigger.
  • Matrix Multiplication is just "Stacking" these machines. Rotating then Scaling is one combined matrix.
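A small sketch (illustrative values only) of the "stacking machines" idea: pushing a vector through a scale matrix and then a rotation matrix gives exactly the same result as pre-multiplying the two machines into one combined matrix.

<syntaxhighlight lang="python">
import numpy as np

theta = np.radians(90)
R = np.array([[np.cos(theta), -np.sin(theta)],   # machine 1: rotate 90 degrees
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[2, 0],                            # machine 2: scale everything 2x
              [0, 2]])

v = np.array([1, 0])
one_at_a_time = R @ (S @ v)   # run the vector through each machine in turn
combined      = (R @ S) @ v   # or fuse both machines into one matrix first
print(np.allclose(one_at_a_time, combined))   # True: one combined matrix does both jobs
</syntaxhighlight>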

3. Eigenvectors (The Soul of the Matrix): When you transform space, most vectors get knocked around and change direction. But for every transformation, there are special vectors—Eigenvectors—that stay pointing the same way. They are the "Axes of Rotation."

  • Google PageRank: Google treats the links of the web as a massive matrix. The ranking of the "most important websites" is the principal eigenvector of that matrix; each site's importance is one entry of it (see the sketch below).
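The sketch below uses a made-up four-page web and omits the damping factor the real algorithm uses, but it shows the underlying idea: repeatedly multiplying by the link matrix converges to its principal eigenvector, whose entries rank the pages.

<syntaxhighlight lang="python">
import numpy as np

# A made-up web of 4 pages. Column j lists where page j links to,
# normalized so every column sums to 1.
L = np.array([
    [0,   0,   1, 1/2],
    [1/3, 0,   0,   0],
    [1/3, 1/2, 0, 1/2],
    [1/3, 1/2, 0,   0],
])

rank = np.full(4, 1/4)      # start with every page equally important
for _ in range(100):
    rank = L @ rank         # power iteration: keep applying the link matrix
    rank /= rank.sum()      # keep the importances summing to 1

print(rank)  # the principal eigenvector: the steady-state importance of each page
</syntaxhighlight>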

High-Dimensional Space: We can only visualize three dimensions, but the math doesn't care. Linear algebra works the same for a 1,000,000-dimensional vector. This is how AI works: a word is represented as a long vector of numbers (an "embedding", often more than a thousand entries), and AI measures how close two words are in meaning using the dot product of their vectors.
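A hedged sketch of that last idea. The three-number "embeddings" below are invented for illustration; real embeddings have hundreds or thousands of entries, but the similarity calculation is the same dot product.

<syntaxhighlight lang="python">
import numpy as np

def cosine_similarity(a, b):
    """Dot product scaled by the vectors' lengths: near 1 means 'pointing the same way'."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Invented 3-number "embeddings" standing in for real 1000+-dimensional ones.
king  = np.array([0.9, 0.7, 0.1])
queen = np.array([0.8, 0.8, 0.1])
apple = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # high: related meanings
print(cosine_similarity(king, apple))  # low: unrelated meanings
</syntaxhighlight>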

Applying

Modeling 'Image Transformations' (Scaling and Rotating):

<syntaxhighlight lang="python">
import numpy as np

def transform_point(x, y, scale_factor, rotation_deg):
    """Shows how a matrix 'moves' a point."""
    theta = np.radians(rotation_deg)
    # The rotation matrix
    R = np.array([
        [np.cos(theta), -np.sin(theta)],
        [np.sin(theta),  np.cos(theta)]
    ])

    # The scale matrix
    S = np.array([
        [scale_factor, 0],
        [0, scale_factor]
    ])

    point = np.array([x, y])
    # Apply the scale first, then the rotation (read R @ S @ point right to left)
    new_point = R @ S @ point
    return new_point

# Move the point [1, 0] by scaling 2x and rotating 90 degrees.
# This is the same math GPUs use to render 3D games and movies.
print(f"New Position: {transform_point(1, 0, 2, 90)}")
</syntaxhighlight>

Linear Algebra in Tech

  • Data Compression (SVD) → Breaking a giant matrix (like an image) into smaller ones to save space without losing much detail (see the sketch after this list).
  • Principal Component Analysis (PCA) → Finding the most important "directions" in a massive dataset to simplify it.
  • Deep Learning → Every layer of a neural network is just a matrix multiplication followed by a small non-linear tweak.
  • Cryptography → Using matrix operations that are easy to do but "hard to undo" without a key (lattice-based crypto).
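A minimal sketch of the SVD bullet above, using NumPy and a synthetic "image" (a smooth pattern plus noise; the numbers are invented). Keeping only the strongest few singular values stores roughly a tenth of the data while reproducing the matrix almost exactly.

<syntaxhighlight lang="python">
import numpy as np

# A stand-in "image": smooth structure plus a little noise. Real photos are
# similarly structured, which is why SVD compression works so well on them.
x = np.linspace(0, 3, 100)
rng = np.random.default_rng(0)
image = np.outer(np.sin(x), np.cos(x)) + 0.01 * rng.standard_normal((100, 100))

U, s, Vt = np.linalg.svd(image, full_matrices=False)

k = 5                                            # keep only the 5 strongest "directions"
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

stored = U[:, :k].size + k + Vt[:k, :].size      # ~1,005 numbers instead of 10,000
error = np.linalg.norm(image - approx) / np.linalg.norm(image)
print(stored, round(error, 4))                   # big savings, tiny error
</syntaxhighlight>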

Analyzing

Scalars vs. Vectors vs. Matrices

Type     Definition                   Practical Use
Scalar   A single number (5)          Price, Age, Temperature
Vector   A list of numbers [5, 2]     Position (x, y), Color (R, G, B), Word Meaning
Matrix   A grid of numbers            An Image, A Network, A Transformation

The Concept of "Rank": The rank of a matrix tells you the "True Dimension" of the information inside it. If a 3x3 matrix has Rank 2, it means it has "collapsed" 3D space into a 2D flat plane. Analyzing the Null Space and Column Space of a matrix is how we find which equations in a system are actually redundant.
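A short sketch of the rank idea. The matrix below is invented for illustration: its third row is the sum of the first two, so one of the three "equations" is redundant.

<syntaxhighlight lang="python">
import numpy as np

# Row 3 is row 1 + row 2, so the third equation adds no new information.
A = np.array([
    [1, 2, 3],
    [4, 5, 6],
    [5, 7, 9],
])

print(np.linalg.matrix_rank(A))   # 2: this 3x3 matrix collapses 3D space onto a plane

# The null space shows up in the SVD as the direction with a (near-)zero singular value.
U, s, Vt = np.linalg.svd(A)
null_direction = Vt[-1]                       # proportional to [1, -2, 1]
print(np.allclose(A @ null_direction, 0))     # True: this direction is squashed to zero
</syntaxhighlight>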

Evaluating

Evaluating a linear model:

  1. Condition Number: Is the matrix "well-behaved," or will a tiny error in the input lead to a massive error in the output? (See the sketch after this list.)
  2. Sparsity: Is the matrix mostly zeros? (Sparse matrices are much faster for computers to store and solve.)
  3. Orthogonality: Are the basis vectors "Perpendicular"? (Orthogonal bases are the "Gold Standard" for stability).
  4. Computational Cost: Is the matrix too big for a CPU? (This is why we use GPUs for AI).
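A minimal sketch of point 1 (the matrices are invented for illustration). NumPy's np.linalg.cond reports the condition number, and a nearly singular matrix shows how a tiny change in the input swings the answer wildly.

<syntaxhighlight lang="python">
import numpy as np

ill   = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])     # nearly singular: badly behaved
ortho = np.array([[0.0, -1.0],
                  [1.0,  0.0]])       # a 90-degree rotation: orthogonal, perfectly stable

print(np.linalg.cond(ill))    # ~40,000: input errors can be amplified enormously
print(np.linalg.cond(ortho))  # 1.0: the "gold standard"

# Nudge the right-hand side of ill @ x = b by about 0.01%...
b = np.array([2.0, 2.0001])
print(np.linalg.solve(ill, b))                             # [ 1.  1.]
print(np.linalg.solve(ill, b + np.array([0.0, 0.0002])))   # [-1.  3.] -- a huge swing
</syntaxhighlight>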

Creating

Future Frontiers:

  1. Quantum Linear Algebra: Using qubits to perform certain matrix computations that would take classical computers an impractically long time.
  2. Geometric Deep Learning: Applying linear algebra to data that lives on "Graphs" or "Manifolds" rather than flat grids.
  3. Tensor Networks: Going beyond matrices (2D) to Tensors (3D, 4D, 5D) to model complex physical systems like superconductors.
  4. Fully Homomorphic Encryption: Performing linear algebra on "Encrypted" data so a server can process your data without ever seeing it.