Matrix Maths (Fundamentals of Linear Algebra)
The great thing about matrices is that they contain a bunch of elements that are all the same kind of thing, all needing the same operation applied to them. This makes them ideal for parallel processing. We use GPUs (or related accelerators like Google's TPUs, Tensor Processing Units) to load a big batch of data all at once, then CHUNK, we process all of that data at the same time across thousands of parallel CUDA cores.
This is great for graphics, where you have millions of 3D points that all need transforming very quickly. It’s also great for machine learning, where you have huge vectors of weights that all need transforming using the same algorithm.
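To make that concrete, here is a minimal sketch of "the same operation applied to all the data at once", using NumPy on the CPU as a stand-in for a GPU; the point count and the rotation matrix are arbitrary illustrative choices:

```python
import numpy as np

# A million 3D points, one per row: the kind of batch a GPU chews through.
points = np.random.rand(1_000_000, 3)

# A single transform (a 90-degree rotation about the z-axis)...
rotation_z = np.array([
    [0.0, -1.0, 0.0],
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 1.0],
])

# ...applied to every point in one vectorised call, rather than
# looping over the points one at a time in Python.
transformed = points @ rotation_z.T

print(transformed.shape)  # (1000000, 3)
```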
Dot Products and Cross Products
For most operations, scalars, vectors, matrices and tensors all work in the same way.
The very notable exception to this rule is the dot product. Say I have two scalars, a and b, and I want to multiply them together. I can write this in three ways:
a × b is the same as ab, sometimes written a·b:

a × b = ab = a·b
a × b is the cross product; a·b or ab is the dot product. With scalars these give the same result, so we use them interchangeably. Not so with vectors, matrices and tensors: there, the dot product and the cross product are completely different operations.
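A quick NumPy illustration of how differently the two behave on vectors (the vectors a and b here are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dot product: a single scalar, measuring how much the vectors align.
print(np.dot(a, b))    # 32.0  (1*4 + 2*5 + 3*6)

# Cross product: a new vector, perpendicular to both inputs.
print(np.cross(a, b))  # [-3.  6. -3.]
```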
To calculate the dot product of the rows of matrix \(A\) with the columns of matrix \(B\), take two \(2 \times 2\) matrices:

\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad
B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}
\]

Calculating \(C_{11}\):

\[ C_{11} = a_{11} b_{11} + a_{12} b_{21} \]

Calculating \(C_{12}\):

\[ C_{12} = a_{11} b_{12} + a_{12} b_{22} \]

Calculating \(C_{21}\):

\[ C_{21} = a_{21} b_{11} + a_{22} b_{21} \]

Calculating \(C_{22}\):

\[ C_{22} = a_{21} b_{12} + a_{22} b_{22} \]

So, the resulting matrix \(C\) from the dot product (matrix multiplication) of matrices \(A\) and \(B\) would be:

\[
C = AB = \begin{pmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{pmatrix}
\]
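As a sanity check in code, here is a minimal NumPy sketch of the same computation; the specific numbers in A and B are arbitrary placeholders chosen for illustration:

```python
import numpy as np

# Two small example matrices (arbitrary illustrative values).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Matrix multiplication: every entry C[i, j] is the dot product of
# row i of A with column j of B.
C = A @ B
print(C)
# [[19. 22.]
#  [43. 50.]]

# Check a single entry by hand: C[0, 0] = (row 0 of A) . (column 0 of B)
print(np.dot(A[0, :], B[:, 0]))  # 19.0
```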
x \in \mathbb{R}