Introduction to Linear Algebra

What is Linear Algebra?
Linear algebra is the branch of mathematics concerned with structures that are closed under addition and scalar multiplication; it includes the theory of systems of linear equations, matrices, determinants, vector spaces, and linear transformations. It underpins a wide range of Data Science algorithms and applications, and its fundamentals are an essential prerequisite for fully understanding machine learning.

Why do you need to learn Linear Algebra?
Linear algebra is a foundation of machine learning, so it pays to build a solid grasp of it before you start studying machine learning itself. If you are a practitioner of machine learning, this post will show you where linear algebra is applied and how you can benefit from those insights. In machine learning, data is most often represented as vectors, matrices, or tensors, which is why machine learning relies so heavily on linear algebra.
  • A vector is a one-dimensional array of numbers. For instance, a point in space can be described by a vector of three coordinates (x, y, z). A vector is usually understood to have both a magnitude and a direction.
  • A matrix is a two-dimensional array of numbers with a fixed number of rows and columns; it holds one number at the intersection of each row and each column. A matrix is usually written between square brackets [].
  • A tensor is a generalization of vectors and matrices: a one-dimensional tensor is a vector, and a two-dimensional tensor is a matrix. A three-dimensional tensor can represent, for example, an RGB image (height × width × 3 color channels), and the idea extends to four dimensions and beyond. A short sketch after this list illustrates all three objects.
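To make these three objects concrete, here is a minimal sketch using NumPy; the use of NumPy and the specific array values are assumptions for illustration only.

```python
import numpy as np

# A vector: a 1D array, e.g. a point in 3D space (x, y, z)
v = np.array([1.0, 2.0, 3.0])
print(v.shape)   # (3,)

# A matrix: a 2D array with a fixed number of rows and columns
M = np.array([[1, 2, 3],
              [4, 5, 6]])
print(M.shape)   # (2, 3)

# A tensor: a generalization to more dimensions,
# e.g. a small RGB image with shape (height, width, channels)
T = np.zeros((4, 4, 3))
print(T.ndim)    # 3
```

The shapes printed above are exactly the structures the bullet points describe: one axis for a vector, two for a matrix, and three (or more) for a tensor.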
Applications of Linear Algebra in Data Science
  • Coordinate Transformations
  • Linear Regression (a worked sketch follows this list)
  • Dimensionality Reduction
  • Natural Language Processing
  • Computer Vision
  • Network Graphs
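As an illustration of the Linear Regression item above, here is a minimal sketch that frames ordinary least squares as a linear-algebra problem and solves it with NumPy's np.linalg.lstsq; the synthetic data and parameter values are assumptions chosen purely for illustration.

```python
import numpy as np

# Synthetic data: y = 2*x + 1 plus a little noise (illustrative values only)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.shape)

# Design matrix: one column for x, one column of ones for the intercept
X = np.column_stack([x, np.ones_like(x)])

# Least-squares solution of X @ w ≈ y
w, residuals, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
slope, intercept = w
print(slope, intercept)   # close to 2.0 and 1.0
```

The same solution can be written in matrix form via the normal equations, w = (X^T X)^(-1) X^T y, which is exactly the kind of matrix computation that makes linear regression a linear-algebra problem.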