Numeric matrix manipulation - The cheat sheet for MATLAB, Python NumPy, R, and Julia
At its core, this article is a simple cheat sheet for basic operations on numeric matrices, which can be very useful if you are working and experimenting with some of the most popular languages for scientific computing, statistics, and data analysis.
Introduction
Matrices (or multidimensional arrays) not only represent the fundamental elements of many algebraic equations used in popular fields such as pattern classification, machine learning, data mining, and math and engineering in general. In the context of scientific computing, they also come in very handy for managing and storing data in a more organized, tabular form.
Such multidimensional data structures are also very powerful performance-wise thanks to the concept of automatic vectorization: instead of processing operations on scalars individually and sequentially in loop structures, the whole computation can be parallelized to make optimal use of modern computer architectures.
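As a quick illustration of what vectorization buys us, here is a minimal NumPy sketch (the array size and the particular arithmetic are arbitrary choices for this example) contrasting an explicit Python loop with the equivalent vectorized expression:

```python
import numpy as np

x = np.random.rand(1000000)  # one million random scalars in [0, 1)

# Loop version: processes one scalar at a time
y_loop = np.empty_like(x)
for i in range(x.shape[0]):
    y_loop[i] = x[i] * 2.0 + 1.0

# Vectorized version: one expression over the whole array,
# executed by NumPy's pre-compiled C routines
y_vec = x * 2.0 + 1.0

assert np.allclose(y_loop, y_vec)
```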
**Note:** This article originated from an older article containing a cheat sheet that covered only MATLAB matrices and NumPy arrays. Since then, I have added a couple more rows and doubled the width of the cheat sheet by adding two more languages, R and Julia. Instead of making further modifications, I wanted to keep that old article as is - for future reference and for people who may only be interested in the slimmer version: [Moving from MATLAB matrices to NumPy arrays - A Matrix Cheatsheet](http://sebastianraschka.com/Articles/2014_matlab_vs_numpy.html).
Language overview
Before we jump to the actual cheat sheet, I wanted to give you at least a brief overview of the different languages that we are dealing with.
All four languages, MATLAB/Octave, Python, R, and Julia, are dynamically typed, have a command line interface for the interpreter, and come with a great number of additional and useful libraries to support scientific and technical computing. Conveniently, these languages also offer great solutions for easy plotting and visualization.
Combined with interactive notebook interfaces or dynamic report generation engines (MuPAD for MATLAB, the IPython Notebook for Python, knitr for R, and IJulia for Julia, which is based on the IPython Notebook), data analysis and documentation have never been easier.
MATLAB/Octave
![matlab logo](../Images/matcheat_matlab_logo.png)
MATLAB (short for MATrix LABoratory) is the name of an application and language that was developed by MathWorks back in 1984. One of its strengths is the variety of different and highly optimized "toolboxes" (including very powerful functions for image and other signal processing tasks), which makes it suitable for tackling basically every possible science and engineering task.
Like the other languages covered in this article, it has cross-platform support and uses dynamic typing, which allows for a convenient interface, but can also be quite "memory hungry" for computations on large data sets.
Even today, MATLAB is probably (still) the most popular language for numeric computation used for engineering tasks in academia as well as in industry.
GNU Octave
![octave logo](../Images/matcheat_octave_logo.png)
It is also worth mentioning that MATLAB is the only language in this cheat sheet that is not free and open source. But since it is so immensely popular, I want to mention it nonetheless. As an alternative, there is the free GNU Octave re-implementation, which follows the same syntactic rules so that code is compatible with MATLAB (except for very specialized libraries).
* This image is freely usable, public-domain media and represents the first eigenfunction of the L-shaped membrane, resembling (but not identical to) MATLAB's logo, which is trademarked by MathWorks Inc.
Python NumPy
![python logo](../Images/matcheat_numpy_logo.png)
Initially, the NumPy project started out under the name "Numeric" in 1995 (renamed to NumPy in 2006) as a Python library for numeric computations based on multi-dimensional data structures, such as arrays and matrices. Since it makes use of pre-compiled C code for operations on its "ndarray" objects, it is considerably faster than equivalent approaches in plain (C)Python.
Python NumPy is my personal favorite since I am a big fan of the Python programming language. Although similar tools exist for other languages, I found myself to be most productive doing my research and data analyses in IPython notebooks.
It allows me to easily combine Python code (sometimes optimized by compiling it via the Cython C extension or the just-in-time (JIT) Numba compiler when speed is a concern) with different libraries from the SciPy stack, including matplotlib for inline data visualization (you can find some of my example benchmarks in this GitHub repository).
R
![R logo](../Images/matcheat_R_logo.png)
The R programming language was developed in 1993 and is a modern GNU implementation of an older statistical programming language called S, which was developed at Bell Laboratories in 1976. Since its release, it has had a fast-growing user base and is particularly popular among statisticians.
R was also the first language that kindled my fascination for statistics and computing. I used it quite extensively for a couple of years before I discovered Python as my new favorite language for data analysis.
Although R has great in-built functions for performing all sorts of statistics, as well as a plethora of freely available libraries developed by the large R community, I often hear people complain about its rather unintuitive syntax.
Julia
![julia logo](../Images/matcheat_julia_logo.png)
With its first release in 2012, Julia is by far the youngest of the programming languages mentioned in this article. While Julia can also be used as an interpreted language with dynamic types from the command line, it aims for high performance in scientific computing that is superior to the other dynamic programming languages for technical computing, thanks to its LLVM-based just-in-time (JIT) compiler.
Personally, I haven't used Julia that extensively yet, but there are some exciting benchmarks that look very promising:
![Julia benchmark](../Images/matcheat_julia_benchmark.png)
*C compiled by gcc 4.8.1, taking best timing from all optimization levels (-O0 through -O3). C, Fortran, and Julia use OpenBLAS v0.2.8. The Python implementations of rand_mat_stat and rand_mat_mul use NumPy (v1.6.1) functions; the rest are pure Python implementations. Bezanson, J., Karpinski, S., Shah, V.B. and Edelman, A. (2012), "Julia: A fast dynamic language for technical computing".*
(Source: http://julialang.org/benchmarks/, with permission from the copyright holder)
Cheat sheet
Cheat sheet overview
If you are interested in downloading this cheat sheet table for your reference, you can find it here on GitHub.
CREATING MATRICES

| Task | MATLAB/Octave | Python NumPy | R | Julia |
|---|---|---|---|---|
| Creating matrices | `A = [1 2 3; 4 5 6; 7 8 9]` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` | `A = matrix(c(1,2,3,4,5,6,7,8,9), nrow=3, byrow=T)` | `A = [1 2 3; 4 5 6; 7 8 9]` |
| Creating a 1D column vector | `a = [1; 2; 3]` | `a = np.array([1,2,3]).reshape(3,1)` | `a = matrix(c(1,2,3), nrow=3, byrow=T)` | `a = [1; 2; 3]` |
| Creating a 1D row vector | `b = [1 2 3]` | `b = np.array([1,2,3])` <br> `# note that NumPy doesn't have explicit row vectors:` <br> `# b.shape returns (3,)` | `b = matrix(c(1,2,3), ncol=3)` | `b = [1 2 3]` |
| Creating a random m×n matrix (here: 3×2) | `rand(3,2)` | `np.random.rand(3,2)` | `matrix(runif(3*2), ncol=2)` | `rand(3,2)` |
| Creating a zero m×n matrix | `zeros(3,2)` | `np.zeros((3,2))` | `mat.or.vec(3, 2)` | `zeros(3,2)` |
| Creating an m×n matrix of ones | `ones(3,2)` | `np.ones((3,2))` | `mat.or.vec(3, 2) + 1` | `ones(3,2)` |
| Creating an identity matrix | `eye(3)` | `np.eye(3)` | `diag(3)` | `eye(3)` |
| Creating a diagonal matrix | `a = [1 2 3]` <br> `diag(a)` | `a = np.array([1,2,3])` <br> `np.diag(a)` | `diag(1:3)` | `a = [1, 2, 3]` <br> `diagm(a)` |
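As a quick, runnable sanity check of the NumPy column above (the variable names like `col` and `row` are just illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # 3x3 matrix
col = np.array([1, 2, 3]).reshape(3, 1)          # column vector, shape (3, 1)
row = np.array([1, 2, 3])                        # 1D array, shape (3,)

R = np.random.rand(3, 2)   # random values in [0, 1)
Z = np.zeros((3, 2))       # all zeros
O = np.ones((3, 2))        # all ones
I = np.eye(3)              # 3x3 identity matrix
D = np.diag([1, 2, 3])     # diagonal matrix from a vector

print(A.shape, col.shape, row.shape)  # (3, 3) (3, 1) (3,)
```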
ACCESSING MATRIX ELEMENTS

| Task | MATLAB/Octave | Python NumPy | R | Julia |
|---|---|---|---|---|
| Getting the dimension of a matrix | `A = [1 2 3; 4 5 6]` <br> `size(A)` | `A = np.array([[1,2,3], [4,5,6]])` <br> `A.shape` | `A = matrix(1:6, nrow=2, byrow=T)` <br> `dim(A)` | `A = [1 2 3; 4 5 6]` <br> `size(A)` |
| Selecting rows (here: the first row) | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A(1,:)` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `A[0,:]` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `A[1,]` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A[1,:]` |
| Selecting columns (here: the first column) | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A(:,1)` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `A[:,0]` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `A[,1]` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A[:,1]` |
| Extracting rows and columns by criteria (here: rows whose third column equals 9) | `A = [1 2 3; 4 5 9; 7 8 9]` <br> `A(A(:,3) == 9, :)` | `A = np.array([[1,2,3], [4,5,9], [7,8,9]])` <br> `A[A[:,2] == 9]` | `A = matrix(c(1,2,3,4,5,9,7,8,9), nrow=3, byrow=T)` <br> `A[A[,3] == 9,]` | `A = [1 2 3; 4 5 9; 7 8 9]` <br> `A[A[:,3] .== 9, :]` |
| Accessing elements (here: the first element) | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A(1,1)` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `A[0,0]` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `A[1,1]` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A[1,1]` |
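Since the 0-based indexing of NumPy versus the 1-based indexing of MATLAB, R, and Julia is a common stumbling block, here is a small runnable NumPy snippet covering the selection tasks above:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 9], [7, 8, 9]])

print(A.shape)   # (3, 3) -- the dimensions
print(A[0, :])   # first row: [1 2 3]
print(A[:, 0])   # first column: [1 4 7]
print(A[0, 0])   # first element: 1

# Boolean masking: rows whose third column equals 9
mask = A[:, 2] == 9   # array([False, True, True])
print(A[mask])        # [[4 5 9], [7 8 9]]
```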
MANIPULATING SHAPE AND DIMENSIONS

| Task | MATLAB/Octave | Python NumPy | R | Julia |
|---|---|---|---|---|
| Converting a row vector into a column vector | `b = [1 2 3]` <br> `b = b'` | `b = np.array([1, 2, 3])` <br> `b = b.reshape(-1, 1)` | `b = matrix(c(1,2,3), ncol=3)` <br> `b = t(b)` | `b = vec([1 2 3])` |
| Reshaping matrices (here: 3×3 into 1×9) | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `B = reshape(A, 1, numel(A))` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `total_elements = np.prod(A.shape)` <br> `B = A.reshape(1, total_elements)` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `B = matrix(A, ncol=9)` | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `B = reshape(A, 1, length(A))` |
| Concatenating matrices (here: vertically, with a copy of itself) | `A = [1 2 3; 4 5 6]` <br> `C = [A; A]` | `A = np.array([[1, 2, 3], [4, 5, 6]])` <br> `C = np.concatenate((A, A), axis=0)` | `A = matrix(1:6, nrow=2, byrow=T)` <br> `C = rbind(A, A)` | `A = [1 2 3; 4 5 6];` <br> `C = [A; A]` |
| Stacking vectors (here: vertically) | `a = [1 2 3]` <br> `c = [a; a]` | `a = np.array([1,2,3])` <br> `c = np.vstack((a, a))` | `a = matrix(1:3, ncol=3)` <br> `c = rbind(a, a)` | `a = [1 2 3];` <br> `c = [a; a]` |
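A minimal, runnable NumPy version of the shape manipulations above (variable names are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Flattening a 3x3 matrix into a 1x9 row matrix
total_elements = np.prod(A.shape)  # 9
B = A.reshape(1, total_elements)
print(B)  # [[1 2 3 4 5 6 7 8 9]]

# Converting a row vector into a column vector
b = np.array([1, 2, 3]).reshape(-1, 1)
print(b.shape)  # (3, 1)

# Concatenating and stacking
C = np.concatenate((A, A), axis=0)                         # 6x3
c = np.vstack((np.array([1, 2, 3]), np.array([4, 5, 6])))  # 2x3
print(C.shape, c.shape)
```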
BASIC MATRIX OPERATIONS

| Task | MATLAB/Octave | Python NumPy | R | Julia |
|---|---|---|---|---|
| Matrix-scalar operations | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A + 2` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `A + 2` <br> `# note that NumPy was optimized for` <br> `# in-place assignments, e.g., A += 2` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `A + 2` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A + 2` |
| Matrix-matrix multiplication | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A * A` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `np.dot(A, A)` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `A %*% A` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A * A` |
| Matrix-vector multiplication | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `b = [1; 2; 3]` <br> `A * b` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `b = np.array([1, 2, 3])` <br> `np.dot(A, b)` | `A = matrix(1:9, ncol=3)` <br> `b = matrix(c(1,2,3), nrow=3)` <br> `A %*% b` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `b = [1; 2; 3];` <br> `A * b` |
| Element-wise matrix-matrix operations (here: multiplication) | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A .* A` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `A * A` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `A * A` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A .* A` |
| Matrix elements to power n (here: n=2) | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A.^2` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `np.power(A, 2)` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `A^2` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A.^2` |
| Matrix to power n (here: n=2) | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A^2` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `np.linalg.matrix_power(A, 2)` | `A = matrix(1:9, ncol=3)` <br> `# requires the 'expm' package` <br> `library(expm)` <br> `A %^% 2` | `A = [1 2 3; 4 5 6; 7 8 9];` <br> `A^2` |
| Matrix transpose | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A'` | `A = np.array([[1,2,3], [4,5,6], [7,8,9]])` <br> `A.T` | `A = matrix(1:9, nrow=3, byrow=T)` <br> `t(A)` | `A = [1 2 3; 4 5 6; 7 8 9]` <br> `A'` |
| Determinant of a matrix | `A = [6 1 1; 4 -2 5; 2 8 7]` <br> `det(A)` | `A = np.array([[6,1,1], [4,-2,5], [2,8,7]])` <br> `np.linalg.det(A)` | `A = matrix(c(6,1,1,4,-2,5,2,8,7), nrow=3, byrow=T)` <br> `det(A)` | `A = [6 1 1; 4 -2 5; 2 8 7]` <br> `det(A)` |
| Inverse of a matrix | `A = [4 7; 2 6]` <br> `inv(A)` | `A = np.array([[4, 7], [2, 6]])` <br> `np.linalg.inv(A)` | `A = matrix(c(4,7,2,6), nrow=2, byrow=T)` <br> `solve(A)` | `A = [4 7; 2 6]` <br> `inv(A)` |
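The NumPy column of this section, collected into one runnable sketch:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([1, 2, 3])

print(A + 2)                         # matrix-scalar addition
print(A.dot(A))                      # matrix-matrix multiplication
print(A.dot(b))                      # matrix-vector multiplication
print(A * A)                         # element-wise multiplication
print(np.power(A, 2))                # elements to the power of 2
print(np.linalg.matrix_power(A, 2))  # matrix power, equals A.dot(A)
print(A.T)                           # transpose

B = np.array([[4, 7], [2, 6]])
print(np.linalg.det(B))              # determinant: 10.0
print(np.linalg.inv(B))              # inverse
```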
ADVANCED MATRIX OPERATIONS

| Task | MATLAB/Octave | Python NumPy | R | Julia |
|---|---|---|---|---|
| Calculating the covariance (here: of a single vector, which yields its variance; combine several variables to obtain a full covariance matrix) | `x1 = [4.0 4.2 3.9 4.3 4.1]'` <br> `cov(x1)` | `x1 = np.array([4, 4.2, 3.9, 4.3, 4.1])` <br> `np.cov(x1)` | `x1 = matrix(c(4, 4.2, 3.9, 4.3, 4.1), ncol=5)` <br> `cov(t(x1))` | `x1 = [4.0 4.2 3.9 4.3 4.1]';` <br> `cov(x1)` |
| Calculating eigenvectors and eigenvalues | `A = [3 1; 1 3]` <br> `[eig_vec, eig_val] = eig(A)` | `A = np.array([[3, 1], [1, 3]])` <br> `eig_val, eig_vec = np.linalg.eig(A)` | `A = matrix(c(3,1,1,3), ncol=2)` <br> `eigen(A)` | `A = [3 1; 1 3]` <br> `(eig_val, eig_vec) = eig(A)` |
| Generating a Gaussian dataset (here: 100 samples from a 2D multivariate normal distribution) | `% requires the Statistics Toolbox` <br> `mvnrnd([0 0], eye(2), 100)` | `mean = np.array([0,0])` <br> `cov = np.eye(2)` <br> `np.random.multivariate_normal(mean, cov, 100)` | `# requires the 'MASS' package` <br> `library(MASS)` <br> `mvrnorm(n=100, mu=c(0,0), Sigma=diag(2))` | `# requires the Distributions package from https://github.com/JuliaStats/Distributions.jl` <br> `using Distributions` <br> `rand(MvNormal([0.0, 0.0], eye(2)), 100)` |
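And a short NumPy sketch of the advanced operations; note that the sample count of 100 and the identity covariance matrix are arbitrary example parameters:

```python
import numpy as np

# Covariance: for a single 1D variable this is just its variance
x1 = np.array([4.0, 4.2, 3.9, 4.3, 4.1])
print(np.cov(x1))

# Eigenvalues and eigenvectors of a symmetric 2x2 matrix
A = np.array([[3.0, 1.0], [1.0, 3.0]])
eig_val, eig_vec = np.linalg.eig(A)
print(eig_val)  # eigenvalues 4 and 2

# Sampling 100 points from a 2D Gaussian
mean = np.array([0.0, 0.0])
cov = np.eye(2)  # example covariance: identity
samples = np.random.multivariate_normal(mean, cov, 100)
print(samples.shape)  # (100, 2)
```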
(Thanks to Keith C. Campbell for providing me with the syntax for the Julia language.)
Alternative data structures: NumPy matrices vs. NumPy arrays
Python's NumPy library also has a dedicated "matrix" type with a syntax that is a little bit closer to MATLAB matrices: for example, the `*` operator performs a matrix-matrix multiplication on NumPy matrices, whereas the same operator performs element-wise multiplication on NumPy arrays. Vice versa, matrix-matrix multiplication of NumPy arrays is done via the `.dot()` method, whereas element-wise multiplication of NumPy matrices requires a function such as `np.multiply()`.
Most people recommend using the NumPy array type rather than NumPy matrices, since arrays are what most NumPy functions return.