Parallel computing: the key that made our light field technique run in real time. What is parallel computing, and how is it connected with computer vision?
SABATO CERUSO & RICARDO OLIVA
What is parallel computing?
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones in such a way that they can be solved at the same time, with the results of the sub-problems then composed into the final solution.
That is the key idea behind parallel computing, in contrast to sequential computing, where the problem is solved in a single process without dividing it.
Regarding parallel computing, multiple levels of parallelism exist: bit-level, instruction-level, data, and task parallelism. For computer vision applications, the most important type is arguably data parallelism, where the data is divided into smaller pieces that are processed simultaneously by different processors.
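The idea of data parallelism can be sketched in a few lines. This is a minimal illustration, not code from the article: Python and its standard `concurrent.futures` module are assumed, and the elementary operation (squaring each element) is a placeholder for any per-element computation.

```python
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    # The same elementary operation is applied to one piece of the data.
    return [x * x for x in chunk]

def parallel_square(data, n_workers=4):
    # Divide the data into one chunk per worker.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Process the chunks simultaneously...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(square_chunk, chunks)
    # ...and compose the partial results into the final solution.
    return [x for chunk in results for x in chunk]

print(parallel_square(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The split/process/compose structure is the same whatever the elementary operation is; only the work done per chunk changes.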
Parallelism in computer vision
Parallelism is of great utility in computer vision applications, as the main data structure in this domain is the image.
An image in computer vision is nothing more than a matrix, where each pixel is a cell of the matrix and its value is the intensity of the color. So, for grayscale images, a matrix of 200 rows and 200 columns represents an image of resolution 200×200, where each pixel (typically) takes values between 0 and 255, with 0 representing black and 255 white. Color images follow the same idea; however, they are a composition of three grayscale images, each representing the intensity of one of the three main color components (red, green, and blue).
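To make the matrix view concrete, here is a tiny illustrative example in Python (the values and 2×2 size are made up for illustration, not taken from the article):

```python
# A 2x2 grayscale "image" as a matrix of intensities: 0 = black, 255 = white.
gray = [
    [0, 255],
    [128, 64],
]

# A color image is a composition of three such matrices,
# one per color component (red, green, blue).
red   = [[255, 0], [0, 0]]
green = [[0, 255], [0, 0]]
blue  = [[0, 0], [255, 0]]
color = [red, green, blue]

print(gray[0][1])  # 255: the top-right pixel is white
```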
In computer vision there are many tasks, such as image classification, edge enhancement, or image blurring, that can be divided into elemental operations applied to the image (matrix). Depending on the operation we need to carry out, the image can be efficiently divided to compute each final pixel value in parallel.
To illustrate the capabilities of parallel computing, we will present an example based on the "average pooling" operation. This is a common operation in convolutional neural networks whose objective is to reduce the image resolution by averaging each "block" of pixels.
Here we show an example of sequential average pooling with a block size of 2. It computes each block of four pixels sequentially, generating at each step one pixel of the final solution:
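A minimal sketch of such a sequential implementation could look as follows. Python is assumed (the article does not fix a language), and for simplicity the image dimensions are assumed to be divisible by the block size:

```python
def average_pooling(image, block=2):
    # Sequential average pooling: each block x block patch of the input
    # is replaced by the average of its pixels, producing one output
    # pixel per step. Assumes rows and cols are divisible by `block`.
    rows, cols = len(image), len(image[0])
    out = []
    for i in range(0, rows, block):
        row = []
        for j in range(0, cols, block):
            # Average the block of pixels whose top-left corner is (i, j).
            total = sum(image[i + di][j + dj]
                        for di in range(block)
                        for dj in range(block))
            row.append(total / (block * block))
        out.append(row)
    return out

image = [
    [1, 3, 2, 4],
    [5, 7, 6, 8],
    [4, 2, 1, 3],
    [8, 6, 5, 7],
]
print(average_pooling(image))  # [[4.0, 5.0], [5.0, 4.0]]
```

Note that each output pixel depends only on its own block of the input, which is exactly what makes this operation a natural candidate for data parallelism: the blocks can be averaged independently.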