
Apr 17, 2019

Reaching Into The Parallel Universe With Intel’s Xeon Processor

Posted in categories: cosmology, robotics/AI

Training bigger neural networks is challenging in the face of accelerator memory limits. The datasets used by machine learning models are very large nowadays; a standard image classification dataset such as the hashtagged Instagram corpus contains millions of images. As image resolution rises, the memory required to train on those images rises with it. Yet NVIDIA's largest GPUs today, such as the Tesla V100, offer only 32 GB of memory.
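To make the scale concrete, here is a back-of-the-envelope estimate in Python; the batch size and resolution are illustrative assumptions, not figures from the post:

```python
# Rough memory estimate for a single training batch.
# Batch size and resolution are illustrative assumptions, not from the post.
batch_size = 256
channels, height, width = 3, 1024, 1024  # high-resolution RGB images
bytes_per_float32 = 4

input_bytes = batch_size * channels * height * width * bytes_per_float32
print(f"Input batch alone: {input_bytes / 2**30:.1f} GiB")  # -> 3.0 GiB
```

The input batch alone consumes about 3 GiB, and the intermediate activations of a deep network are typically many times larger than the inputs, so a 32 GB accelerator fills up quickly at this resolution.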

Training therefore involves a tradeoff between the memory allocated to a model's parameters and the memory consumed by the activations its layers produce during the forward pass. It is easy to see why practitioners want to push past the accelerator memory limit.
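One common work-around for the activation side of this tradeoff (not necessarily the approach the post alludes to) is gradient checkpointing, which discards intermediate activations during the forward pass and recomputes them during the backward pass. Below is a minimal sketch using PyTorch's torch.utils.checkpoint; the convolutional block and tensor sizes are hypothetical:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical convolutional block; layer sizes are illustrative.
block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(),
)

x = torch.randn(8, 64, 224, 224, requires_grad=True)

# Standard forward pass: every intermediate activation is stored for backward.
y = block(x)

# Checkpointed forward pass: activations inside `block` are discarded and
# recomputed during the backward pass, trading extra compute for less memory.
y = checkpoint(block, x)
y.sum().backward()
```

Checkpointing roughly trades one extra forward pass through the block for the memory its activations would otherwise occupy, which is often what makes a larger model fit on a fixed-size accelerator.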

