On GPUs (Graphics Processing Units)

January 11, 2017

We are excited to share an article by Eric Mizell on RTInsights.com, a web magazine dedicated to advances in real-time analytics, IoT, and Big Data. Writing about how recent advances in input/output devices have pushed the performance bottleneck to processing, he explains how the Graphics Processing Unit (GPU), with its ability to perform deep analyses in parallel and in real time, is expected to help usher in a new era of cognitive computing.

According to Eric Mizell, steady advances in CPU, storage, memory, and networking technologies laid the foundation for economical cognitive computing. The arrival of solid-state storage and plentiful Random Access Memory (RAM) pushed the price and performance of data analytics further still. These advances shifted the performance bottleneck from input/output devices to processing, putting the emphasis on raw compute. Meeting that demand with clusters of multi-core CPU servers is expensive, however, leaving cognitive computing and other mature analytical applications the preserve of a few very large organizations.

The answer to this need for affordable processing power came in the form of Graphics Processing Units (GPUs), whose capability for parallel processing lets them work through data up to 100 times faster than CPU-only configurations. GPUs were initially designed for graphics and installed on a separate card with their own memory (video RAM), a configuration popular with gamers looking for real-time rendering. As the processing power and programmability of GPUs increased over time, they came to be used in additional applications.
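Because the GPU sits on its own card with dedicated video RAM, data has to be copied from system memory into device memory before the GPU can work on it. The CUDA sketch below, which is illustrative only and not taken from the article (the array size and variable names are arbitrary), shows that transfer step:

```cuda
// Illustrative sketch: staging data in the GPU's dedicated memory (VRAM)
// before any computation can run on the device.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1 << 20;                   // 1M floats, roughly 4 MB
    std::vector<float> host_data(n, 1.0f);      // data resident in system RAM

    float *device_data = nullptr;
    cudaMalloc(&device_data, n * sizeof(float));             // allocate in GPU VRAM
    cudaMemcpy(device_data, host_data.data(),
               n * sizeof(float), cudaMemcpyHostToDevice);   // copy host -> device

    printf("Copied %zu floats into device memory\n", n);
    cudaFree(device_data);
    return 0;
}
```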

The real benefit of cognitive computing comes when it runs in real time, and GPU acceleration makes that economical. Spanning analytical processes such as artificial intelligence, business intelligence, machine learning, and natural language processing, cognitive computing is especially well suited to GPU acceleration. Its workloads consist of the same instructions repeated over large volumes of data, exactly the pattern that the thousands of small, efficient cores of a GPU can process in parallel, as the sketch below illustrates.
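As a minimal illustration (again not drawn from the article), the CUDA kernel below applies one identical operation, a ReLU-style threshold common in machine-learning inference, to every element of an array, with each GPU thread handling a single element:

```cuda
// Minimal sketch: the same instruction applied to every element in parallel,
// one element per GPU thread. Sizes and names are illustrative.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void relu(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        x[i] = x[i] > 0.0f ? x[i] : 0.0f;           // identical work per element
}

int main() {
    const int n = 1 << 20;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));       // unified memory for brevity
    for (int i = 0; i < n; ++i) x[i] = (i % 2) ? 1.0f : -1.0f;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover n
    relu<<<blocks, threads>>>(x, n);
    cudaDeviceSynchronize();

    printf("x[0]=%f x[1]=%f\n", x[0], x[1]);        // expect 0.0 and 1.0
    cudaFree(x);
    return 0;
}
```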

GPU acceleration is already being deployed by Amazon and Nimbix, and Google is preparing to equip its cloud platform with GPUs for Google Compute Engine and Google Cloud Machine Learning Services.

For more, please visit https://www.rtinsights.com/gpus-the-key-to-cognitive-computing/
