
2 posts tagged with "GPU"


Something You Need to Know About GPUs

· 8 min read
note

When using the provided server, everything, including the driver and the CUDA toolkit, is already installed, so you might not need to worry about these details initially. However, I strongly encourage you to understand these concepts, because you might one day need to maintain your own server (though hopefully you won't have to).

Introduction

Back in the day, I always wondered why we could run PyTorch code on our local machine without a GPU, yet when it came to compiling or training a library locally, we suddenly needed the CUDA toolkit. What's going on under the hood?

In this article, we’ll break down the mystery behind CUDA, cuDNN, and all the other buzzwords. By the end, you’ll have a clearer (and hopefully less intimidating) understanding of how they all fit together.
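To make the puzzle concrete, here is a minimal sketch (not taken from the post itself) of why everyday PyTorch code runs without a local CUDA toolkit: the prebuilt wheels bundle the CUDA libraries they need, so only the NVIDIA driver is required for GPU execution, and the same code simply falls back to the CPU when no GPU is detected.

```python
import torch

# PyTorch wheels ship with the CUDA libraries they were built against,
# so running existing GPU ops only needs the NVIDIA driver. The full
# CUDA toolkit matters only when you compile extensions yourself.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(3, 3, device=device)  # transparently runs on CPU when no GPU is present
print(f"Running on {device}, sum = {x.sum().item():.4f}")
```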

Introduction to Multi-GPU Training

· 4 min read
note

This tutorial assumes you already know the basics of PyTorch and how to train a model.

Why Do We Even Need Multiple GPUs?

These days, training big neural networks is like trying to stuff an elephant into a suitcase — it just doesn’t fit!

As datasets and models keep getting bigger and bigger, a single GPU often can’t meet the memory requirements, or training is just painfully slow. That’s where multiple GPUs swoop in to save the day.
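As a taste of what the post covers, here is a minimal data-parallel sketch using PyTorch’s DistributedDataParallel. The toy model, hyperparameters, and script name are placeholders for illustration, not the tutorial’s actual code; it assumes a launch via `torchrun --nproc_per_node=NUM_GPUS train.py`.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun starts one process per GPU and sets LOCAL_RANK for each.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model, purely for illustration.
    model = torch.nn.Linear(10, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients are averaged across GPUs
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(10):  # dummy training loop with random data
        x = torch.randn(32, 10, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()  # the cross-GPU all-reduce happens here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Each process owns one GPU and sees a different slice of the data, and the gradient synchronization after every backward pass keeps all model replicas identical.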