June 22, 2023

How to Build a Deep Learning PC

Machine learning is perhaps today’s hottest technology — and unlike in years past, you no longer need an MIT supercomputer to train AI models. In fact, aspiring AI innovators can train many of today’s most interesting ML models on a consumer PC.

That’s not to say you can do it on any old computer. You’ll need a powerful rig with updated hardware to get the most from your ML experiments. We recommend using a PC builder and assembling your deep learning rig yourself to maximize the bang for your buck.

No, really — it’s not as hard as you might imagine, and you can save some substantial cash! Check out our guide below for an intro to the basics of deep learning PCs.

Assessing Your Needs

Before you start looking for parts, here are some critical factors to consider:

  • Workload: How large and complex are the models and datasets you want to work with? Machines that process bigger datasets with more complexity will usually require more powerful hardware to keep up.
  • Budget: While a deep learning machine will never be what you’d call a “budget” PC (at least for now), you should have a basic idea of what you’re trying to spend. Typical budgets for deep learning PCs range from $1,500 to $3,000 (but can easily go much higher).
  • Scalability: Do you want the ability to easily scale your models up to larger datasets in the future? You may want to build some headroom into your deep learning PC with components slightly above your current needs.


Choosing a GPU

The GPU is the main workhorse of every deep learning rig, performing the massive volumes of parallel calculations that machine learning demands. High-end GPUs like the NVIDIA RTX 4000 series or AMD Radeon RX 7000 series are standard choices, and you’ll see many ML rigs with more than one of these cards.

Plan to spend the greatest amount of your budget and selection time here. Some (but not all) of the most important factors in your GPU choice include:

  • GPU Architecture: The latest GPU architectures, like NVIDIA Ada Lovelace and AMD RDNA 3, include AI-specific features. Look for current-gen cards with these architectures to maximize your computing power.
  • VRAM Size: You’re looking for a GPU with as much VRAM as your budget allows. VRAM is the GPU’s onboard memory, and it’s crucial for speeding up ML processing, especially on complex models and large datasets.
  • CUDA Cores (NVIDIA)/Stream Processors (AMD): These tiny, highly specialized cores are a big factor in machine learning speeds. Broadly speaking, the more of either a card has, the better.
  • Matrix Processing: Matrix multiplication is an essential element of most ML models, so look for features designed to handle mixed-precision matrix work. These include NVIDIA’s Tensor Cores and the matrix acceleration AMD exposes through its open-source ROCm software stack.
  • Software Support: Certain models and software tools are built for compatibility with cards from certain manufacturers, so check to verify that any tools you want to use are compatible with your GPU.
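To make the VRAM advice above concrete, here’s a minimal back-of-the-envelope sketch for estimating whether a model will fit in a card’s memory during training. The function name and the 4x-parameters rule of thumb (weights plus gradients, optimizer states, and activations) are illustrative assumptions, not a precise formula:

```python
# Hypothetical VRAM estimator -- the 4x overhead_factor is a rough rule of
# thumb for Adam-style training, not an exact figure.

def estimate_training_vram_gb(num_params: float, bytes_per_param: int = 4,
                              overhead_factor: float = 4.0) -> float:
    """Rough training-time VRAM estimate: weight size multiplied by a
    factor covering gradients, optimizer states, and activations."""
    weight_bytes = num_params * bytes_per_param
    return weight_bytes * overhead_factor / 1024**3

# A 1.3-billion-parameter model in fp32 with Adam-style overhead:
print(round(estimate_training_vram_gb(1.3e9), 1))  # roughly 19.4 GB
```

By this estimate, even a modest 1.3B-parameter model overflows a 16GB card during full-precision training, which is why VRAM capacity tends to matter more than raw clock speed for ML work.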


Choosing a CPU

Although the CPU plays second fiddle to the GPU in the computation-intensive tasks of deep learning, it’s still a critical component that directs the whole process of preparing and training a model. Here are some basics to look for in a deep learning CPU:

  • Core and Thread Count: Look for a CPU with as many cores and processing threads as your budget can accommodate. Deep learning requires highly efficient parallel processing, which is where extra cores and simultaneous threads shine.
  • AI Acceleration: Like GPUs, current CPU architectures often feature built-in capabilities for accelerating AI workloads.
  • PCIe Lanes: If you’ll be using more than one GPU, make sure your CPU offers enough PCI Express lanes to support the GPUs you want to connect to it.
  • Memory Support: Check whether a CPU supports the latest DDR5 memory, as well as the maximum amount of RAM it can support.
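The PCIe lane point above is easy to underestimate, so here’s a quick illustrative budget. The helper name is our own invention, and the lane counts are typical values rather than specs for any particular CPU:

```python
# Illustrative PCIe lane budgeting -- lane counts are typical values, and
# the helper is a sketch, not a vendor tool.

def pcie_lanes_needed(num_gpus: int, lanes_per_gpu: int = 16,
                      nvme_drives: int = 1, lanes_per_nvme: int = 4) -> int:
    """Total CPU PCIe lanes needed so each GPU and NVMe drive gets a
    full-width link."""
    return num_gpus * lanes_per_gpu + nvme_drives * lanes_per_nvme

print(pcie_lanes_needed(2))  # 36
```

Two full-width GPUs plus one NVMe drive already want 36 lanes, more than most mainstream desktop CPUs provide, which is why multi-GPU builders often step up to workstation platforms or accept GPUs running at x8.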


RAM

The final major component for your deep learning build is RAM — and you’re going to need a lot of it.

  • Capacity: The larger your datasets, the more capacity you’ll need. 32GB is a sensible minimum for most deep learning PCs, but 64GB is common, and 128GB isn’t unheard of in research-grade ML builds.
  • DDR4 vs. DDR5: DDR4 RAM is far from obsolete, but most high-powered PCs today use DDR5. It can provide a performance boost, and it’s worth getting your whole system (including the CPU and motherboard) into the DDR5 ecosystem for easier upgrades later.
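A quick sanity check can tell you which capacity tier above you actually need. The sketch below assumes a dense float32 dataset; real-world data (images, tokenized text) varies widely, so treat it as a ballpark only:

```python
# Rough in-memory dataset footprint -- assumes a dense float32 array,
# which is an illustrative simplification.

def dataset_ram_gb(num_samples: int, features_per_sample: int,
                   bytes_per_value: int = 4) -> float:
    """Approximate RAM needed to hold a dense numeric dataset in memory."""
    return num_samples * features_per_sample * bytes_per_value / 1024**3

# 10 million samples x 1,000 float32 features each:
print(round(dataset_ram_gb(10_000_000, 1_000), 1))  # about 37.3 GB
```

A dataset like that fits comfortably in 64GB alongside your OS and tooling but would be a squeeze on a 32GB machine, which is exactly the kind of headroom question worth answering before you buy.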

Other Components

These parts all have essential roles to play, even if they’re not as central to the core tasks of machine learning.

  • Power Supply: Deep learning PCs use many power-hungry components, so it’s crucial to find a high-efficiency power supply that can provide sufficient wattage. Leave some headroom (usually at least 100W) for higher loads and future upgrades, and make sure it has enough PCIe power connectors if you’re using more than one GPU.
  • Case: You’re probably using a larger GPU, and maybe even more than one, so a full ATX tower case is usually the way to go. Airflow is another high priority since your GPU will likely generate quite a bit of heat, so look for something with features like a mesh front design.
  • Primary Storage: Most high-capacity SSDs will work just fine, although the extra speed of NVMe SSDs makes them preferable to SATA drives. You’ll want something high-capacity (2 TB is usually enough) since training datasets can take up several hundred GB of space each.
  • Motherboard: Make sure it’s compatible with your CPU, has enough lanes for your GPU(s), and is compatible with your RAM. To avoid compatibility hassles, many PC builders choose a computer parts bundle that includes a pre-selected CPU and motherboard.
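The power-supply sizing advice above can be sketched in a few lines. The wattage figures here are placeholder estimates for a hypothetical parts list, not measured values for any specific build:

```python
# Simple PSU sizing sketch -- component wattages are illustrative
# placeholders, not measurements.

def recommended_psu_watts(component_watts: dict, headroom_watts: int = 100) -> int:
    """Sum estimated component power draws and add headroom for
    transient spikes and future upgrades."""
    return sum(component_watts.values()) + headroom_watts

build = {"gpu": 320, "cpu": 170, "motherboard_ram_storage": 80, "fans": 30}
print(recommended_psu_watts(build))  # 700
```

In practice you’d then round up to the next common PSU size (750W here), and add roughly a GPU’s worth of wattage for each additional card you plan to install.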

Expect to do more research before you start building. These are the basics to consider, but there’s definitely more to know! Keep learning, keep experimenting, and you just might train one of the models that defines 21st-century tech.

About the author 

Kyrie Mattos
