Open Source

Follow-up post: the builder decided to go ahead with the 2x RTX PRO 6000 tower.

A custom PC build packs 192GB of GDDR7 VRAM and 128 PCIe 5.0 lanes for massive AI workloads.

Deep Dive

An AI hardware enthusiast has gone viral by detailing a custom, no-compromise workstation build designed to consolidate serious compute power. The system centers on an AMD Threadripper PRO 7965WX platform paired with an ASUS WRX90E-SAGE SE motherboard, providing 128 PCIe 5.0 lanes to feed two flagship NVIDIA RTX PRO 6000 Blackwell GPUs. This dual-GPU configuration delivers a combined 192GB of GDDR7 ECC video memory, a critical resource for running and fine-tuning the largest open-source AI models without hitting memory constraints.
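To make the 192GB figure concrete, a common back-of-the-envelope estimate for whether a model's weights fit in VRAM is parameter count times bytes per parameter. The sketch below uses that simplified rule (real usage adds KV cache, activations, and framework overhead, so treat the numbers as lower bounds):

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate VRAM needed just for model weights, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

VRAM_GB = 192  # combined GDDR7 across the two RTX PRO 6000 cards

# Rough check for a 70B and a 405B model at different precisions:
for params, prec, bpp in [(70, "FP16", 2.0), (405, "FP16", 2.0), (405, "4-bit", 0.5)]:
    need = weights_gb(params, bpp)
    print(f"{params}B @ {prec}: ~{need:.0f} GB weights -> fits: {need < VRAM_GB}")
```

Under this estimate a 70B model in FP16 (~140 GB of weights) fits comfortably, while a 405B model does not even at 4-bit quantization, which is roughly where "the largest open-source models" start to strain a 192GB budget.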

The builder selected components for maximum stability and throughput under sustained load. Power is supplied by a 1600W 80+ Titanium PSU on a dedicated 20A circuit, while a Samsung 9100 PRO 8TB PCIe 5.0 SSD delivers sequential reads of up to 14,800 MB/s for loading models. The build is explicitly tailored for AI development workflows, including local LLM inference, creating embeddings for retrieval-augmented generation (RAG), and running vector databases like Qdrant. It represents a peak DIY approach to creating a 'local datacenter' capable of handling tasks typically reserved for cloud instances.
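The core of the RAG workflow mentioned above is nearest-neighbor search over embedding vectors. As a minimal stand-in for a real vector database like Qdrant, the sketch below ranks documents by cosine similarity; the 3-dimensional vectors and document names are made-up placeholders (real embeddings would come from a local model and live in the database):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy in-memory "vector store"; placeholder vectors, not real embeddings.
docs = {
    "gpu specs": [0.9, 0.1, 0.0],
    "psu wiring": [0.1, 0.8, 0.1],
    "ssd speeds": [0.0, 0.2, 0.9],
}

def top_match(query_vec: list[float]) -> str:
    """Return the document whose vector is most similar to the query."""
    return max(docs, key=lambda name: cosine(query_vec, docs[name]))

print(top_match([1.0, 0.0, 0.1]))  # nearest document by cosine similarity
```

A production setup would swap this dictionary for a Qdrant collection, but the retrieval logic, embed the query and return the closest stored vectors, is the same.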

Key Points
  • Dual NVIDIA RTX PRO 6000 Blackwell GPUs provide 192GB of GDDR7 ECC VRAM for running massive AI models.
  • AMD Threadripper PRO 7965WX CPU and ASUS WRX90 motherboard offer 128 PCIe 5.0 lanes for unimpeded GPU data transfer.
  • System includes a 1600W PSU on a dedicated circuit and PCIe 5.0 storage for a full-stack AI development environment.

Why It Matters

The build showcases the extreme hardware now available for professionals to run cutting-edge AI workloads entirely on-premises, bypassing cloud costs and latency.