[D] Seeking Advice: WSL2 vs Dual Boot for ML development with an RTX 5080
A developer with an RTX 5080 and spare 2TB NVMe seeks the optimal local ML environment.
A developer building a machine learning workstation faces a classic infrastructure dilemma: convenience versus performance. Their new PC, equipped with an unreleased NVIDIA RTX 5080 GPU, an AMD Ryzen 7 9800X3D CPU, and 64GB of RAM, is intended for local model training. The core question is whether to use Windows Subsystem for Linux 2 (WSL2) with GPU passthrough or to dedicate a separate 2TB NVMe drive to a native Ubuntu dual-boot installation. The developer's workflow involves SSH-ing from a MacBook Pro into the Linux environment to use the desktop's GPU for compute, making reliable remote access a key requirement.
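The remote-access side of this workflow is the same regardless of which option wins, since both WSL2 and a native Ubuntu install can run an SSH server. A minimal sketch of the client-side setup might look like the following, where the host alias, username, IP address, and forwarded port are all placeholders rather than details from the discussion:

```
# ~/.ssh/config on the MacBook (hostnames/IPs are illustrative assumptions)
Host ml-rig
    HostName 192.168.1.50              # desktop's LAN address
    User dev
    LocalForward 8888 localhost:8888   # e.g. forward a Jupyter server to the laptop
```

One practical difference worth noting: a native Ubuntu boot listens on the machine's own address, while a WSL2 instance sits behind Windows NAT, so reaching it from another machine typically requires a Windows port-forwarding rule or WSL's mirrored-networking mode.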
While WSL2 offers seamless integration with the existing Windows 11 setup and eliminates the need to reboot, the developer is concerned about performance overhead and long-term support for cutting-edge ML libraries and CUDA toolkits. The completely unused, high-speed Samsung 990 EVO Plus SSD is a strong argument for a dedicated Linux install, which is widely considered more stable and "future-proof" for intensive development. The community discussion notes that while WSL2's CUDA support has matured significantly, some developers still hit subtle driver compatibility issues or performance penalties in I/O-heavy training pipelines; the penalty is most severe when training data lives on the Windows filesystem and is accessed through the 9P-backed /mnt/c mount, an overhead a bare-metal Linux installation avoids entirely.
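The I/O concern is easy to check empirically before committing to either setup. A rough sketch like the one below (paths and sizes are illustrative, not from the original discussion) writes a scratch file and times a sequential read; on a WSL2 box, running it once against a directory under /mnt/c and once against the ext4 root makes the filesystem penalty visible. Note that repeated reads may be served from the page cache, so this measures a best case rather than raw disk speed.

```python
# Quick sequential-read benchmark for comparing storage paths, e.g. a
# WSL2 /mnt/c mount versus the ext4 root, or a native Ubuntu install.
import os
import tempfile
import time


def read_throughput_mib_s(path: str, size_mb: int = 64, chunk: int = 1 << 20) -> float:
    """Write a scratch file of `size_mb` MiB under `path`, then time a
    full sequential read and return throughput in MiB/s."""
    scratch = os.path.join(path, "io_bench.tmp")
    data = os.urandom(chunk)
    with open(scratch, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
    start = time.perf_counter()
    with open(scratch, "rb") as f:
        while f.read(chunk):
            pass
    elapsed = time.perf_counter() - start
    os.remove(scratch)
    return size_mb / elapsed


if __name__ == "__main__":
    # On WSL2, compare e.g. "/mnt/c/tmp" (9P-backed) against "/tmp" (ext4);
    # here a throwaway temp directory stands in for either.
    with tempfile.TemporaryDirectory() as d:
        print(f"{read_throughput_mib_s(d):.1f} MiB/s")
```

If the /mnt/c numbers come in far below the ext4 numbers, keeping datasets inside the WSL2 filesystem (or going dual-boot) is the obvious mitigation.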
This debate underscores a persistent friction point in the AI development ecosystem, where the user-friendliness of Windows clashes with the de facto standard of Linux for production-grade ML. The choice impacts daily productivity, software dependency management, and ultimately, the efficiency of utilizing expensive hardware like the upcoming RTX 5080. The developer's specific hardware configuration makes this a notable case study for professionals investing in high-end local rigs.
- Hardware includes an unreleased NVIDIA RTX 5080 and a spare 2TB Samsung 990 EVO Plus NVMe drive.
- Workflow plan involves SSH-ing from a MacBook Pro to use the desktop's GPU for remote ML training.
- Core trade-off is WSL2's convenience on Windows 11 versus a native Linux dual-boot's perceived performance and compatibility.
Why It Matters
The optimal local setup directly impacts productivity and hardware utilization for developers running expensive training jobs.