Self-Hosting, Power Consumption, Profitability, and the Cost of Privacy in France
A detailed breakdown shows running dual RTX 3090s for AI inference costs €357-435 annually in France.
A detailed, viral analysis from a French AI hobbyist has laid bare the significant energy and financial costs of self-hosting large language models. The user, who runs a rig with two NVIDIA RTX 3090 GPUs, measured its power draw at 120W idle and 700-800W during active inference. Applying France's multi-tiered electricity pricing, including the standard Tarif Bleu (€0.194/kWh), off-peak Heures Creuses rates, and the dynamic Tempo tariff, they calculated an annual cost of €357-435 for electricity alone, excluding hardware, cooling, and internet.
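The arithmetic behind that range can be sketched in a few lines. The post's daily split between idle and inference hours is not quoted here, so the 4 hours/day of load below is an illustrative assumption; with the quoted Tarif Bleu rate it lands inside the reported €357-435 band.

```python
def annual_cost_eur(idle_w: float, load_w: float,
                    load_hours_per_day: float,
                    tariff_eur_per_kwh: float) -> float:
    """Annual electricity cost for a rig that idles the rest of the day."""
    idle_hours = 24 - load_hours_per_day
    daily_kwh = (idle_w * idle_hours + load_w * load_hours_per_day) / 1000
    return daily_kwh * 365 * tariff_eur_per_kwh

# Figures from the post: 120W idle, ~750W (mid-range) inference draw,
# Tarif Bleu at €0.194/kWh. The 4h/day of inference is an assumption.
cost = annual_cost_eur(idle_w=120, load_w=750,
                       load_hours_per_day=4,
                       tariff_eur_per_kwh=0.194)
print(f"€{cost:.2f}/year")  # ~€382, within the reported €357-435 range
```

Varying the load hours or swapping in Tempo or Heures Creuses rates moves the result across the quoted range, which is likely why the post reports a band rather than a single figure.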
The analysis highlights the trade-off between privacy and control on one hand and operating expense on the other. While self-hosting avoids API fees and data sharing, the poster notes that the total cost of ownership must also factor in upfront capital expenditure, hardware depreciation, and maintenance. The post has sparked a broader community discussion of practical strategies, such as shifting compute to off-peak hours, powering down idle rigs, or falling back to cloud APIs like Anthropic's Claude or OpenAI's ChatGPT Pro during peak pricing periods. For professionals and enthusiasts, it serves as a useful real-world case study in whether local control is worth the operational overhead compared to managed services.
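One of those strategies, shifting inference into off-peak Heures Creuses windows, is easy to quantify. The peak/off-peak rates below are illustrative placeholders (the post does not quote them), but the structure of the saving is the same for any peak/off-peak pair:

```python
# Illustrative (assumed) peak / off-peak rates in €/kWh;
# substitute the actual rates from your EDF contract.
PEAK_RATE = 0.27
OFFPEAK_RATE = 0.2068

LOAD_KW = 0.750          # ~750W dual-3090 inference draw, per the post
LOAD_HOURS_PER_DAY = 4   # assumed daily inference workload

# Annual energy used by inference alone (idle cost is unaffected by timing).
inference_kwh = LOAD_KW * LOAD_HOURS_PER_DAY * 365

at_peak = inference_kwh * PEAK_RATE
at_offpeak = inference_kwh * OFFPEAK_RATE
print(f"peak: €{at_peak:.2f}, off-peak: €{at_offpeak:.2f}, "
      f"saved: €{at_peak - at_offpeak:.2f}")
```

Under these assumed rates the shift saves on the order of €70/year, which helps explain why scheduling batch workloads overnight is a recurring suggestion in the thread.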
- Dual RTX 3090 AI rig consumes 700-800W during inference, costing €357-435/year in electricity in France.
- Analysis compares France's complex utility tariffs: Tempo, Heures Creuses, and the standard Tarif Bleu.
- Post sparks debate on cost-benefit of self-hosting for privacy vs. using cloud APIs like Claude or ChatGPT.
Why It Matters
For businesses and developers, real-world numbers like these are a valuable input for calculating the true total cost of ownership of on-premise AI versus managed cloud services.