
Building an Efficient GPU Server with NVIDIA GeForce RTX 4090s/5090s

Marco Mascorro · Posted April 3, 2025


In today’s AI-driven world, the ability to train AI models locally and perform fast inference on GPUs at an optimal cost is more important than ever. Building your own GPU server with an RTX 4090 or RTX 5090 — like the one described here — enables a high-performance eight-GPU setup running on PCIe 5.0 with full x16 lanes. This configuration ensures maximum interconnect speed for all eight GPUs. In contrast, most similar setups are limited by the PCIe bus version (such as PCIe 4.0 or even lower), due to the challenges of running longer PCIe extensions.

Running models locally means no API calls to external services, no data leakage, and no usage throttles. Your data stays yours: no sharing logs with cloud providers, no sending sensitive documents to external model providers. It’s perfect for research and for privacy-conscious developers.

With that in mind, we decided to build our own GPU server using readily available and affordable hardware. It certainly is not production-ready, but it is more than capable as a platform. (Disclaimer: This project was developed solely for research and educational purposes.)

This guide will walk you through our process of building a highly efficient GPU server using NVIDIA’s GeForce RTX 4090s. We’ll be constructing two identical servers, each housing eight RTX 4090 GPUs and delivering an impressive amount of computational power in a relatively simple and cost-effective package. All GPUs will run on their full 16 lanes with PCIe 4.0. (Note: We have built and tested our servers exclusively with the GeForce RTX 4090. While we have not yet tested them with the RTX 5090, they should be compatible and are expected to run with PCIe 5.0.)

Why build this server?

In an era of rapidly evolving AI models and increasing reliance on cloud-based infrastructure, there’s a strong case for training and running models locally, especially for research, experimentation, and gaining hands-on experience in building a custom GPU-server setup. 

The NVIDIA RTX series of GPUs presents a compelling option for this type of project, offering phenomenal performance at a competitive cost.

The RTX 4090 and RTX 5090 are absolute beasts. With 24GB of VRAM and 16,384 CUDA cores on the RTX 4090, and 32GB of VRAM and 21,760 CUDA cores on the RTX 5090, both deliver exceptional FP16/BF16 and tensor performance — rivaling datacenter GPUs at a fraction of the cost. While enterprise-grade options like the H100 or H200 offer top-tier performance, they come with a hefty price tag. For less than the cost of a single H100, you could stack multiple (4-8) RTX 4090s or 5090s and still achieve serious throughput for inference and even training smaller models.

Building a small GPU server, particularly with powerhouse GPUs like the NVIDIA RTX 4090 or the new RTX 5090, provides exceptional flexibility, performance, and privacy for running large language models (LLMs) such as LLaMA, DeepSeek, and Mistral, as well as diffusion models, or even custom fine-tuned variants. Modern open-source models are designed with efficient inference in mind, often using Mixture of Experts (MoE) architectures, and the 4090s can easily handle those workloads. Depending on their parameter size, many of these models can also run as dense models on a small server like ours, without requiring quantization.

Want to build your own Copilot? A personal chatbot? A local RAG pipeline? Done.

Using libraries like vLLM, GGUF/llama.cpp, or even full PyTorch inference with DeepSpeed, you can take advantage of:

  • Model parallelism
  • Tensor or pipeline parallelism
  • Quantization to reduce VRAM load
  • Memory-efficient inference with paged attention or streaming
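To make the VRAM arithmetic behind quantization and tensor parallelism concrete, here is a rough sketch. The helper function is hypothetical (not from any library), and it estimates weight memory only; the KV cache and activations need additional headroom on top.

```python
# Hypothetical helper: estimate per-GPU memory for model weights alone.
def weight_vram_gb(params_billions: float, bits_per_param: int, num_gpus: int = 1) -> float:
    """Approximate weight memory per GPU in GB (ignores KV cache and activations)."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / num_gpus / 1e9

print(weight_vram_gb(70, 16))     # 140.0 -> too big for any single 24 GB RTX 4090
print(weight_vram_gb(70, 16, 8))  # 17.5  -> fits when sharded across eight GPUs
print(weight_vram_gb(70, 4))      # 35.0  -> 4-bit quantization shrinks it 4x
```

This is why a 70B dense model at FP16 needs the full eight-GPU setup, while 4-bit quantization brings the same model within reach of just two cards.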

You’re in full control of how your GPU server is optimized, patched, and updated.

Sample setup

Before we dive into the build process, let’s discuss why this particular server configuration is worth considering:

  1. Simplicity: While building a high-performance GPU server might seem daunting, the parts we use and our adaptations are accessible to those with intermediate technical skills.
  2. PCIe 5.0 future-proofing: The server offers eight PCIe 5.0 x16 slots, providing maximum bandwidth and future-proofing for high-performance GPUs. While the RTX 4090 is limited to PCIe 4.0 speeds, this setup allows for seamless upgrades to next-generation PCIe 5.0 GPUs, such as the GeForce RTX 5090.
  3. PCIe board configuration for eight 3-slot GPUs (e.g., RTX 4090 or RTX 5090): In this setup, the PCIe board is separate from the main motherboard, a unique design that enables two independent PCIe 5.0 boards to be mounted individually. This configuration accommodates all eight GPUs (four on the bottom and four on the top) without requiring additional complex and expensive PCIe retimers or redrivers. Typically, such components are needed to maintain signal integrity when PCIe signals travel across longer traces, cables, or connectors. By minimizing signal path length and complexity, this design ensures full-speed connectivity with greater simplicity and reliability.
  4. Superior to traditional server layouts: Many server alternatives that offer eight PCIe 5.0 x16 lanes have them directly integrated into the main motherboard. However, this layout makes it physically impossible to fit eight RTX 4090s, due to their 3-slot width. Our configuration solves this limitation by separating the PCIe boards from the motherboard, enabling full support for eight triple-slot GPUs without compromise, with a custom aluminum frame built to hold four external GPUs.
  5. Direct PCIe connection: The PCIe PCB card connects to the motherboard using the original communication cables that come with the server, eliminating the need for PCIe extender cables, retimers, or switches. This is a crucial advantage, as extender cables can disrupt the PCIe bus impedance, potentially causing the system to downgrade to lower PCIe versions (such as 3.0 or even 1.0), resulting in significant performance loss.
  6. Custom frame solution: We’ll be using a custom frame, built from GoBilda components commonly used in robotics, to securely hold the top four external GPUs. This enables eight 3-slot GPUs to fit in this server setup with the original PCIe 5.0 cards and cables, without the need for PCIe redrivers or PCIe cable extensions.
  7. Simple power distribution: Power is distributed to both PCIe boards using ATX 24-pin and 6-pin motherboard power extension cables, an ATX 24-pin Y splitter, and a 6-pin Y splitter.
  8. High-performance infrastructure: We operate our GPU setup on a 220V power supply and utilize a symmetrical 10G single-mode fiber internet connection.
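The 220V requirement follows from a quick power budget. The figures below are nameplate assumptions (board power limits and TDPs, not measurements), sketched here to show the order of magnitude:

```python
# Back-of-the-envelope power budget for this build. All wattages are assumed
# nameplate figures, not measured draw.
GPU_W   = 450   # RTX 4090 board power limit
CPU_W   = 200   # TDP per AMD EPYC 9254
OTHER_W = 400   # rough allowance for RAM, fans, NVMe, PSU losses

total_w = 8 * GPU_W + 2 * CPU_W + OTHER_W
amps_at_220v = total_w / 220

print(total_w)                  # 4400 W peak
print(round(amps_at_220v, 1))   # 20.0 A at 220 V
```

At 110V the same load would draw roughly 40A, well beyond a typical single branch circuit, which is one reason this setup runs on 220V.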

Server specifications

Before we begin the build process, let’s review the key components and specifications of our GPU server:

  • Server model: ASUS ESC8000A-E12P
  • GPUs: 8x NVIDIA RTX 4090
  • CPU: 2x AMD EPYC 9254 Processor (24-core, 2.90GHz, 128MB Cache)
  • RAM: 24x 16GB PC5-38400 4800MHz DDR5 ECC RDIMM (384GB total)
  • Storage: 1.92TB Micron 7450 PRO Series M.2 PCIe 4.0 x4 NVMe SSD (110mm)
  • Operating system: Ubuntu Linux 22.04 LTS Server Edition (64-bit)
  • Networking: 2 x 10GbE LAN ports (RJ45, X710-AT2), one utilized at 10Gb
  • Additional PCIe 5.0 card: ASUS 90SC0M60-M0XBN0

Build process

Next, let’s walk through the step-by-step process of assembling our high-performance GPU server.

Step 1: Prepare the Server Chassis

  1. Start with the ASUS ESC8000A-E12P server chassis.
  2. Remove the top cover and any unnecessary internal components to make room for our custom configuration.

Step 2: Install RAM

  1. Install the 24x 16GB DDR5 ECC RDIMM modules into the appropriate slots on the motherboard.
  2. Ensure they are properly seated and locked in place.

Step 3: Install storage

  1. Locate the M.2 slot on the motherboard.
  2. Install the 1.92TB Micron 7450 PRO Series M.2 PCIe 4.0 NVMe SSD.

Step 4: Prepare the PCIe boards

  1. Install the ASUS 90SC0M60-M0XBN0 PCIe 5.0 additional card.
  2. Redirect four pairs of the cables (which are labeled with numbers) from the original PCIe card that is already installed in the server (the bottom PCIe card). We alternated the sequence: set 1 stays in the bottom PCIe card, set 2 goes to the top card, set 3 stays in the bottom PCIe card, and so on.
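The alternation described above can be sketched as a simple mapping. This assumes eight numbered cable sets (odd sets stay on the bottom card, even sets move to the top card); the exact count and labels are whatever ships with the ESC8000A-E12P:

```python
# Assumed alternating assignment of numbered cable sets to the two PCIe boards:
# odd-numbered sets stay on the bottom card, even-numbered sets go to the top.
assignment = {n: ("bottom" if n % 2 == 1 else "top") for n in range(1, 9)}
print(assignment)
# {1: 'bottom', 2: 'top', 3: 'bottom', 4: 'top', 5: 'bottom', 6: 'top', 7: 'bottom', 8: 'top'}
```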

Step 5: Create a “Y” splitter cable for ATX 24-pin and the 6-pin connectors

  1. Create “Y” splitter cable extensions to supply power to the external 90SC0M60-M0XBN0 PCIe 5.0 expansion card, which will be mounted on top of the server.
  2. Ensure that the “Y” splitter cable extensions are of the appropriate gauge to safely handle the power requirements of both the external PCIe card and the GPUs.

24-pin and 6-pin power connectors

Step 6: Install lower GPUs

  1. Install four NVIDIA RTX 4090 GPUs into the lower PCIe slots on the original PCIe card located next to the motherboard.
  2. Ensure proper seating and secure them in place.

Step 7: Prepare custom frame for upper GPUs and install them

  1. Build a custom frame using GoBilda components.
  2. Ensure the frame is sturdy and properly sized to hold four RTX 4090 GPUs.
  3. Ensure you are using power cables with the proper gauge to handle each GPU.
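Cable gauge matters because of the current each GPU pulls on its 12V power input. A quick sketch, using the 4090's 450W board power limit as an assumed figure (actual draw varies with workload):

```python
# Rough current per GPU on the 12 V power rail. The 450 W figure is the
# RTX 4090's nameplate board power limit, not a measurement.
gpu_watts = 450
rail_volts = 12
amps_per_gpu = gpu_watts / rail_volts

print(amps_per_gpu)  # 37.5 A per GPU
```

Nearly 40A per card is why undersized or poorly seated power cables are a real fire risk in builds like this.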

Step 8: Networking setup

  1. Identify the two 10GbE LAN ports (RJ45, X710-AT2) on the server.
  2. Connect one of the ports to your 10G single-mode fiber network interface. 

Step 9: Final assembly and cable management

  1. Double-check all connections and component placements.
  2. Implement proper cable management to ensure optimal airflow and thermal performance, and leave enough space between servers for ventilation.

Step 10: Operating system installation

  1. Create a bootable USB drive with Ubuntu Linux 22.04 LTS Server Edition (64-bit).
  2. Boot the server from the USB drive and follow the installation prompts.
  3. Once installed, update the system and install necessary drivers for the GPUs and other components.
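After installing the NVIDIA drivers (for example via `sudo ubuntu-drivers install`, one common route on Ubuntu 22.04), it is worth confirming that every GPU actually negotiated its full PCIe 4.0 x16 link. The parsing helper below is a hypothetical sketch, but the query fields shown in the comment are standard `nvidia-smi` fields:

```python
import csv
import io

def parse_pcie_status(csv_text: str):
    """Parse nvidia-smi CSV output into (name, pcie_gen, pcie_width) tuples."""
    rows = csv.reader(io.StringIO(csv_text.strip()))
    return [(name.strip(), int(gen), int(width)) for name, gen, width in rows]

# On the server itself, produce the input with:
#   nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current \
#              --format=csv,noheader
sample = "NVIDIA GeForce RTX 4090, 4, 16\nNVIDIA GeForce RTX 4090, 4, 16"
for name, gen, width in parse_pcie_status(sample):
    # Every 4090 should report Gen4 x16; anything lower points to a link problem.
    assert gen >= 4 and width == 16, f"{name} is not running at PCIe 4.0 x16"
```

A link that trained down to Gen3 or x8 usually means a poorly seated card or cable, which is exactly the failure mode this build's direct PCIe connections are meant to avoid.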

Final build

Once you’ve completed all the steps, you should have a GPU server that is ready to get to work!

