this post was submitted on 27 May 2024
16 points (100.0% liked)

homelab


I've noticed recently that my network speed isn't what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DAC. I've tested the speed by transferring files from the NAS with SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with CAT6: OpenSpeedTest shows around 2.5-3Gbps to Proxmox, which is much slower than I'd expect. Transferring a file from my NAS hits a max of around 700-800MB/s (bytes, not bits), which is about what I'd expect given hard drive speed and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gbps, which is much slower than I would expect. I'm using VirtIO network drivers, so I should realistically only be limited by CPU; it's all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow, even given the HDD bottleneck and SMB overhead.

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this shows that the bottleneck is Proxmox. The more VMs involved in the transfer, the slower it gets. I'm not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (dual Xeon E5-2650 v2); is it just too slow to pass the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn't get past 60% or so, but maybe Proxmox itself is CPU-bound?

The bulk of my network traffic goes in and out of the VMs on Proxmox, so it's important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.

all 12 comments
[–] biscuitswalrus@aussie.zone 5 points 5 months ago* (last edited 5 months ago)

I've used VirtIO with Nutanix before, and testing with iperf rather than OpenSpeedTest, I measured line rate across hosts.

However, I also know network cards matter a lot. Some network cards, especially the cheap Intel X710, suck: they lack certain offload features, so the host CPU has to do all the packet processing itself, which significantly slows throughput.

Switching to Mellanox 25G cards brought all VM network performance up to the expected line rate, even between VMs on the same host.

That was not a home lab though, that was production at a client.

Edit: sorry, I meant to wrap up with:

  • To test, use iperf (you could run it in UDP mode at 10Gbit and leave it running continuously; in UDP mode you need to set the bandwidth it should try to send).
  • While testing, watch CPU usage on the host. (Example iperf3 commands are sketched after this list.)
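
A minimal sketch of that kind of test, assuming iperf3 is installed on both machines and using 192.168.1.10 as a placeholder for the receiving side's address:

```
# on the receiving end (VM, host, or NAS)
iperf3 -s

# TCP test from the other machine: 4 parallel streams for 30 seconds
iperf3 -c 192.168.1.10 -P 4 -t 30

# UDP test pushing 10Gbit, as suggested above (iperf3's UDP default is only ~1Mbit)
iperf3 -c 192.168.1.10 -u -b 10G -t 60
```

While a test runs, watching htop on the Proxmox host should show whether a single core (for example a vhost thread serving a VM's VirtIO NIC) is pinned at 100%, which would point at a CPU bottleneck rather than the cards or cables.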

If you want to exclude Proxmox, you could boot another Linux system from a live USB on the host and test iperf over the LAN to another device.

[–] possiblylinux127@lemmy.zip 1 points 5 months ago* (last edited 5 months ago) (1 children)

My guess is there is a "glitch" somewhere in the middle. If not, then it might be SMB or your drive speeds.

Can you try doing a speed check between hosts? Also, I would make sure that the networking is properly paravirtualized; a rough config sketch is below. You could also try swapping out your network cables.
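
For reference, a hedged sketch of what a paravirtualized NIC looks like in a Proxmox VM config. The VM ID 100 and the MAC address are placeholders; the queues option (VirtIO multiqueue) is an extra, optional setting Proxmox documents for pushing multi-gigabit traffic, mentioned here only as something worth checking:

```
# /etc/pve/qemu-server/100.conf -- the NIC line should use the virtio model
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4

# the same change from the CLI (placeholder VM ID and MAC)
qm set 100 --net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4
```

If the line says e1000 or rtl8139 instead of virtio, the VM is using a fully emulated NIC, which usually caps throughput well below 10Gb.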

[–] corroded@lemmy.world 1 points 5 months ago (1 children)

When I use OpenSpeedTest to test against another VM, it doesn't read or write from the HDD, and it doesn't leave the Proxmox NIC. It's all direct from one VM to another. The only limitations are CPU and perhaps RAM. Network cables wouldn't have any effect on this.

I'm using VirtIO (paravirtualized) for the NICs on all my VMs. Are there other paravirtualization options I need to be looking into?

[–] possiblylinux127@lemmy.zip 1 points 5 months ago

I don't have a lot of experience with high-speed networking, but as speeds go up, the overhead tends to grow out of proportion. I think you should try mounting the network share on the Proxmox host itself to test the speed without the complexity of the VMs; a rough sketch of that test is below. If you get the results you are looking for, then you are good; but if it is bottlenecked there, the bottleneck is the NAS or SMB. SMB is particularly hard to overcome, as it seems to be slow no matter what you do.
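
A minimal sketch of that host-side test, assuming the host has cifs-utils installed and using //nas.local/tank, /mnt/nastest, and bigfile.bin as placeholders for your own share, mount point, and a large existing file:

```
# on the Proxmox host: mount the NAS share directly, bypassing the VMs entirely
mkdir -p /mnt/nastest
mount -t cifs //nas.local/tank /mnt/nastest -o username=youruser

# read a large file and watch the sustained throughput
dd if=/mnt/nastest/bigfile.bin of=/dev/null bs=1M status=progress
```

If that read comes in near the 700-800MB/s seen from the bare-metal PC, the NAS and SMB side is fine and the slowdown is somewhere in the virtualization layer.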