

Something I just read.

 

 

UDP tuning

User Datagram Protocol (UDP) is a datagram protocol that is used by Network File System (NFS), name server (named), Trivial File Transfer Protocol (TFTP), and other special purpose protocols.

Since UDP is a datagram protocol, the entire message (datagram) must be copied into the kernel on a send operation as one atomic operation. The datagram is also received as one complete message on the recv or recvfrom system call. You must set the udp_sendspace and udp_recvspace parameters to handle the buffering requirements on a per-socket basis.

The largest UDP datagram that can be sent is 64 KB, minus the UDP header size (8 bytes) and the IP header size (20 bytes for IPv4 or 40 bytes for IPv6 headers).
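To make the datagram semantics concrete, here is a minimal C sketch (my own illustration, not from the quoted documentation): each sendto() hands the kernel one atomic datagram, and anything above the roughly 64 KB limit is rejected with EMSGSIZE rather than being split by the socket layer. The destination address and port are placeholders.

/* Minimal sketch: a UDP send is one atomic datagram, and an oversized
 * send fails outright instead of being split by the socket layer.
 * Address and port below are placeholders chosen for illustration. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9999);                    /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* TEST-NET address */

    static char payload[70000];                      /* deliberately > 64 KB */
    memset(payload, 'x', sizeof(payload));

    ssize_t n = sendto(s, payload, sizeof(payload), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
    if (n < 0 && errno == EMSGSIZE)
        puts("datagram exceeds the 64 KB UDP limit (EMSGSIZE)");
    else if (n < 0)
        perror("sendto");
    else
        printf("sent one %zd-byte datagram\n", n);

    close(s);
    return 0;
}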
The following tunables affect UDP performance:

    udp_sendspace
    udp_recvspace
    UDP packet chaining
    Adapter options, like interrupt coalescing

 

Set the udp_sendspace tunable to a value equal to or greater than the largest UDP datagram that will be sent.

For simplicity, set this parameter to 65536, which is large enough to handle the largest possible UDP packet. There is no advantage to setting this value larger.
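udp_sendspace is the system-wide default on AIX (set with the no command), but the same idea applies per socket: an application can request its own send buffer with SO_SNDBUF, and the kernel caps the request at the system limit (sb_max on AIX; other systems apply their own caps). A minimal, non-AIX-specific sketch:

/* Sketch: request a 64 KB per-socket send buffer with SO_SNDBUF and read
 * back what the kernel actually granted. On AIX the system-wide default
 * comes from udp_sendspace and is capped by sb_max. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int sndbuf = 65536;   /* "large enough for the largest UDP packet" */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        perror("setsockopt(SO_SNDBUF)");

    socklen_t len = sizeof(sndbuf);
    if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0)
        printf("effective send buffer: %d bytes\n", sndbuf);

    close(s);
    return 0;
}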

 

The udp_recvspace tunable controls the amount of space for incoming data that is queued on each UDP socket. Once the udp_recvspace limit is reached for a socket, incoming packets are discarded.

[...]

Set the udp_recvspace tunable to a high value because multiple UDP datagrams might arrive and wait on a socket for the application to read them. Also, many UDP applications use a single socket to receive packets from all clients talking to the server application, so the receive space needs to be large enough to handle a burst of datagrams that might arrive from multiple clients and be queued on the socket, waiting to be read. If this value is too low, incoming packets are discarded and the sender has to retransmit them, which can cause poor performance.

Because the communication subsystem accounts for the buffers used, not the contents of the buffers, you must allow for this when setting udp_recvspace. For example, an 8 KB datagram would be fragmented into 6 packets, which would use 6 receive buffers. These are 2048-byte buffers for Ethernet, so the total amount of socket buffer consumed by this one 8 KB datagram is 6 * 2048 = 12,288 bytes.

Thus, udp_recvspace must be adjusted upward depending on how efficient the incoming buffering is, which varies by datagram size and by device driver. Sending 64-byte datagrams, for example, would consume a full 2 KB buffer for each one.

Then, you must account for the number of datagrams that may be queued on this one socket. For example, an NFS server receives UDP packets on one well-known socket from all clients. If the queue depth of this socket can reach 30 packets, you would use 30 * 12,288 = 368,640 for udp_recvspace if NFS is using 8 KB datagrams. NFS Version 3 allows up to 32 KB datagrams.

A suggested starting value for udp_recvspace is 10 times the value of udp_sendspace, because UDP may not be able to pass a packet to the application before another one arrives. Also, several nodes can send to one node at the same time. To provide some staging space, this size is set to allow 10 packets to be staged before subsequent packets are discarded. For large parallel applications using UDP, the value may have to be increased.
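To make the buffer accounting concrete, here is a small C sketch that simply reproduces the arithmetic from the text (2 KB Ethernet receive buffers, 8 KB NFS datagrams, a queue depth of 30, and the 10 x udp_sendspace starting rule). The figures are illustrative, not measurements.

/* Sketch of the udp_recvspace sizing arithmetic described above. */
#include <stdio.h>

int main(void)
{
    const int cluster       = 2048;       /* receive buffer size on Ethernet */
    const int fragments     = 6;          /* fragments per 8 KB datagram     */
    const int queue_depth   = 30;         /* datagrams queued on one socket  */
    const int udp_sendspace = 65536;

    int per_datagram  = fragments * cluster;          /* 6 * 2048  = 12,288  */
    int nfs_estimate  = queue_depth * per_datagram;   /* 30 * 12288 = 368,640 */
    int rule_of_thumb = 10 * udp_sendspace;           /* 10 * 65536 = 655,360 */

    printf("buffer space per 8 KB datagram:          %d bytes\n", per_datagram);
    printf("udp_recvspace for a 30-deep NFS socket:  %d bytes\n", nfs_estimate);
    printf("10 x udp_sendspace starting value:       %d bytes\n", rule_of_thumb);
    return 0;
}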

When UDP datagrams to be transmitted are larger than the adapter's MTU size, the IP protocol layer fragments the datagram into MTU-size fragments. Ethernet interfaces include a UDP packet chaining feature. [...]

UDP packet chaining causes IP to build the entire chain of fragments and pass that chain down to the Ethernet device driver in one call. This improves performance by reducing the calls down through the ARP and interface layers and to the driver. [...] It also helps the cache affinity of the code loops. These changes reduce the CPU utilization of the sender.

To avoid flooding the host system with too many interrupts, packets are collected and one single interrupt is generated for multiple packets. This is called interrupt coalescing.

For receive operations, interrupts typically inform the host CPU that packets have arrived on the device's input queue. Without some form of interrupt moderation logic on the adapter, this could lead to one interrupt for each incoming packet. However, as the incoming packet rate increases, the device driver finishes processing one packet and then checks whether any more packets are on the receive queue before exiting the driver and clearing the interrupt. Finding more packets to handle, it ends up processing multiple packets per interrupt, which means that the system gets more efficient as the load increases.

However, some adapters provide additional features that give even more control over when receive interrupts are generated. This is often called interrupt coalescing or interrupt moderation logic, which allows several packets to be received before a single interrupt is generated. A timer starts when the first packet arrives, and the interrupt is then delayed for n microseconds or until m packets arrive. The methods vary by adapter and by which of these features the device driver lets the user control.

Under light loads, interrupt coalescing adds latency to the packet arrival time. The packet is in host memory, but the host is not aware of the packet until some time later. However, under higher packet loads, the system performs more efficiently by using fewer CPU cycles because fewer interrupts are generated and the host processes several packets per interrupt.
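The quoted text is about AIX adapters, but on Linux the same "wait n microseconds or until m packets arrive" knobs are exposed through the ethtool coalescing interface. A minimal sketch that only reads the current settings via the SIOCETHTOOL ioctl; "eth0" is a placeholder interface name, and writing new values would use ETHTOOL_SCOALESCE and requires CAP_NET_ADMIN:

/* Sketch, assuming Linux and the classic ethtool ioctl interface. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
    struct ifreq ifr = {0};
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* placeholder NIC name */
    ifr.ifr_data = (char *)&ec;

    if (ioctl(s, SIOCETHTOOL, &ifr) < 0) {
        perror("SIOCETHTOOL");                     /* driver may not support it */
    } else {
        /* the "n microseconds or m packets" parameters from the text */
        printf("rx-usecs:  %u\n", ec.rx_coalesce_usecs);
        printf("rx-frames: %u\n", ec.rx_max_coalesced_frames);
    }

    close(s);
    return 0;
}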

[...]

Gigabit Ethernet adapters offer interrupt moderation features. [...]

 

Packet chaining, interrupt coalescing, and additional features like chimney offload are not supported by every interface or interface driver. They generally work with Realtek and Intel interfaces, though I have experienced some exceptions in the past.

Would be interesting to know if these are available in the TAP driver...




This might enlighten you a little more:

https://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux

 

Did you mean TAP driver for Windows?

Tap for Windows could probably be as fast as a water tap.

 

I couldn't find an accurate measurement anywhere showing Windows performing faster than Linux/BSD in terms of networking.



