hashswag

Eddie / UDP 443 on Windows vs Linux


I recently built a new PC to replace my aging workhorse.  The older PC is running Windows 7; the new one is running Linux Mint 17.1.  I have Eddie 2.8.8 installed on both and both are hardwired.  My internet service is Verizon FiOS 75/75.

Before running on Linux, I was convinced that Verizon was doing deep packet inspection and throttling all OpenVPN traffic except SSL. While they may still be doing so, the results below are interesting.

I connected to the closest servers (Chicago) to optimize performance for this test and used speedtest.net for the benchmark numbers, connecting to a Chicago-based Comcast server.

Linux:
UDP 443, Pavonis: 77.70 Down / 87.14 Up
UDP 443, Alkaid: 79.07 Down / 87.23 Up
SSL, Pavonis: 76.91 Down / 82.13 Up
SSL, Alkaid: 74.83 Down / 84.90 Up

Windows:
UDP 443, Pavonis: 6.45 Down / 8.78 Up
UDP 443, Alkaid: 6.76 Down / 43.81 Up
SSL, Pavonis: 51.95 Down / 32.89 Up
SSL, Alkaid: 58.85 Down / 36.15 Up

 

Notice that the UDP traffic on Linux is not throttled at all, while the Windows UDP traffic appears to be. Is this something to do with the way Windows handles UDP traffic?

 

I tried all other protocol options on Windows, and none (except for SSL) got me to an acceptable rate. But on Linux, the basic UDP 443 connection is blazing fast.

 

Can anyone explain why UDP on Windows is so slow?

 

It isn't due to the Windows hardware or network capacity, as enabling the more CPU-intensive SSL protocol speeds things up. I am running Comodo Firewall on Windows (to prevent DNS leaks), but disabling it did not affect the performance numbers.

 

The Windows hardware is an Intel Core 2 Quad at 3.0 GHz with 8 GB memory and a PCI Intel network card running at 1 Gbps.

 

The Linux hardware is an Intel Haswell i7-4790K at 4.6 GHz with 16 GB memory, using an onboard Intel network interface running at 1 Gbps.

 

 


Some OpenVPN options are only operable on non-Windows platforms; fast-io and mtu-disc come to mind.

 

So I'm not surprised to see differences, even though those two options are probably not enabled by default.
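For reference, a sketch of what those two directives look like as custom OpenVPN config lines (e.g. in Eddie's custom-directives field). This is illustrative, not a recommended configuration; both are documented in the OpenVPN 2.3 manual, and whether they help depends on platform and kernel support:

```
# Bypass the poll/select loop for UDP writes; ignored on Windows.
fast-io
# Enable path MTU discovery where the OS supports it; not available on Windows builds.
mtu-disc yes
```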


@hashswag

 

Hello!

 

Another possibility to take into consideration is that some packet inspection/filtering tool or "QoS management" is running on your Windows system. Inspecting encrypted packets is not only useless, but can considerably slow down throughput. You could check that. An additional idea/speculation: the QoS tool gives higher priority to TCP and de-prioritizes UDP. That would explain the performance gap you observe between a direct UDP connection and an "OpenVPN over SSL" connection: the performance hit from double tunneling and forcing OpenVPN to work in TCP would be much lower than the hit caused by the tool. You could also measure performance with direct TCP (without an additional SSL tunnel) to gather an additional clue.

 

Kind regards


Thanks for the responses.

 

I tested Windows and Linux, to the same AirVPN server, using speedtest.net, to the same test site (Comcast/Chicago).

 

Windows:
TCP 443, Pavonis: 12.57 Down / 17.48 Up

 

Linux:
TCP 443, Pavonis: 79.15 Down / 87.35 Up

 

So it appears the issue is not limited to UDP traffic.

 

This is the same issue I was seeing before I realized Linux works a ton better (at least for me). All protocols, UDP and TCP, were throttled on Windows, except for SSL, which led me to believe that Verizon FiOS was throttling OpenVPN traffic.

 

It now appears that they are not throttling (unless they throttle Windows-only traffic, which is doubtful), as Linux works very nicely with non-SSL UDP and TCP protocols.

 

My guess is it's either the Windows Eddie client, or the Windows network stack.

 

Thoughts?


did you see what Staff wrote about QoS on your windows machine?

Yes: "...the QoS tool gives higher priority to TCP and de-prioritizes UDP".  That's why I tried TCP to eliminate that theory.

 

I also turned off the QoS Packet Scheduler on Windows and retested.  Same results.


Hi,

 

I know this topic is old, but maybe my response can help someone. I struggled with this problem for more than three days (Linux speed fine, Windows speed poor).

 

Solution (works for Windows 7 64-bit Home Premium):

Go to AirVPN -> Preferences -> Advanced -> OVPN directives

Add in the left field:

 

sndbuf 65536
rcvbuf 65536
 
You can try higher values, but 65536 is the minimum (I get the best results using 524288, i.e. 512 KB; the default value is 8192, which is too low).

After adding these two lines, my speeds are similar on the Linux and Windows boxes.


Always nice when someone comes back later to report the fix, even if it's to their own problem; it's helpful for other users searching for resolutions to their own issues. Thanks for signing up to provide a resolution, khrs.

 

So khrs (or Staff), what are these values, and what are they 'fixing' under Windows? I'm just curious; we only run Linux, BSD and OS X here, so we don't have Windows boxes. I have seen this same speed issue on Windows in years past, though, hence the curiosity. Thanks.


These entries set the receive/send buffer sizes for the socket. On *nix/Mac OS X boxes they are usually set by default to 64 KB-128 KB. On my Windows 7 box the default was 8 KB (no idea why; the OpenVPN 2.3.x documentation says the default is 64 KB, but that was not the case for me).

 

You can see your current buffer sizes in AirVPN->Logs

 

e.g.:

. 2015.04.24 12:22:01 - OpenVPN > Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
. 2015.04.24 12:22:01 - OpenVPN > Socket Buffers: R=[8192->524288] S=[8192->524288]
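As an illustration of what that log line reflects, here is a minimal Python sketch (not AirVPN-specific) that reads a UDP socket's default buffer sizes and then requests larger ones, which is essentially what the sndbuf/rcvbuf directives do. Note that the kernel may round or clamp the request; on Linux the value is doubled for bookkeeping and capped at net.core.rmem_max / net.core.wmem_max:

```python
import socket

# Open a plain UDP socket and read the OS defaults -- these are the
# values OpenVPN logs as "Socket Buffers: R=[old->new] S=[old->new]".
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
default_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
default_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("default rcvbuf:", default_rcv, "default sndbuf:", default_snd)

# Request 512 KB buffers, just as "rcvbuf 524288" / "sndbuf 524288" would.
# The kernel may round or clamp this (Linux doubles the request and caps
# it at net.core.rmem_max / net.core.wmem_max).
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 524288)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 524288)
new_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
new_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("rcvbuf now:", new_rcv, "sndbuf now:", new_snd)
s.close()
```

Running it on a stock Linux box typically shows a ~208 KB default; seeing 8 KB, as on the Windows 7 box above, is unusually low.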


Wow. I feel I really have to spend much more time reading technical things. In my whole life this never really came to mind, because I could solve most problems by trial and error, and that has worked really well so far. Interesting lesson in life, Mr. khrs.

 

I noticed this buffer before but never really paid attention to it, since OpenVPN didn't change my system's default socket buffers (64 kB). Anyway, my internet speed is too slow to be really affected by such details; one more factor.

 

More explanation with an example of how to calculate this
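The linked explanation no longer resolves here, but the usual back-of-the-envelope calculation is the bandwidth-delay product: to keep a link busy, roughly bandwidth × round-trip time worth of data must be in flight, so the socket buffer should be at least that large. A minimal sketch with illustrative figures (not measurements from this thread):

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return int(bandwidth_mbps * 1_000_000 / 8 * rtt_ms / 1000)

# A 75 Mbit/s link at 35 ms RTT needs roughly a 328 KB buffer:
print(bdp_bytes(75, 35))  # 328125

# Conversely, if at most one 8192-byte buffer's worth of data is in
# flight per round trip at 35 ms RTT, throughput caps out around
# 8192 / 0.035 s = ~234 KB/s, i.e. under 2 Mbit/s.
```

That rough cap is why an 8 KB default buffer can make a fast connection look throttled, and why 512 KB works well on a 75/75 line.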

 

This is funny. A few days ago I announced I wanted to help everyone with their strange speed issues...



@khrs thanks for sharing.

 

My connection is not good enough to show differences as drastic as in earlier posts, but I did find that adding the OVPN directives doubled my (short/Flash) speedtests on one port.

In Sept/Oct 2013 I found UDP 2018 performed best for me on Verizon (straight up, no tunnelling a tunnel).
UDP 53 never connects, and TCP 80 was the second best.

I too have experienced a considerable improvement when tunnelling over SSL.
I appended the OVPN directives but did not experiment with different values; I just went with 512 KB, and will post the results here in case future Verizon users are interested.

 

UDP 2018 showed a considerable improvement.
TCP 80 showed no difference.
OpenVPN over SSL became worse.

I connect to Europe but ran tests to Chicago as @hashswag did.
Started out with tests on a variety of ports to Alkaid and ran the speed test to GigeNET, IL after two Comcast, IL tests.
Then I appended the OVPN Directives, changing sndbuf and rcvbuf from the defaults and ran a few tests.
Commented the directives out and tested twice on 1 port and re-enabled the directives to confirm my UDP 2018 improved.

TCP 80, Alkaid: 19:53 UTC, 246/1000, 404 ms, 111 users, 9.35 Down / 10.35 Up, 35 ms to Comcast, IL
UDP 443, Alkaid: 19:59 UTC, 246/1000, 404 ms, 111 users, 3.53 Down / 13.05 Up, 35 ms to Comcast, IL
UDP 443, Alkaid: 20:00 UTC, 246/1000, 404 ms, 111 users, 5.32 Down / 13.85 Up, 35 ms to GigeNET, IL
UDP 2018, Alkaid: 20:04 UTC, 246/1000, 257 ms, 113 users, 9.55 Down / 13.88 Up, 33 ms to GigeNET, IL
SSL, Alkaid: 20:06 UTC, 254/1000, 363 ms, 111 users, 18.94 Down / 14.13 Up, 35 ms to GigeNET, IL
SSL, Alkaid, 20:14 UTC, 262/1000, 320 ms, 111 users, 17.42 Down / 14.16 Up, 35 ms to GigeNET, IL
SSL, Alkaid, 20:20 UTC, 270/1000, 272 ms, 115 users, 20.39 Down / 14.08 Up, 35 ms to GigeNET, IL
TCP 443, Alkaid: 20:26 UTC, 244/1000, 496 ms, 113 users, 9.72 Down / 9.96 Up, 33 ms to GigeNET, IL
TCP 443, Alkaid, 20:43 UTC, 268/1000, 393 ms, 112 users, 9.92 Down / 9.98 Up, 34 ms to GigeNET, IL
SSL, Alkaid: 20:47 UTC, 234/1000, 326 ms, 115 users, 17.26 Down / 14.07 Up, 34 ms to GigeNET, IL
------
Appended OVPN Directives: sndbuf=524288 rcvbuf=524288
. 2015.04.24 hh:54:12 - OpenVPN > Socket Buffers: R=[8192->524288] S=[8192->524288]
SSL, Alkaid: 20:58 UTC, 274/1000, 334 ms, 113 users, 11.72 Down / 12.88 Up, 35 ms to GigeNET, IL- Worse
SSL, Alkaid: 21:13 UTC, 226/1000, 136 ms, 113 users, 11.72 Down / 13.48 Up, 33 ms GigeNET, IL - Worse
UDP 2018, Alkaid: 21:17 UTC, 202/1000, 343 ms, 112 users, 23.13 Down / 14.04 Up, 35 ms GigeNET, IL - Better
TCP 80, Alkaid: 21:21 UTC, 224/1000, 442 ms, 110 users, 9.69 Down / 10.46 Up, 34 ms to GigeNET, IL - No Change
UDP 2018, Alkaid: 21:23 UTC, 226/1000, 111 users, 23.14 Down / 14.03 Up, 35 ms to GigeNET, IL - Better
-------
Commented out OVPN Directives
. 2015.04.24 hh:24:01 - OpenVPN > Socket Buffers: R=[8192->8192] S=[8192->8192]
UDP 2018, Alkaid: 21:26 UTC, 252/1000, 276 ms, 114 users, 9.20 Down / 13.91 Up, 34 ms to GigeNET, IL
UDP 2018, Alkaid: 21:30 UTC, 196/1000, 233 ms, 114 users, 8.44 Down / 13.88 Up, 35 ms to GigeNET, IL
------
Appended OVPN Directives: sndbuf=524288 rcvbuf=524288
. 2015.04.24 hh:31:02 - OpenVPN > Socket Buffers: R=[8192->524288] S=[8192->524288]
UDP 2018, Alkaid: 21:33 UTC, 206/1000, 531 ms, 114 users, 23.15 Down / 13.97 Up, 35 ms to GigeNET, IL - Better

 

 

EDIT: Should test at different times of the week but right now, the connection to Amsterdam is awesome also:

UDP 2018, Syrma: 00:44 UTC, 114/1000, 723 ms, 60 users, 22.13 Down / 10.08 Up, 95 ms to Networking4All B.V., NL

Before appending the OVPN Directives, 10 Down / 10 Up would be a good day.


Thanks! This is great. I always had problems getting good UDP speeds on Comcast, but SSL always improved them; I assumed it was traffic shaping. Big changes when increasing buffer sizes. I used speedof.me: Pollux speeds are to a test server in Atlanta, Nashira speeds to a test server in London.

 

50/12 Comcast connection, Windows 7, i7-3520M
 
Server/Protocol/Port/Buffer size/Download speed/Upload speed/latency (mean of 3 values, speeds in Mbps, latency in ms)
 
No VPN - 56.65/12.31/22 
 
Pollux UDP 80 (8192) - 8.02/11.13/37
Pollux UDP 80 (262144) - 22.05/11.08/35
Pollux UDP 80 (393216) - 49.72/11.05/36
Pollux UDP 80 (524288) - 48.93/11.2/36
Pollux UDP 80 (655360) - 50.76/10.97/34
Pollux SSL 443 (8192) - 51.2/6.7/36
Pollux SSL 443 (524288)- 50.22/7/35
 
Nashira UDP 80 (8192) - 2.12/2.08/172
Nashira UDP 80 (262144) - 12.1/2.55/163
Nashira UDP 80 (393216) - 12.6/2.4/166
Nashira UDP 80 (524288) - 15.02/2.36/175
Nashira UDP 80 (655360) - 17.18/2.57/164
Nashira UDP 80 (786432) - 16.07/2.32/170
Nashira SSL 443 (8192) - 11.9/1.22/166
Nashira SSL 443 (524288)- 14.52/1.18/171


Very glad to see that there is a solution to the Windows speed problem (compared to Linux).  Nice find!  Thanks!


Hi knighthawk,

"Those values only make sense in TCP mode."

You mean just the 0 setting, or in general? If in general, that would seem to go against most of the results in this thread, and against my own results tinkering with just 65536 and the improvements I've seen in UDP-based connections to Air.


Hi guys.

 

I can't say this made a huge difference in speed for me; it only increased it by around 100 KB per second. No major difference when changing ports on either UDP or TCP, with UDP producing slightly better results.

 

Any other suggestions would be great, as I'm only getting around 25 percent of my actual speed from Australia using a UK server for TV viewing.

 

Note: I'm not looking for the fastest server when running tests; I'm just trying to use UK servers for UK TV viewing.

