
reversevpn

Everything posted by reversevpn

  1. As long as the cipher you encrypted the database with is secure (i.e. AES-256, not Blowfish or 3DES), and you are sure your adversaries don't have the decryption key, there is no issue with sending your data under the Atlantic. The TCP protocol will take care of delivering your data reliably. However, if you have reason to suspect that your data could be tampered with or corrupted by an adversary in the middle of transit, I suggest you take a sha256sum hash of the data on the sending side, before sending, then another sha256sum on the receiving side, after the data has been received. If the two hashes are byte-for-byte equal, you can be certain that your data has not been tampered with or corrupted. Also, if these premises are satisfied (secure encryption scheme, secure encryption key, equal hashes), then using AirVPN adds an extra benefit only if you do not want anybody to know that you are sending data under the Atlantic. In that case, pick an AirVPN server in America that the sender connects to, and another AirVPN server in Europe that the receiver connects to; then all anybody tapping the wires under the Atlantic will see is one AirVPN server talking to another, and they will not be able to trace the activity back to you. If you do not care whether the data transfer can be traced back to you, then AirVPN does not help you at all.
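For example, assuming the encrypted database is in a file named backup.db.enc (a placeholder name), the integrity check is just:
     sha256sum backup.db.enc   # run on the sending side, before the transfer
     sha256sum backup.db.enc   # run on the receiving side, after the transfer
Compare the two printed hashes; they must match character for character.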
  2. That depends on who you are hiding the sensitive, private data from, and whether it was already encrypted before you sent it through AirVPN. If the party you are hiding it from has no power over the jurisdiction of the AirVPN server you connected to AND no power over the jurisdiction that you are sending your data to, NOR over any intermediate points between the AirVPN server and the final destination of your data, and the data was not encrypted to begin with, then yes, your security has improved a little, because your data is now being decrypted in a jurisdiction that your adversary has no power over. In this sense, AirVPN prevents adversaries from sniffing your data. However, in today's internet, it is bad practice to rely on your adversary having no power over any jurisdiction along the path, because it is hard (but not impossible) to know the full path that your data travels over, especially once it leaves the AirVPN server. It would be better if you had encrypted the data BEFORE sending it through AirVPN. If the data was already end-to-end encrypted so that only your intended recipient can decrypt your data, then AirVPN helps only in the sense that 3rd-party observers will not know that YOU are the one sending data to your intended recipient (provided that your recipient is not cooperating with your adversary and has not been compromised by your adversary). If your goal is to hide your data from everybody other than your intended recipient (this would be the norm), but you do not care that people see that you are sending something to your intended recipient (provided that they cannot understand what you are sending), then using AirVPN would not really improve your security. If your goal is to hide your data from everybody other than your intended recipient and you do not want them to know that you are even sending anything to your intended recipient (they will still see that you are sending something to AirVPN, though they cannot understand what you are sending), then yes, AirVPN does improve your security. Either way, it would be best to encrypt your data end-to-end before sending it. DO NOT rely on AirVPN to keep the data encrypted end-to-end, because the only way AirVPN can forward the data to your recipient is to remove the VPN layer of encryption and send the data on.
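As a rough illustration of encrypting end-to-end before the transfer (this assumes GnuPG is installed and that you already have your recipient's public key; the file name and the email address are placeholders):
     gpg --encrypt --recipient alice@example.com report.tar   # sender: produces report.tar.gpg
     gpg --decrypt report.tar.gpg > report.tar                # recipient: only their private key can do this
Only the holder of the matching private key can recover report.tar, no matter what AirVPN or anyone in between does with the ciphertext.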
  3. It appears you don't have wintun on your computer. Try installing wintun and repeating the connection procedure with wintun already installed.
  4. If the distro that your router runs supports iptables-persistent, then iptables-persistent is the canonical way of making iptables rules survive a reboot. As for /etc/rc.local, that is the generic way of running commands at startup if your distro doesn't use systemd. However, if jffs is idiomatic for Asus Merlin, then you've probably done the right thing. It depends on the idioms of your distro.
  5. Either put them in /etc/rc.local, or install the iptables-persistent package.
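On Debian and its derivatives, for instance, the whole procedure is roughly as follows (package and command names may differ on other distros):
     sudo apt install iptables-persistent   # also offers to save the current rules during installation
     # ... add or adjust your iptables rules ...
     sudo netfilter-persistent save         # writes them to /etc/iptables/rules.v4 and rules.v6
The saved rules are then restored automatically at every boot.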
  6. ad 1. In theory, yes. In practice, for some reason, this was not necessary. I was able to access the resources of the branch network even through a wireguard tunnel with an inner MTU of 1420 and an outer MTU of 1500, nested inside AirVPN's tunnel, which also had an inner MTU of 1420 and an outer MTU of 1500. I think wireguard automatically adjusts its UDP packet size to correct for MTU issues. However, trying to send TCP traffic through a wireguard tunnel without decreasing the TCP traffic's MTU at some point before it reaches the container connected to AirVPN's server via Wireguard is problematic. By problematic, I mean sites like microsoft.com and DuckDuckGo would fail to load. ad 2. There is another way: host the frontend on a commercial (or even free, as in GitHub Pages or Google Sites) provider that hosts only static sites (these providers ask you to bundle up your site in a folder and then simply serve what's in the folder), and keep your backend behind AirVPN. Of course, this means that you have to correctly configure CORS on the backend that you're keeping behind AirVPN. But still, this approach allows you to run powerful sites not limited by what your users' browsers can do, at a fraction of the cost of a VPS. There is a reason I called this AirVPN as App Backend instead of just AirVPN as app platform.
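One common way to deal with the TCP MTU problem described in ad 1 is MSS clamping on the box that forwards traffic into the tunnel, so TCP peers negotiate segments small enough to fit. A sketch (run wherever the forwarding happens; this is a generic rule, not something specific to AirVPN):
     iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu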
  7. Actually, it is possible to chain AirVPN connections so that you enter from one country and exit from another, if you are using Linux. Keep in mind though that doing so will consume 2 AirVPN sessions out of the 5 you are allowed instead of just 1. To do that, here are the steps you need to follow (a command sketch follows after the list):
     1. Set up a systemd-nspawn container that connects to the AirVPN server in Miami.
     2. Keep in mind the entry ipv4 of the AirVPN server you want to exit from. You can find this in the Endpoint= line in the wireguard conf you download.
     3. Set up IP masquerading in the container from step 1 using iptables -t nat -A POSTROUTING -o (name of AirVPN wireguard interface) -j MASQUERADE. (Note that -i is not accepted in the POSTROUTING chain; add -s with the host/container subnet if you want to restrict the rule.)
     4. Remember to also allow IP forwarding on both the container and the host machine.
     5. On the host machine, run ip rule add to (whatever entry address the Miami server has) lookup main, and ip route add (whatever the entry address of the AirVPN server you want to exit from is) via (whatever the address of the systemd-nspawn container is).
     6. Adjust the MTU of the inner VPN (the one you want to exit from) to 1340.
     7. Start the inner VPN.
     8. Run ip rule show. Make sure that whatever ip rule wireguard set up is evaluated after (i.e. has a higher priority number than) the rule you entered in step 5.
     If you need more help, feel free to ask.
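Putting steps 3, 5 and 8 together, the commands look roughly like this (a sketch only; 203.0.113.20 stands for the Miami server's entry ipv4, 198.51.100.10 for the entry ipv4 of the exit server, 192.168.77.2 for the container's address, and wg-miami for the wireguard interface inside the container, all of them placeholders):
     # inside the container: masquerade what the host sends through it
     iptables -t nat -A POSTROUTING -o wg-miami -j MASQUERADE
     # on the host: keep the Miami entry address on the normal routing table,
     # and reach the exit server's entry address through the container
     ip rule add to 203.0.113.20 lookup main
     ip route add 198.51.100.10 via 192.168.77.2
     # after starting the inner VPN, verify the rule ordering
     ip rule show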
  8. A High-Level Guide to Both Use Cases (ask if you need to go deeper down to implementation details):
     Site-to-Site VPN. Example scenario: you have a head office whose LAN is 192.168.100.0/24, and you have a branch office whose LAN is 192.168.200.0/24. You want seamless IP routing between both offices, so that any machine on one LAN can access any machine on the other LAN.
     1. Download an AirVPN Wireguard config file for a server physically close to the head office.
     2. Forward a random UDP port using AirVPN's port-forwarding menu, but remember what port it is. Let's call this port X.
     3. Create a systemd-nspawn container on a Linux box in the head office.
     4. Upload the wireguard config file from step 1 into the container from step 3.
     5. Using iptables in the container, forward port X as-is from the container to the machine that the container is running on (iptables -t nat -A PREROUTING ... -j DNAT).
     6. Also using iptables in the container, masquerade traffic coming from the host machine and exiting through the AirVPN wireguard interface, and vice-versa (iptables -t nat -A POSTROUTING (insert -s and -o directives here; -i is not accepted in the POSTROUTING chain) -j MASQUERADE).
     7. On the container, block all traffic that neither goes to/comes from the AirVPN server, nor is to/from port X, nor belongs to an already established connection.
     8. On the host machine of the container in the head office, set up a listening wireguard process (configuration in /etc/wireguard) that listens on port X, has address 192.168.y.z, where y and z are arbitrary numbers between 0 and 255 that do not correspond to an existing IP address in either the head office or the branch office, and that has a peer whose AllowedIPs are 192.168.y.w (y is the same y as you chose earlier, w is a number that makes 192.168.y.w as a whole not a currently used IP address) and 192.168.200.0/24. (A config sketch for this step and step 10 follows at the end of this post.)
     9. Appropriately set up routing rules on both the host machine that the container in the head office is running on, and the router in the head office, if the host machine of the container is not also the router.
     10. On a Linux machine in the branch office, set up a wireguard process that has IP address 192.168.y.w and has a peer whose Endpoint is a.b.c.d:X, whose AllowedIPs are 192.168.y.z and 192.168.100.0/24, whose PublicKey matches the private key of the wireguard process in step 8, and whose PersistentKeepalive is 10. a.b.c.d is the exit ipv4 address of the AirVPN server you picked in step 1. You can find this in the Sessions section of AirVPN's client area.
     11. Appropriately set up routing rules on both the box that wireguard is running on in the branch office, and the router of the branch office, if the machine running the wireguard process from step 10 is not also the router of the branch office.
     12. If you did everything right, the site-to-site VPN should now be fully operational.
     AirVPN as app backend:
     1. Follow steps 1-7 from the site-to-site VPN guide, except that the head office is now simply where you have your physical server, and you are now forwarding TCP instead of UDP on port X.
     2. Change the host->container MTU to 1420, but leave the container->host MTU at 1500.
     3. Install nginx on the host machine of the container from step 1.
     4. For each HTTP endpoint that your app uses, add a location /endpoint {} block in /etc/nginx/sites-enabled with a single proxy_pass directive pointing to whatever process your backend is. For example, if you have a gunicorn server listening at 127.0.0.1:5000, then you should write proxy_pass http://127.0.0.1:5000/{name_of_endpoint}; in each endpoint's location block.
     5. Set up SSL on the nginx server, so that traffic to and from your users is HTTPS. It's OK that the traffic between nginx and your backend is unencrypted HTTP, provided that both are running on the same machine and that you configured the backend to listen ONLY on the localhost interface. This completes the backend of your app.
     6. In the frontend of your app (could be a PWA, desktop app, Android app, or an iOS app; the point is that this is the part of your app that your users interact with), direct all HTTP requests to a.b.c.d:X, where a.b.c.d is the exit address of the AirVPN server you chose, and X is the random port you chose.
     7. Test your app to verify that it is working as intended.
     Interesting note: Provided that you ship your app as a native app (desktop app, Android app, or iOS app) instead of a PWA, most of your users will never notice that you are using port X. The more technically inclined among them may find out using tcpdump or wireshark, but the vast majority will behave as though you hosted your backend on AWS or similar instead of hosting it on a machine sitting behind AirVPN. Moreover, if you buy a 3-year plan from AirVPN during Halloween, you have probably by now both reduced your recurring cost to 20% of what it would have been had you gone with a modest VPS plan AND you now have unlimited egress/ingress traffic thanks to AirVPN's unlimited bandwidth policy. In case you do not want a single point of failure but want several copies of the backend running in different places, you can have up to 5 backends (1 for each session AirVPN gives you) by repeating steps 1 to 4 for each copy of your backend. Just configure your frontend to randomly choose which backend to connect to, then choose a different one if the connection fails. Note that this method is agnostic of what your application actually does. It could be a scheduling app, a turn-based game, an online store, or whatever you can imagine, except perhaps a real-time game where even single frames matter. The only drawback is the increased latency from including the AirVPN server in the path between your users and your backend, but if your app is not latency-sensitive, or if your server is extremely physically close to one of AirVPN's servers (think same city block), latency will not be a problem.
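To make steps 8 and 10 of the site-to-site guide more concrete, here is a sketch of the two inner wireguard configs (keys omitted; y=77, z=1, w=2 and X=47111 are arbitrary example values, and a.b.c.d stands for the AirVPN exit ipv4 as described above):
     # head office, /etc/wireguard/site.conf
     [Interface]
     Address = 192.168.77.1/24
     ListenPort = 47111
     PrivateKey = <head office private key>
     [Peer]
     PublicKey = <branch office public key>
     AllowedIPs = 192.168.77.2/32, 192.168.200.0/24

     # branch office, /etc/wireguard/site.conf
     [Interface]
     Address = 192.168.77.2/24
     PrivateKey = <branch office private key>
     [Peer]
     PublicKey = <head office public key>
     Endpoint = a.b.c.d:47111
     AllowedIPs = 192.168.77.1/32, 192.168.100.0/24
     PersistentKeepalive = 10
For the app-backend variant, a single nginx location block from step 4 would look something like this (endpoint name and port are examples):
     location /login {
         proxy_pass http://127.0.0.1:5000/login;
     }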
  9. Which part, the site-to-site VPN or AirVPN as app backend?
  10. AirVPN's static ips and port forwarding can be leveraged to create site-to-site VPNs at a fraction of the cost of obtaining public ips from my ISP, which is great for me as a network admin. For those who are so inclined, you can combine nginx, flask, airvpn, and your choice of hardware to replace even VPS services as a backend for your apps. If you have high-speed internet, you will never find a VPS solution more cost-effective than AirVPN + your own hardware. As an added bonus, your services become somewhat shielded from DDoS attacks because you don't have to reveal your machine's physical IP, and you can use the 5 allowed sessions to perform multihoming and provide redundancy.
  11. Well, if you are connected to AirVPN via Wireguard, you can set your DNS server to 10.128.0.1. This holds regardless of which AirVPN server you are connected to, as long as you are using Wireguard. By running AirVPN on your wifi router or between your wifi router and your ISP, you can directly replace Cloudflare with AirVPN. I use a script like this to automatically switch between AirVPN servers whenever any of them go down. I hereby release this bash script for all to use under the 0BSD license. You can add as many AirVPN servers as you like to maximize reliability, but I find that 5 are enough for me. This script simply ping-tests the currently connected AirVPN server. The moment the current AirVPN server fails to respond within 0.5 seconds, the router switches to a different AirVPN server. This will keep your DNS running as long as at least 1 of the AirVPN servers you put into rotation is still working.
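The original script attachment is not reproduced here, but a minimal sketch of the idea looks like this (config names, the ping target, and the timing are assumptions you would adapt to your own setup):
     #!/bin/sh
     # Rotate through several AirVPN wireguard configs stored in /etc/wireguard.
     # Stay on a server while it keeps answering pings; move on as soon as it stops.
     CONFS="airvpn1 airvpn2 airvpn3 airvpn4 airvpn5"
     GATEWAY=10.128.0.1
     while true; do
         for conf in $CONFS; do
             wg-quick up "$conf"
             while ping -c 1 -W 1 "$GATEWAY" > /dev/null 2>&1; do
                 sleep 5
             done
             wg-quick down "$conf"
         done
     done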
  12. If you are on a Linux system, add the line PostUp = ip rule add to $PIHOLE_ADDRESS lookup main in the [Interface] section of the Wireguard conf, in addition to changing the DNS like you were already doing. Substitute $PIHOLE_ADDRESS with whatever the real IP address of your PIHOLE is. For example, if your PIHOLE is at 192.168.22.5, then add PostUp = ip rule add to 192.168.22.5 lookup main
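In context, the [Interface] section of the conf would then contain something like this (192.168.22.5 being the example Pi-hole address from above):
     [Interface]
     # Address, PrivateKey etc. as generated by AirVPN
     DNS = 192.168.22.5
     PostUp = ip rule add to 192.168.22.5 lookup main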
  13. @tranquivox69 , from my experiments with Eddie, I can confirm that indeed, Eddie does not honor the route-nopull directive. In my earlier experiments, I was using the standard openvpn binary (the one you can most likely download from your distro's repository), and wrongly assumed that just because the normal openvpn binary honors the route-nopull directive, so would Eddie. For that, I apologize.
  14. I wonder though, why do you hide the VPN's routing table entries instead of just not pulling them in at all? If they are hidden anyway, what good does it do for them to enter the routing table in the first place?
  15. systemd-nspawn is deeply tied to systemd, which is a major part of most Linux distributions today. I seriously doubt it will be abandoned any time soon. What I am suggesting is not the download and use of a pre-built container, but rather to set up the container yourself as though it were a server distinct from the hosting server, then to install applications on it as though it were another server.
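For what it's worth, on a Debian-based host the from-scratch setup is roughly this (the container name and the suite are just examples):
     debootstrap stable /var/lib/machines/mycontainer     # build a minimal root filesystem
     systemd-nspawn -UD /var/lib/machines/mycontainer     # enter it once and install what you need
     systemctl enable --now systemd-nspawn@mycontainer    # afterwards, boot it as its own little server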
  16. If you want to continue using the web API of transmission, simply configure the host OS to forward packets from the remote device to the transmission remote control port, while masquerading such requests. This will cause transmission to answer back to the host OS's internal IP address, and the host OS will ultimately answer back to the web API's consumer.
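A sketch of that forwarding on the host OS (assuming transmission's web interface listens on its default port 9091 inside a container at 192.168.83.2; both values are placeholders):
     # send requests arriving for port 9091 on to the container running transmission
     iptables -t nat -A PREROUTING -p tcp --dport 9091 -j DNAT --to-destination 192.168.83.2:9091
     # masquerade them so transmission answers back to the host, which answers back to the remote device
     iptables -t nat -A POSTROUTING -p tcp -d 192.168.83.2 --dport 9091 -j MASQUERADE
     # and allow the host to forward packets at all
     sysctl -w net.ipv4.ip_forward=1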
  17. In that case, transmission should be confined to a systemd-nspawn container that has only a local ipv4 and the ipv4 and ipv6 addresses assigned by AirVPN. This container would not have the ipv6 of the host, having its own network namespace. You can continue to use ipv6 as you wish in the host OS while leaving transmission with only a harmless internal IP address and IP addresses from AirVPN.
  18. Well, my recommended technique is to use systemd-nspawn containers, IP masquerading, policy-based routing, and user-based packet mangling to achieve split-tunnel behavior. As for ipv6, you can block ipv6 traffic for transmission entirely by using the -m owner --uid-owner option in the ip6tables filter table, OUTPUT chain, provided that you run transmission as another Linux user. To keep local routing working, simply add a route for the local network to the user-specific routing table that you make for the transmission user account. See my first answer to this problem for how to deploy such a setup. The advantage of that method is that there is no need for the application to cooperate with IP binding; you enforce split tunneling through the OS. Actually, come to think of it, if you suspect a program of leaking your ipv6 address, I suggest you just run it inside the systemd-nspawn container configured with no ipv6 address, instead of running it in the host OS and routing traffic through the container. That would completely negate the need for advanced tricks like packet mangling.
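For example, if transmission runs as a dedicated user called transmission (a placeholder), the ipv6 block described above is a single rule:
     # drop every ipv6 packet generated by the transmission user
     ip6tables -A OUTPUT -m owner --uid-owner transmission -j DROP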
  19. Indeed, the use of ip rule manipulates the OS rather than Eddie. That sort of manipulation undoes the default routing policy database entries that wg-quick would normally install upon gateway redirection. The advantage of this method is that you would not need to enter wireguard parameters manually. Perhaps you would want to use the wg tool manually in order to avoid having to undo anything at all, but the drawback is that you would have to enter parameters manually. There are steps for using wg rather than wg-quick on wireguard.com. Either way, these manipulations are all performed outside Eddie. As far as I can tell, Eddie does not allow Wireguard customization. If you don't want to use tcpdump for packet inspection during your experiments, you can use Wireshark instead. It's a GUI tool for packet inspection of individual interfaces, and you can use it because your system is not headless. Just select the eddie interface from Wireshark's list of interfaces, after you have activated Eddie and instructed it to name its interface eddie.
  20. That directive prevents AirVPN from making itself the default gateway of your system. This will indeed cause your system to continue sending most traffic outside the VPN tunnel. As for only applications bound to that interface using the VPN, that does appear to be the case in my experiments involving Linux ISOs, but I suggest you conduct your own experiments as well to verify. Torrent testing tools like ipleak.net do not seem to be able to detect a torrent client configured this way, but such a client does actually succeed in downloading Linux ISOs, a tcpdump of the VPN interface does reveal packets flowing through the VPN, and an nload of the VPN interface shows a significant amount of traffic flowing through it when the torrent client is active and bound to the eddie interface. Additionally, to make the name of the interface that Eddie creates predictable, I suggest you replace the line that says "dev tun" with two lines, "dev eddie" and "dev-type tun". This will cause Eddie's VPN interface to always be named "eddie" rather than tun0, tun1, etc. If you want to achieve the same effect using Wireguard, you will have to manipulate the routing policy database after allowing 0.0.0.0/0, ::/0 for Air's server in the Wireguard conf itself. This involves multiple ip rule commands executed after the wireguard interface is raised, and making a mistake may cause you to lose remote access to your server, which would force you to physically go to your server to correct the mistake. I do not do this because I prefer the container method, since it does not require me to override Wireguard's default behavior.
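If you do want to attempt the Wireguard variant anyway, one possible shape of it is the following (only a sketch, not something I am recommending over the container method; the table number 200 and the address 10.128.47.5, which stands for whatever address AirVPN assigned in your conf, are placeholders):
     [Interface]
     # Address, PrivateKey, DNS as generated by AirVPN
     Table = off                                          # stop wg-quick from replacing the default gateway
     PostUp = ip route add default dev %i table 200       # give the VPN its own routing table
     PostUp = ip rule add from 10.128.47.5 lookup 200     # only traffic sourced from the VPN address uses it
     PostDown = ip rule del from 10.128.47.5 lookup 200
Applications that bind to the VPN address then use the tunnel, while everything else keeps using your normal default gateway.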
  21. It is, but make sure to set mssfix 1420 in Eddie's OpenVPN Directives to ensure that tcp packets will fit inside the inner tunnel that Eddie makes. The outer tunnel is the Wireguard tunnel.
  22. As for the converted bash script, I don't think it will work as-is because the On-Link keyword that it expects is not present in Linux's route -n. That being said, if your motivation behind using the original script is just to prevent Eddie from making AirVPN your system's default gateway, you can accomplish that effect by adding route-nopull to Eddie's Settings>OpenVPN directives.
  23. https://github.com/Intika-Linux-Firewall/App-Route-Jail Perhaps this may be of interest to you. This essentially does the same thing as ForceBindIP, with a few extra steps. I do not vouch for the security of this code, but this does appear to be very close to your existing solution.
  24. I suppose an approach like this might work if you only ran 1 app at a time, but if you want to run several apps concurrently, and the apps you intend to run concurrently are a mix of apps you want to run through AirVPN and apps that you don't want to run through AirVPN, the cheapest option is to use systemd-nspawn, a container that is well-integrated into Linux's systemd. Additionally, it appears that script is written in batch script, so if you want to use that same approach in Linux, you will have to rewrite it in bash. Alternatively, if you can spare an extra physical linux machine, like a mini PC, a raspberry pi, an old laptop or old desktop, etc., that physical machine can be used instead of a systemd-nspawn container.
  25. To split traffic on a per-app basis on your Linux server (assuming you intend to use ipv4), do the following:
     1. Install wireguard and systemd-nspawn on your server.
     2. Create a new Linux user. The name is up to you, but for the purposes of my answer, let's call this user airapp.
     3. For each app that you want to route through AirVPN, start that app under the user airapp. This can be accomplished by using sudo -u airapp [name of app here], or by setting User=airapp in the [Service] section of the app's service file.
     4. Edit /etc/iproute2/rt_tables and add a number for a new table we will call vpntable. The name is up to you, but for my answer, we will call this table vpntable.
     5. Also in /etc/iproute2/rt_tables, add a number for a new table we will call killswitch. Again, the name is up to you, but for my answer, we will call this table killswitch.
     6. Create a systemd-nspawn container, which we will call airouter in my answer (you can change its name; just be consistent with the name you've chosen if you do). Make sure this container starts up along with your server by using systemctl enable systemd-nspawn@airouter && systemctl start systemd-nspawn@airouter. Note that these 2 commands work only after you create the airouter container in /var/lib/machines. On a Debian or Debian-derivative system, you do this using debootstrap (with appropriate arguments). After creating it, you have to use systemd-nspawn -UD airouter within /var/lib/machines to download and install Linux packages on it, such as systemd-networkd. While in the shell induced by calling systemd-nspawn -UD airouter, the airouter container will temporarily share the networks and interfaces of your host machine, but if you exit this shell and ssh or machinectl login your way into airouter after executing systemctl start systemd-nspawn@airouter, it will have a network stack distinct from your main server, which is the key to using it as a router for the apps you wish to split-tunnel.
     7. Configure systemd-networkd to put both your main server and airouter on the same RFC 1918 network, both with static addresses, but distinct from your server's main network. For example, if your server's physical ethernet card is on 192.168.1.0/24, I suggest you put the ve-airouter interface of your host server and the host0 interface of the airouter container on 192.168.34.0/24. Let us for now pretend that your host server's ve-airouter interface has address 192.168.34.1/24, and that airouter's host0 interface has address 192.168.34.2/24. For now, the default gateway of airouter is the IP address of your server, 192.168.34.1. Also, make it so that airouter sees the link as having an MTU of 1500, but your main server sees the link as having an MTU of 1420. This is to ensure that TCP apps on your server answering back to the airouter container will size their packets appropriately for the wireguard conf you are about to create in airouter. (A sketch of these two .network files follows at the end of this post.)
     8. Enable IP masquerading and IP forwarding between the container and your main system, making sure to masquerade traffic heading out of airouter and into AirVPN, traffic heading out of airouter and into your host server, traffic heading out of your host server and into airouter, and traffic heading out of your host server and onto your server's default gateway.
     9. Install wireguard-tools and resolvconf in the airouter container. You may also wish to install openssh-server on airouter so you can manage airouter like any other server.
     10. Generate a wireguard config from AirVPN's client area, and put it in the airouter container's /etc/wireguard directory. Let us assume this file is called airvpn.conf, but again you can rename it freely. I suggest you rename it so that the new name can be used as an interface name, because the default name may have too many characters for the Linux interface system.
     11. As root within that container, or using the sudo command, issue systemctl start wg-quick@airvpn && systemctl enable wg-quick@airvpn (must be consistent with the prefix before .conf that you chose in step 10). This will make it so that the container connects to AirVPN automatically on startup.
     12. Establish appropriate firewall rules within the container to block any traffic that you do not need to authorize.
     13. On the main server, discover the uid of the newly created airapp user by consulting /etc/passwd or executing the id command as airapp. Let us pretend for now that airapp's uid is 1001.
     14. On the main server and as root, enter iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 107. (107 is an arbitrary number; you can choose a different one if you like, but make sure to use that same number in the next steps.) This marks packets from the airapp user so that they can be routed through airouter later on.
     15. As root, enter ip rule add fwmark 107 lookup vpntable prio 100. This will make all traffic from the apps you associated with the airapp user look up vpntable instead of your system's main routing table. Add this command to a startup script for your server. 100 is arbitrary, but make sure to pick a higher number in the next step.
     16. As root, enter ip rule add fwmark 107 lookup killswitch prio 101. This will make all traffic from the apps you associated with the airapp user look up the killswitch table if vpntable is empty or has no applicable routes. Add this command to a startup script for your server.
     17. As root, also enter ip route add default via 192.168.34.2 dev ve-airouter table vpntable. This route makes the default gateway for the airapp user the airouter container instead of your server's normal default gateway. Also add this command to a startup script for your server, preferably in a while loop, to ensure that if the container reboots or starts later than the startup script, the route still gets added.
     17b. In case you want these apps to be accessible from your LAN also, issue ip route add 192.168.1.0/24 dev eth0 table vpntable. This assumes that your server is on the LAN with subnet 192.168.1.0/24, and that the interface your server is using to connect to the LAN is eth0. Replace eth0 and 192.168.1.0/24 with your actual interface name and subnet. If your server has no connected LAN to speak of, ignore this step.
     18. As root, also issue ip route add default via 127.0.0.1 dev lo table killswitch. This will make it so that if the apps cannot reach the internet through the airouter container (and by extension AirVPN), they cannot reach the internet at all. Also add this command to a startup script for your server.
     19. If the apps accept incoming connections, test them using the Port Forward feature in AirVPN's client area (after using iptables -t nat -I PREROUTING -p udp --dport X -j DNAT --to 192.168.34.1:X inside the airouter container, where X is the port forward you got from AirVPN).
     20. If the apps make outbound connections, ssh into airouter and install tcpdump, then execute tcpdump -i airvpn as you use the app, to verify that traffic from the app is indeed flowing into airvpn. Whenever the app tries to access the network, packets should start appearing on tcpdump's standard output, and about a minute after you close the app (assuming no other apps you configured to route through airvpn are open either), tcpdump should stop printing new packets. I use -i airvpn because I am assuming you renamed the downloaded Wireguard profile from step 10 to airvpn.conf. If you instead renamed it remote.conf, you should execute tcpdump -i remote: the name of the interface you are dumping is the name you gave the wireguard file.
     I strongly suggest using iptables-persistent both on your main server and in the airouter container in order to ensure that all firewall rules, whether in the container or outside it, survive a reboot. If you need more involved help, feel free to message me directly using AirVPN's message feature.
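For step 7, the systemd-networkd side of things can be expressed with two small .network files, roughly like this (the file names are arbitrary; addresses and MTUs follow the example values used above):
     # on the host: /etc/systemd/network/50-ve-airouter.network
     [Match]
     Name=ve-airouter
     [Link]
     MTUBytes=1420
     [Network]
     Address=192.168.34.1/24

     # inside airouter: /etc/systemd/network/50-host0.network
     [Match]
     Name=host0
     [Link]
     MTUBytes=1500
     [Network]
     Address=192.168.34.2/24
     Gateway=192.168.34.1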