
reversevpn

Members
  • Content Count: 39
  • Joined: ...
  • Last visited: ...
  • Days Won: 5

Everything posted by reversevpn

  1. If you want to continue using the web API of transmission, simply configure the host OS to forward packets from the remote device to the transmission remote control port, while masquerading such requests. This will cause transmission to answer back to the host OS's internal IP address, and the host OS will ultimately answer back to the web API's consumer.
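     For example, a minimal sketch on the host OS, assuming the transmission web API listens on its default port 9091 at 192.168.34.2 inside the container (both values are assumptions; adjust them to your setup):
        # Forward the web API port from the host to transmission and masquerade,
        # so transmission answers back to the host's internal address:
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING -p tcp --dport 9091 -j DNAT --to-destination 192.168.34.2:9091
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.34.2 --dport 9091 -j MASQUERADE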
  2. In that case, transmission should be confined to a systemd-nspawn container that has only a local ipv4 and the ipv4 and ipv6 addresses assigned by AirVPN. This container would not have the ipv6 of the host, having its own network namespace. You can continue to use ipv6 as you wish in the host OS while leaving transmission with only a harmless internal IP address and IP addresses from AirVPN.
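     A minimal sketch of such a container definition, assuming the container is named transmission (the file name and values are illustrative; see systemd.nspawn(5) for the options your distribution supports):
        # /etc/systemd/nspawn/transmission.nspawn
        [Exec]
        Boot=yes

        [Network]
        # Give the container its own network namespace with a host-side veth pair;
        # the host's ipv6 address is never visible inside the container.
        VirtualEthernet=yes
     Inside the container you would then assign only the internal IPv4 address and bring up the AirVPN-assigned addresses via Wireguard or OpenVPN.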
  3. Well, my recommended technique is to use systemd-nspawn containers, IP masquerading, policy-based routing, and user-based packet mangling to achieve split-tunnel behavior. As for ipv6, you can block ipv6 traffic for transmission entirely by using the -m owner --uid-owner match in the ip6tables filter table, OUTPUT chain, provided that you run transmission as a separate Linux user. To keep local routing working, simply add a route for the local network to the user-specific routing table that you create for the transmission user account. See my first answer to this problem for how to deploy such a setup. The advantage of that method is that there is no need for the application to cooperate with IP binding; you enforce split tunneling through the OS. Actually, come to think of it, if you suspect a program is leaking your ipv6 address, I suggest you just run it inside a systemd-nspawn container configured with no ipv6 address instead of running it in the host OS and routing traffic through the container. That would completely negate the need for advanced tricks like packet mangling.
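     As a sketch, assuming transmission runs as a Linux user named transmission, with a LAN of 192.168.1.0/24 on eth0 and a user-specific table named vpntable (all assumptions; adjust to your setup):
        # Drop every ipv6 packet generated by the transmission user:
        ip6tables -A OUTPUT -m owner --uid-owner transmission -j DROP
        # Keep ipv4 local routing working in the user-specific routing table:
        ip route add 192.168.1.0/24 dev eth0 table vpntable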
  4. Indeed, the use of ip rule manipulates the OS rather than Eddie. That sort of manipulation undoes the routing policy database changes that wg-quick would normally make upon gateway redirection. The advantage of this method is that you do not need to enter Wireguard parameters manually. Perhaps you would want to use the wg tool manually in order to avoid having to undo anything at all, but the drawback is that you would then have to enter the parameters manually; there are steps for using wg rather than wg-quick on wireguard.com. Either way, these manipulations are all performed outside Eddie. As far as I can tell, Eddie does not allow Wireguard customization. If you don't want to use tcpdump for packet inspection during your experiments, you can use Wireshark instead. It's a GUI tool for packet inspection of individual interfaces, and you can use it because your system is not headless. Just select the eddie interface from Wireshark's list of interfaces, after you have activated Eddie and instructed it to name its interface eddie.
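     A rough sketch of the manual wg approach described on wireguard.com (interface name, address, and destination are placeholders; note that wg setconf only understands wg's own keys, so strip the Address/DNS lines that wg-quick uses from the generated conf first):
        ip link add dev wg0 type wireguard
        wg setconf wg0 /etc/wireguard/airvpn.conf    # conf must contain only keys wg understands
        ip address add 10.128.0.2/32 dev wg0         # placeholder; use the Address from the generated conf
        ip link set up dev wg0
        # Add only the routes you actually want, instead of redirecting the default gateway:
        ip route add 192.0.2.0/24 dev wg0            # example destination to send through the tunnel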
  5. That directive prevents AirVPN from making itself the default gateway of your system, so your system will indeed continue sending most traffic outside the VPN tunnel. As for only applications bound to that interface using the VPN, that does appear to be the case in my experiments involving Linux ISOs, but I suggest you conduct your own experiments as well to verify. Torrent testing tools like ipleak.net do not seem to be able to detect a torrent client configured this way, yet such a client does succeed in downloading Linux ISOs, a tcpdump of the VPN interface reveals packets flowing through the VPN, and nload on the VPN interface shows a significant amount of traffic when the torrent client is active and bound to the eddie interface. Additionally, to make the name of the interface that Eddie creates predictable, I suggest you replace the line that says dev tun with two lines, dev eddie and dev-type tun, which will cause Eddie's VPN interface to always be named "eddie" rather than tun0, tun1, etc. (see the example below). If you want to achieve the same effect using Wireguard, you will have to manipulate the routing policy database after allowing 0.0.0.0/0, ::/0 for Air's server in the Wireguard conf itself. This involves multiple ip rule commands executed after the Wireguard interface is raised, and making a mistake may cause you to lose remote access to your server, which would force you to physically go to your server to correct the mistake. I do not do this myself; I prefer the container method, since it does not require overriding Wireguard's default behavior.
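     For example, the relevant custom directives in Eddie's Settings > OpenVPN directives would look something like this (route-nopull only if you also want to keep your system's default gateway untouched, as discussed in the posts below):
        # Pin the interface name so it is always "eddie" instead of tun0, tun1, ...:
        dev eddie
        dev-type tun
        # Optional: stop OpenVPN from pulling routes, so AirVPN never becomes the default gateway:
        route-nopull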
  6. It is, but make sure to set mssfix 1420 in Eddie's OpenVPN directives to ensure that TCP packets will fit inside the inner tunnel that Eddie makes; the outer tunnel is the Wireguard tunnel.
  7. As for the converted bash script, I don't think it will work as-is, because the On-Link keyword that it expects is not present in the output of Linux's route -n. That being said, if your motivation behind using the original script is just to prevent Eddie from making AirVPN your system's default gateway, you can accomplish that by adding route-nopull to Eddie's Settings > OpenVPN directives.
  8. https://github.com/Intika-Linux-Firewall/App-Route-Jail Perhaps this may be of interest to you. This essentially does the same thing as ForceBindIP, with a few extra steps. I do not vouch for the security of this code, but this does appear to be very close to your existing solution.
  9. I suppose an approach like this might work if you only ran one app at a time, but if you want to run several apps concurrently, and the apps you intend to run concurrently are a mix of apps you want to route through AirVPN and apps you don't, the cheapest option is to use systemd-nspawn, a container manager that is well integrated with Linux's systemd. Additionally, it appears that script is written as a Windows batch script, so if you want to use the same approach on Linux, you will have to rewrite it in bash. Alternatively, if you can spare an extra physical Linux machine, like a mini PC, a Raspberry Pi, an old laptop or old desktop, etc., that physical machine can be used instead of a systemd-nspawn container.
  10. To split traffic on a per-app basis on your Linux server (assuming you intend to use ipv4), do the following:
      1. Install wireguard and systemd-nspawn on your server.
      2. Create a new Linux user. The name is up to you, but for the purposes of my answer, let's call this user airapp.
      3. For each app that you want to route through AirVPN, start that app as the user airapp. This can be accomplished with sudo -u airapp [name of app here], or by setting User=airapp in the [Service] section of the app's service file.
      4. Edit /etc/iproute2/rt_tables and add a number for a new table. The name is up to you, but for my answer, we will call this table vpntable.
      5. Also in /etc/iproute2/rt_tables, add a number for a second new table, which for my answer we will call killswitch.
      6. Create a systemd-nspawn container, which we will call airouter in my answer (you can change its name; just be consistent with the name you've chosen if you do). Make sure this container starts up along with your server by using systemctl enable systemd-nspawn@airouter && systemctl start systemd-nspawn@airouter. Note that these two commands work only after you create the airouter container in /var/lib/machines. On Debian or a Debian derivative, you do this with debootstrap (with appropriate arguments). After creating the container, use systemd-nspawn -UD airouter within /var/lib/machines to download and install Linux packages on it, such as systemd-networkd. While in the shell opened by systemd-nspawn -UD airouter, the airouter container temporarily shares the networks and interfaces of your host machine, but if you exit this shell and ssh or machinectl login your way into airouter after executing systemctl start systemd-nspawn@airouter, it will have a network stack separate from your main server, which is the key to using it as a router for the apps you wish to split-tunnel.
      7. Configure systemd-networkd to put both your main server and airouter on the same RFC 1918 network, both with static addresses, but distinct from your server's main network. For example, if your server's physical ethernet card is on 192.168.1.0/24, I suggest you put the ve-airouter interface of your host server and the host0 interface of the airouter container on 192.168.34.0/24. Let us for now pretend that your host server's ve-airouter interface has address 192.168.34.1/24, and that airouter's host0 interface has address 192.168.34.2/24. For now, the default gateway of airouter is the IP address of your server, 192.168.34.1. Also, make it so that airouter sees the link as having an MTU of 1500, but your main server sees the link as having an MTU of 1420. This ensures that TCP apps on your server answering back to the airouter container will size their packets appropriately for the Wireguard conf you are about to create in airouter.
      8. Enable IP masquerading and IP forwarding between the container and your main system, making sure to masquerade traffic heading out of airouter and into AirVPN, traffic heading out of airouter and into your host server, traffic heading out of your host server and into airouter, and traffic heading out of your host server and onto your server's default gateway.
      9. Install wireguard-tools and resolvconf in the airouter container. You may also wish to install openssh-server on airouter so you can manage airouter like any other server.
      10. Generate a Wireguard config from AirVPN's client area, and put it in the airouter container's /etc/wireguard directory. Let us assume this file is called airvpn.conf, but again you can rename it freely. I suggest you rename it so that the new name can be used as an interface name, because the default name may have too many characters for the Linux interface system.
      11. As root within that container, or using sudo, issue systemctl start wg-quick@airvpn && systemctl enable wg-quick@airvpn (stay consistent with the prefix before .conf that you chose in step 10). This makes the container connect to AirVPN automatically on startup.
      12. Establish appropriate firewall rules within the container to block any traffic that you do not need to authorize.
      13. On the main server, discover the uid of the newly created airapp user by consulting /etc/passwd or executing the id command as airapp. Let us pretend for now that airapp's uid is 1001.
      14. On the main server and as root, enter iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 107. (107 is an arbitrary number; you can choose a different one if you like, but make sure to use that same number in the next steps.) This marks packets from the airapp user so that they can be routed through airouter later on.
      15. As root, enter ip rule add fwmark 107 lookup vpntable prio 100. This makes all traffic from the apps you associated with the airapp user look up vpntable instead of your system's main routing table. Add this command to a startup script for your server. 100 is arbitrary, but make sure to pick a higher number in the next step.
      16. As root, enter ip rule add fwmark 107 lookup killswitch prio 101. This makes all traffic from the apps you associated with the airapp user look up the killswitch table if vpntable is empty or has no applicable routes. Add this command to a startup script for your server.
      17. As root, also enter ip route add default via 192.168.34.2 dev ve-airouter table vpntable. This route makes the airouter container the default gateway for the airapp user instead of your server's normal default gateway. Also add this command to a startup script for your server, preferably in a while loop, to ensure that the route is added even if the container reboots or starts later than the startup script.
      18. In case you want these apps to be reachable from your LAN as well, issue ip route add 192.168.1.0/24 dev eth0 table vpntable. This assumes that your server is on the LAN with subnet 192.168.1.0/24, and that the interface your server uses to connect to the LAN is eth0. Replace eth0 and 192.168.1.0/24 with your actual interface name and subnet. If your server has no connected LAN to speak of, ignore this step.
      19. As root, also issue ip route add default via 127.0.0.1 dev lo table killswitch. This makes it so that if the apps cannot reach the internet through the airouter container (and by extension AirVPN), they cannot reach the internet at all. Also add this command to a startup script for your server, preferably in a while loop, as in step 17.
      20. If the apps accept incoming connections, test them using the Port Forward feature in AirVPN's client area (after issuing iptables -t nat -I PREROUTING -p udp --dport X -j DNAT --to 192.168.34.1:X inside the airouter container, where X is the port forward you got from AirVPN). If the apps make outbound connections, ssh into airouter, install tcpdump, and execute tcpdump -i airvpn while you use the app to verify that its traffic is indeed flowing into airvpn. Whenever the app tries to access the network, packets should start appearing on tcpdump's standard output, and about a minute after you close the app (assuming no other apps you configured to route through airvpn are open either), tcpdump should stop printing new packets. I use -i airvpn because I am assuming you renamed the downloaded Wireguard profile from step 10 to airvpn.conf. If you instead renamed it to remote.conf, you should execute tcpdump -i remote: the name of the interface you are dumping is the name you gave the Wireguard file.
      I strongly suggest using iptables-persistent both on your main server and in the airouter container to ensure that all firewall rules, whether in the container or outside it, survive a reboot. If you need more involved help, feel free to message me directly using AirVPN's message feature. A condensed sketch of the host-side commands from steps 14 through 19 follows below.
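      A condensed sketch of those host-side pieces, using the example names above (uid 1001, mark 107, tables vpntable and killswitch, container at 192.168.34.2, LAN 192.168.1.0/24 on eth0); treat it as an illustration to adapt, not a drop-in script:
        # /etc/iproute2/rt_tables needs two extra entries, e.g.:
        #   207 vpntable
        #   208 killswitch

        # Mark every packet generated by the airapp user (uid 1001 assumed):
        iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 107

        # Marked traffic first consults vpntable, then falls back to killswitch:
        ip rule add fwmark 107 lookup vpntable prio 100
        ip rule add fwmark 107 lookup killswitch prio 101

        # Default route for marked traffic points at the airouter container;
        # retry until the container's veth link exists, as suggested in step 17
        # (replace instead of add keeps the loop idempotent if the route already exists):
        until ip route replace default via 192.168.34.2 dev ve-airouter table vpntable 2>/dev/null; do
            sleep 5
        done

        # Optional LAN reachability (step 18), and the kill switch fallback (step 19):
        ip route add 192.168.1.0/24 dev eth0 table vpntable
        ip route add default via 127.0.0.1 dev lo table killswitch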
  11. Since your examples suggest that the services you plan to deploy require SSL certificates, the first thing you should do is buy a domain name from a domain registrar; you cannot use AirVPN's DDNS to apply for SSL certificates, since you do not own airdns.org. Assuming that you have bought a domain name, these are the next steps: 1. Go to the AirVPN client area. 2. Choose Sessions. 3. Take note of the Exit IPv4 and Exit IPv6 of each of the sessions that you have connected to your pfSense router. 4. Go to your domain registrar, and register an A record for each exit IPv4 address from step 3 and an AAAA record for each exit IPv6 address from step 3. 5. Complete the DNS-01 challenge from Let's Encrypt to get your SSL certificates, as in the example below.
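     For instance, with certbot and its manual DNS-01 flow (the domain is a placeholder, and most registrars also have dedicated DNS plugins that automate this):
        certbot certonly --manual --preferred-challenges dns -d vpn.example.com
     Certbot will then ask you to publish a TXT record named _acme-challenge.vpn.example.com at your registrar before it issues the certificate.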
  12. I think what desiderato means is that he wants to use AirVPN to connect to his office server remotely. Here are the steps he needs to follow: 1. From the AirVPN client area, use the Config Generator to create a Wireguard profile for his office server. 2. Activate that Wireguard profile on his office server. 3. From the client area, use the AirVPN Port Forwarding feature to forward a port from an AirVPN server of his choice to the port of whatever remote access app he is using (assuming that remote access app is encrypted; if not, he should consider ssh tunneling). 4. Connect to the forwarded port remotely, as shown below.
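     For example, if the remote access app is ssh and AirVPN forwarded port 34567 (a placeholder) on the server his office machine is connected to, he would connect with something like:
        ssh -p 34567 user@<exit IP address of that AirVPN server>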