
Hummingbird unofficial Docker image


Here:
I implemented tini, made the entrypoint sane on restart, and gave it a healthcheck.
Here is the WIP repo:
edit: don't use my repo, lol.

On 11/27/2021 at 5:49 PM, princesskenny said:


Hummingbird in Docker with a trustworthy firewall and killswitch is still desirable to me. Looking forward to testing something. 🙂


Pretty sure it can't leak now. I'd love for you to run it through your tests. Unfortunately, it's probably not filtering incoming connections in the standard way, and compose is not properly exposing services which share its stack, but the worst workaround in my environment is that I now have two nginx services.
On 11/27/2021 at 2:25 AM, fschaeck said:

Well… I dug into the hummingbird and NetFilter code a bit and it looks like the dependencies on systemd/sysinit for the network lock are fairly limited. I am looking into remedying those with a couple of "if docker then … else …" branches and seeing where it leads. I think the network lock is a pretty important piece of the picture and it should work well even in a docker environment.

I retract my statement. Your insight into the source code is going to be very helpful. Is hummingbird manipulating netfilter behind iptables/nft? I'm stumped on that end. No matter what hummingbird does, I never find anything in nft or iptables, but the way the container responds changes in a way that suggests firewall changes. What am I not understanding?
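
For reference, here is roughly how I have been checking inside the container. One gotcha I may not have ruled out: rules loaded via the legacy iptables backend don't show up in the nft view and vice versa (the commands below assume the usual iptables/nftables userland tools are installed in the image):

# dump everything nftables knows about
nft list ruleset
# dump the iptables view (on many images this binary is nft-backed)
iptables-save
# if the image ships both backends, check the legacy one too
iptables-legacy-save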


I just committed fixes to https://gitlab.com/fschaeckermann/hummingbird.git that now make the hummingbird client run in an Alpine Linux container as well.

The image produced by Dockerfile.alpine is merely 23.6 MB in size… good enough for a single-purpose service container, I guess.

Next is to look closer at what happens with IPv6 in a docker container. There seems to be some kind of permission problem…

On 12/3/2021 at 5:21 AM, fschaeck said:

I just committed fixes to https://gitlab.com/fschaeckermann/hummingbird.git that now make the hummingbird client run in an Alpine Linux container as well.

 

You're amazing!! I was disappointed with my work.  Being limited to TCP and having to restart with kill -1 is basically a hack.  There is contention over what "expected behavior" means when the network service container goes down, so, basically, any container with network_mode: service:hummingbird would have to be fully restarted whenever the TCP session breaks.  My main line out to the internet is a WireGuard tunnel, so my hummingbird fork became prohibitively obnoxious to test.  I DO have a 217 MB wireguard:impish image with the s6 overlay, stolen and hacked apart from aptalca's work. I will share it in a few days once testing is complete.
https://github.com/docker/compose/issues/6626
Your image is VERY appealing.  Have you tested it with a UDP connection?
This gentleman is trying to solve the "restarting network-dependent containers" problem for his project, gluetun, so once he has something to show we can see how he handled it.  It would be nice to simply "refresh" the networking stack of the dependent containers instead of restarting them, as not every service plays nice with repeated up/downs.
https://github.com/qdm12/gluetun/issues/641
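
For context, the coupling looks roughly like this in compose terms (a minimal sketch, service names made up):

version: '3'
services:
  hummingbird:
    image: hummingbird          # the VPN client container
    cap_add:
      - NET_ADMIN
  web:
    image: nginx
    # shares hummingbird's network stack, so if hummingbird is
    # recreated, this container's network endpoint dies with it
    # and web must be fully restarted too
    network_mode: service:hummingbird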

11 hours ago, whiteowl3 said:

Have you tested it with a UDP connection?

Yes I did. And it is working fine for me!

Is the problem with the dependent containers something that happens on mere network problems or only if the network providing container is actually going down? If the second is the case, it might be much simpler to provide some way of making the client reconnect inside the hummingbird container instead of having to restart the container and producing the problems in the dependent containers.

I know the code well enough by now that I can look into having a specific signal caught and make the client re-establish a new connection without having to bring down the container.

I might even add a thread monitoring the connection and doing the reconnect automatically if any problems are detected… Maybe even checking the externally visible IP address of the tunnel against the clear-net one to make sure they are different by requesting http://ipv[46].icanhazip.com or some similar page. That would have the added benefit of knowing if the Internet is having a problem (if I can’t get the clear-net address) or the tunnel is broken (if I can’t get the address through the tunnel).
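
A first sketch of that check, assuming curl is available in the container and that the clear-net address was captured in CLEAR_IP before the tunnel came up:

# ask icanhazip for the currently visible IPv4 address through the tunnel
TUNNEL_IP=$(curl -s -4 --max-time 5 http://ipv4.icanhazip.com)
if [ -z "$TUNNEL_IP" ]; then
    echo "no reply through the tunnel - the Internet may be down" >&2
elif [ "$TUNNEL_IP" = "$CLEAR_IP" ]; then
    echo "visible IP equals the clear-net one - tunnel is broken" >&2
else
    echo "tunnel OK, exit IP is $TUNNEL_IP"
fi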

14 hours ago, fschaeck said:
Is the problem with the dependent containers something that happens on mere network problems or only if the network providing container is actually going down? If the second is the case, it might be much simpler to provide some way of making the client reconnect inside the hummingbird container instead of having to restart the container and producing the problems in the dependent containers.
I agree that this is the much better approach.  It's a hard limitation that the autoheal/kill-tini approach always requires a dependent container restart.  It's much better to manage restarting within the network container, but I doubt this will totally avoid network loss in all possible dependent containers.  Supposedly the gluetun dev is working on a solution that can refresh the internal network stack of the dependent containers using Docker endpoints that are only exposed in Go.  Hopefully it works out, because such an ability would be a complete solution, and I suspect the only one.
14 hours ago, fschaeck said:
Maybe even checking the externally visible IP address of the tunnel against the clear-net one to make sure they are different by requesting http://ipv[46].icanhazip.com or some similar page.
I vote that you don't make the VPN client container aware of the host's public-facing IP under any circumstances.  Much better, maybe, to get dig functionality into your container, grep the internal VPN DNS IP, and compare digs against the internal DNS vs 1.1.1.1:
if you can dig google at 1.1.1.1, but not at 10.128.0.1... etc etc
https://pkgs.alpinelinux.org/package/edge/main/x86_64/bind-tools
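
Something like this, say (a rough sketch; 10.128.0.1 is just the example address from above, the real one gets grepped from /etc/resolv.conf as suggested):

# pull the first nameserver hummingbird wrote into resolv.conf
VPN_DNS=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
if dig +short +time=2 google.com @"$VPN_DNS" >/dev/null 2>&1; then
    echo "tunnel DNS at $VPN_DNS is answering"
elif dig +short +time=2 google.com @1.1.1.1 >/dev/null 2>&1; then
    echo "tunnel DNS is dead but 1.1.1.1 answers" >&2
else
    echo "no DNS resolution at all" >&2
fi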


It occurred to me today that the way hummingbird handles the DNS could be a problem.  Shouldn't the container's resolv.conf always say "127.0.0.11" so that it can resolve the other containers?  There is, I think, a way to inform the Docker daemon that a particular DNS address should be used upstream of a particular container, such as the functionality implemented by the docker-compose dns directive, but I'm not familiar with the best way to access it.
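
If I understand the directive right, it would look something like this in a compose file (an untested sketch):

services:
  hummingbird:
    dns:
      - 1.1.1.1    # upstream DNS the daemon's resolver should use for this container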
 

11 hours ago, whiteowl3 said:

I vote that you don't make the vpn client container aware of the hosts public facing ip under any circumstances. 


What would be the problem with that? I think it would even be a benefit if the container could signal everybody when the public IP changes. The BitTorrent client Transmission, as an example, needs to be restarted if the VPN exit address changes, otherwise it won't get any connections anymore.

Anyway… if one starts the hummingbird container with -e "EXTRA_ARGS=--persist-tun" then the hummingbird client can be made to reconnect by sending a USR2 signal to the process. Thus a command like

docker exec -it hummingbird /bin/sh -c 'ps -A | sed -nE "s|^\s*([0-9]+)\s.*/usr/bin/hummingbird.*$|kill -USR2 \1|p" | /bin/sh'

would initiate a reconnect.
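
If pkill is available in the image (BusyBox ships one), something shorter along these lines might do the same, though I have not verified it:

docker exec hummingbird pkill -USR2 -f /usr/bin/hummingbird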

37 minutes ago, whiteowl3 said:

It occurred to me today that the way hummingbird handles the DNS could be a problem.  Shouldn't the container's resolv.conf always say "127.0.0.11" so that it can resolve the other containers?  There is, I think, a way to inform the Docker daemon that a particular DNS address should be used upstream of a particular container, such as the functionality implemented by the docker-compose dns directive, but I'm not familiar with the best way to access it.


Resolving the other containers' IP addresses is not done through resolv.conf and DNS but through /etc/hosts. Those files are re-written whenever new containers appear or containers are removed.

Also, if you start a container with --network container:hummingbird, that container won't have its own network stack anymore. It uses the one of the hummingbird container and even has the same IP address as the hummingbird container. Therefore, specifying specific DNS servers for such dependent containers wouldn't make a whole lot of sense, or am I missing something here?
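
That is easy to see for yourself, assuming the images involved have the ip tool:

# both commands print the same interfaces and the same IP address
docker exec hummingbird ip addr
docker run --rm --network container:hummingbird alpine ip addr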

14 hours ago, fschaeck said:

Resolving the other containers' IP addresses is not done through resolv.conf and DNS but through /etc/hosts. Those files are re-written whenever new containers appear or containers are removed.
I only use docker-compose to up and down containers, so I don't want to put my foot further in my mouth, but in my compose setups I have never observed the host or container hosts files changing.  Neighboring containers are resolved via DNS at 127.0.0.11 in every deployment I've experienced, but it sounds like you know what you're talking about.
 
14 hours ago, fschaeck said:

Anyway… if one starts the hummingbird container with -e "EXTRA_ARGS=--persist-tun" then the hummingbird client can be made to reconnect by sending a USR2 signal to the process. Thus a command like

docker exec -it hummingbird /bin/sh -c 'ps -A | sed -nE "s|^\s*([0-9]+)\s.*/usr/bin/hummingbird.*$|kill -USR2 \1|p" | /bin/sh'

would initiate a reconnect.
You're absolutely right.  My motivating concern is the amount of FUD it would bring to attempts to support the container, but I guess it's nothing a sticky post couldn't resolve.  The benefits are certainly there.

Well, I'm confident you've got something good going; you seem much more qualified than I am. Thank you so much, this really is very impressive so far.


Wrote up a quick guide on how to get this working on Synology DSM 7. I get slightly better up/down speeds using this method than with the built 2.4.9 client.

1. Create a folder for docker; for example, I will create /volume1/docker/hummingbird-vpn/
2. Copy the contents of https://gitlab.com/fschaeckermann/hummingbird to this folder
3. SSH into the NAS
4. Run the following command:

sudo chmod 666 /var/run/docker.sock    (or use sudo su)

5. Run the following command:
docker build -t hummingbird -f Dockerfile.debian /volume1/docker/hummingbird-vpn/
    Let it run; don't worry about red text.
    In the meantime, download an OpenVPN configuration file (version >= 2.5) from https://airvpn.org/generator/
    Rename the config file to config.ovpn
    Place config.ovpn in /volume1/docker/hummingbird-vpn/

6. Once it's done, create a stack in Portainer.

Here's my compose file as an example:
version: '3'

services:
  hummingbird-vpn:
    image: hummingbird
    container_name: hummingbird-vpn
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      TZ: 'America/Los_Angeles'
      NETWORK_LOCK: 'off'
    restart: always
    volumes:
      - /lib/modules:/lib/modules:ro
      - /dev/net:/dev/net:z
      - /volume1/docker/hummingbird-vpn/config.ovpn:/config.ovpn:Z
    ports:
      - "8080:8080"
      - "8765:80"

7. Create the stack.

8. For any stacks you want to use this network, do the following (see the sketch after these steps):

8.1. Update the hummingbird-vpn stack to add the ports the app uses, then update the stack.
8.2. In the stack which should use the network, add/change:
    network_mode: "container:hummingbird-vpn"
8.3. Remove any ports from this stack and update the stack.
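
For example, a dependent stack might end up looking like this (someapp is a placeholder name):

version: '3'

services:
  someapp:
    image: someapp:latest
    container_name: someapp
    network_mode: "container:hummingbird-vpn"
    # no ports: section here - publish the app's ports
    # on the hummingbird-vpn stack instead (step 8.1)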


Hello,
I was lurking on this thread for some time and I've got to say you guys are amazing. All the work you did for the community is awesome.
I installed hummingbird-vpn as stated above on my OpenMediaVault server and the container is up and running.
So I tried to connect my FileZilla container to the hummingbird-vpn container. At first look it worked. The WebUI came up and I could access my storage.
But when I tried to connect to an FTP server (e.g. ftp.rapidgator.net) it gives me an error:

Connection attempt failed with "EAI_AGAIN - Temporary failure in name resolution".

Could this be a DNS error or am I doing something wrong? Maybe there is a config that I haven't found yet to be done.
I hope you can help me. Thank you in advance.


Did you try to use the IP address of the server instead of its name?

The error message kind of implies that whatever DNS server got configured in the hummingbird container is currently not working right. It could therefore really be a temporary problem with the DNS. Hence my question whether you tried the server's IP.

Or did you configure any proxies in FileZilla? They could be the problem as well.


Thank you very much. I tried it some more times with the server name on other AirVPN servers but I couldn't get it working either. So there seems to be a general DNS problem.
But with the IP address it worked like a charm.

I have no proxies configured. I just installed the container, bound the network to hummingbird and tried to connect to a server. So it was a clean install right away. Maybe others have that problem too.
So connecting by IP is the deal.


Well… I'll have a look at it as soon as I find the time and check what is going on with the DNS and a FileZilla container. Once I have anything interesting I'll post it here.


" where 'locktype' is one of "on", "iptables", "nftables", "pf" or "off" "
What are the differences in practice between these firewall options? People should go with the type that they are familiar with configuring? And if I'm not familiar with any, is there a recommend one?

1 hour ago, princesskenny said:

What are the differences in practice between these firewall options? Should people go with the type that they are familiar with configuring?


You should always go with the developer's default if you don't know what you're configuring, simple as that. And the default is "on", which selects one based on the OS and the availability of the options. "off" is the famous I-know-what-I'm-doing switch. :)



Attention! Long, but important post!

I got around to spending some more time on this docker image and there are some important - breaking - changes to entrypoint.sh! I changed it to accept exactly the same options as the hummingbird client; in fact I am just passing through everything the container is started with (all the parameters behind the image name of the docker run command). Thus there is no need anymore for any environment variables.

That change allows for maximum flexibility for future improvements, of which one is already implemented with this version of the image: the hummingbird client inside the container now understands the option

--bypass-vpn

which can be specified multiple times and must be followed by a full domain name or an IPv4/IPv6 subnet specification of the form <ip address>/<prefix length>.

The container will set up a route to the clear net and poke a hole into the netfilter for those addresses.
That makes the local network reachable even from inside the VPN container, by passing in something like --bypass-vpn 192.168.9.0/24
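
A full invocation might then look roughly like this (only a sketch; paths and the container name are made up, see README.md for the exact form):

docker run -d --name hummingbird \
    --cap-add NET_ADMIN --cap-add SYS_MODULE \
    --device /dev/net:/dev/net \
    -v /lib/modules:/lib/modules:ro \
    -v /path/to/airvpn.ovpn:/config.ovpn \
    fschaeckermann/airvpn-hummingbird \
    --bypass-vpn 192.168.9.0/24 \
    /config.ovpn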

Check the README.md for more information on how to start the container, especially if you are using IPv6: the option

--sysctl net.ipv6.conf.all.disable_ipv6=0

is required in the docker run command to enable tunneling of IPv6.

On 1/27/2022 at 10:38 PM, princesskenny said:

" where 'locktype' is one of "on", "iptables", "nftables", "pf" or "off" "
What are the differences in practice between these firewall options? People should go with the type that they are familiar with configuring? And if I'm not familiar with any, is there a recommend one?


In entrypoint.sh I reset the iptables to all-open before starting the hummingbird client, because I had problems inside the container once the client exited without having done so upfront. Since I have personally never used nftables or pf, I have no idea how to do the same with those firewalls. Anybody knowledgeable enough: let me know how, and I will amend entrypoint.sh to do so for those as well -
or better, send a merge request! 😉

Generally I recommend using the firewall you are used to and especially the one you are using on the docker host anyway. That produces the least overhead. The firewall-all-open-code in entrypoint.sh is purely optional and there only for my convenience when testing the client by restarting it inside the same container over and over again.
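
For the record, the all-open reset amounts to something like this (a sketch; the actual entrypoint.sh may differ in details):

# set all default policies to ACCEPT and flush every rule
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
iptables -t nat -F
# for nftables, 'nft flush ruleset' would presumably be the equivalent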

As a side-note: I read through the thread again and found this sentence:

On 12/1/2021 at 7:27 AM, whiteowl3 said:

unfortunately, it's probably not filtering incoming connections in the standard way,


Could you elaborate on what the problem is you are seeing here? It would probably be easy to add some initial firewall setup for the container, allowing for customization even of the input filtering, i.e. doing THAT instead of resetting the firewall in entrypoint.sh to all-open.

On 1/22/2022 at 10:40 AM, andiman3 said:

Thank you very much. I tried it some more times with the server name on other AirVPN servers but I couldn't get it working either. So there seems to be a general DNS problem.
But with the IP address it worked like a charm.

I have no proxies configured. I just installed the container, bound the network to hummingbird and tried to connect to a server. So it was a clean install right away. Maybe others have that problem too.
So connecting by IP is the deal.


What FileZilla container are you using exactly? My tests with a plain Alpine container attached to the hummingbird container's network did not show any DNS problems at all. So it must be something specific to the FileZilla container.


I thought a little more about the DNS problem in the FileZilla container and I think I know what the problem is…

When hummingbird is started, it uses the current system-provided DNS to resolve the VPN server name and establishes the VPN tunnel. Next, the VPN server pushes its own DNS server IPs to the client, hummingbird exchanges the current system-provided ones in /etc/resolv.conf for those new ones, and DNS resolution keeps happily hopping along - in its own container.

If the system-provided ones were pointing to some local server (like your router in your home LAN, with an address like 192.168.0.1), those are not reachable anymore once the tunnel is up; but since hummingbird's /etc/resolv.conf is already pointing to some that are reachable - no problem!

Except… even though the FileZilla container is sharing the hummingbird container's network, it does not share its /etc/resolv.conf. Thus the FileZilla container is out of luck if its /etc/resolv.conf is still pointing at some now-unreachable local DNS servers.

There seem to be two possibilities to work around that:

1. share /etc/resolv.conf across all the containers sharing the hummingbird container's network - not sure if this is even possible, someone would have to try (see the sketch below), or
2. make sure the FileZilla container is using some external DNS servers (like 1.1.1.1) to begin with.
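
If I remember right, docker run refuses --dns together with --network container:…, so the work-arounds would probably have to be done by bind-mounting a prepared resolv.conf into the dependent container. An untested sketch, with made-up paths and image name:

# a resolv.conf on the host pointing at an externally reachable server
echo 'nameserver 1.1.1.1' > /docker/shared-resolv.conf
# bind-mount it over the dependent container's own copy
docker run -d --name filezilla \
    --network container:hummingbird \
    -v /docker/shared-resolv.conf:/etc/resolv.conf:ro \
    <filezilla-image>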

I have no way of checking right now whether hummingbird's network lock might cripple all DNS traffic on port 53 and only let the pushed IPs go through, but that should be easily worked around by using DNS over TLS and such.

I’ll have to investigate more to see if this is actually the problem, and try whether using 1.1.1.1 in the sharing (FileZilla) container gets DNS working again - or not…


By the way: I put some ready-made docker images on Docker Hub:

fschaeckermann/airvpn-hummingbird

for x86_64, armv6, armv7 and arm64…

See https://github.com/fschaeck/airvpn-hummingbird-docker for more information.
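
Pulling should be as simple as this, assuming the images are pushed as a multi-arch manifest so docker picks the matching architecture automatically:

docker pull fschaeckermann/airvpn-hummingbird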

Enjoy!


I know this is a dumb question, but I'm not sure how to set the options correctly. I don't know how to translate <hummingbird-command-options> into a docker run or docker-compose command. Thanks, and sorry.
 

sudo docker run -it --cap_add=NET_ADMIN --cap_add=SYS_MODULE -v /lib/modules:/lib/modules:ro -v /mnt/data:/data -v /docker/appdata/airvpn-hummingbird:/config --device /dev/net:/dev/net fschaeckermann/airvpn-hummingbird <hummingbird-command-options> = /config/airvpn.ovpn
version: '3.7'
services:
  airvpn-hummingbird:
    container_name: airvpn-hummingbird
    restart: unless-stopped
    logging:
      driver: json-file
    volumes:
      - /docker/appdata/airvpn-hummingbird:/config
      - /etc/localtime:/etc/localtime:ro
      - /mnt/data:/data
    environment:
      - <hummingbird-command-options>='/config/airvpn.ovpn'
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    devices:
      - '/dev/net:/dev/net'
    image: fschaeckermann/airvpn-hummingbird


 


