submitted 6 months ago* (last edited 6 months ago) by onlinepersona@programming.dev to c/programming@programming.dev

I'd basically like to run some containers within a VPN and some outside of it. The containers running within the VPN should not be able to send or receive any traffic outside the VPN (except maybe to/from localhost).

The container could be Docker, Podman, or even a QEMU VM or some other solution if need be.

Is that possible? Dunno if this is the right place to ask.

--- Resolution ---

Use https://github.com/qdm12/gluetun folks.

Anti Commercial-AI license

top 11 comments
[-] adam@doomscroll.n8e.dev 16 points 6 months ago

Easily doable in Docker using the network_mode: "service:VPN_CONTAINER" configuration (assuming your VPN is running as a container).
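
For example, a minimal compose sketch of that pattern (the service and image names here are placeholders; the vpn service could be gluetun, a plain WireGuard container, etc.):

services:
  vpn:
    image: your-vpn-image          # placeholder for whatever VPN container you run
    cap_add:
      - NET_ADMIN
  app:
    image: your-app-image          # placeholder for the service that must stay inside the VPN
    network_mode: "service:vpn"    # app shares vpn's network stack, so it has no other way out
    depends_on:
      - vpn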

[-] PlexSheep@infosec.pub 1 points 6 months ago

That works, but IIRC it breaks down when you redeploy the VPN container. I don't do this anymore.

(Now I just use LXC containers with Docker inside, and I set the default gateway of the LXC to another LXC that acts as the gateway for a VPN network.)
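
A sketch of what that looks like in plain LXC config terms (the bridge name and addresses are made up; 10.10.0.1 here would be the LXC that runs the VPN and NATs traffic into it):

lxc.net.0.type = veth
lxc.net.0.link = br-vpn                  # bridge shared with the VPN-gateway container
lxc.net.0.ipv4.address = 10.10.0.5/24
lxc.net.0.ipv4.gateway = 10.10.0.1       # the other LXC acting as the VPN gateway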

[-] FrostyCaveman@lemm.ee 12 points 6 months ago

It is very doable.

Take a look at https://github.com/qdm12/gluetun - it’s what I use for this.
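
For reference, a rough compose sketch of a gluetun service (the provider and credential values are placeholders; the exact environment variables depend on your provider, see gluetun's wiki):

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=yourprovider   # placeholder
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=...           # from your provider
      - WIREGUARD_ADDRESSES=...             # from your provider

Other services are then attached with network_mode: "service:gluetun", as in the comment above.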

[-] onlinepersona@programming.dev 3 points 6 months ago

Perfect, that's what I was looking for! Thanks dude.

Anti Commercial-AI license

[-] mynamesnotrick@lemmy.zip 3 points 6 months ago

Seconding gluetun; it's easy to use and configure.

[-] TCB13@lemmy.world 6 points 6 months ago* (last edited 6 months ago)

Gluetun is overkill if you already have a working setup. Your system can handle this in a much simpler way with built-in tools.

You can use systemd to restrict a daemon to your VPN IP. For instance, here's how to do that with Transmission: override the default unit using the following command:

systemctl edit transmission-daemon.service

Then type what you need to override:

[Service]
IPAddressDeny=any
IPAddressAllow=10.0.0.1 # --> your VPN IP here

Another option might be to restrict it to a single network interface:

[Service]
RestrictNetworkInterfaces=wg0 # --> your VPN interface

Save the file, then run systemctl daemon-reload followed by systemctl restart transmission-daemon.service, and the override should be applied.

This is a simple and effective solution that doesn't require more stuff.

[-] onlinepersona@programming.dev 2 points 6 months ago

Thanks, great to know! systemd can really do a lot.

Anti Commercial-AI license

[-] BB_C@programming.dev 4 points 6 months ago

You don't even need full-fledged containers for that btw.

Learn how to script with ip netns and veth.

[-] Scipitie@lemmy.dbzer0.com 2 points 6 months ago

Do you have a link at hand on how to start a process within a specific veth, by chance? Creating namespaces on their own is easy enough and there are lots of tutorials, but I don't want my programs to ever be outside the VPN space, not at startup, not as a failover, etc.

That's the reason I stuck with the container setup: just gluetun plus the VPN'd services.

[-] BB_C@programming.dev 6 points 6 months ago* (last edited 6 months ago)

start a process within a specific veth

That sentence doesn't make any sense.

Processes run in network namespaces (netns), and that's exactly what ip netns exec does.

A newly created netns via ip netns add has no network connectivity at all. Even (private) localhost is down and you have to run ip link set lo up to bring it up.

You use veth pairs to connect a virtual device in a network namespace with a virtual device in the default namespace (or another namespace with internet connectivity).

You route the VPN server address via the netns veth device and nothing else. Then you run WireGuard/OpenVPN inside the netns.
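
As a rough sketch of those steps (the namespace name, addresses, and interface names here are made up, and the forwarding/NAT rule needed on the host side is assumed rather than shown):

ip netns add AA
ip netns exec AA ip link set lo up        # loopback is down by default in a new netns

# veth pair: vethAA stays in the default namespace, vethAA-ns moves into AA
ip link add vethAA type veth peer name vethAA-ns
ip link set vethAA-ns netns AA
ip addr add 10.200.0.1/24 dev vethAA
ip link set vethAA up
ip netns exec AA ip addr add 10.200.0.2/24 dev vethAA-ns
ip netns exec AA ip link set vethAA-ns up

# route only the VPN server (example address) via the veth pair, nothing else
ip netns exec AA ip route add 203.0.113.10/32 via 10.200.0.1

# finally, start WireGuard/OpenVPN inside AA so the tunnel becomes AA's default route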

Avoid using systemd for this, since services it starts run in the default netns, even if systemctl is called from a process running in another netns.

The way I do it is:

  1. A script for all the network setup:
     ns_con AA
  2. A script to run a process in a netns (basically a wrapper around ip netns exec):
     ns_run AA <cmd>
  3. Run a terminal app using 2.
  4. Run a tmux session on a separate socket inside the terminal app, e.g.
     export DISPLAY=:0 # for X11
     export XDG_RUNTIME_DIR=/run/user/1000 # to connect to the already running pipewire...
     # double-check this is running in the AA netns
     tmux -f <alternative_config_file_if_needed> -L NS_AA
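
As a rough idea, a wrapper like ns_run from step 2 could be as small as the following (ns_run is the commenter's own script, so this is only a guess at its shape based on the description above):

#!/bin/sh
# ns_run: run a command inside a named network namespace
# usage: ns_run AA <cmd> [args...]
set -eu
ns="$1"; shift
# ip netns exec needs root, so drop back to the invoking user for the command itself,
# preserving the environment (DISPLAY, XDG_RUNTIME_DIR, ...) set up beforehand
exec sudo ip netns exec "$ns" sudo -u "${SUDO_USER:-$USER}" -E "$@"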

I have this in my tmux config:

set-option -g status-left "[#{b:socket_path}:#I] "

So I always know which socket a tmux session is running on. You can include network info there if you're still not confident in your setup.

Now, I can detach that tmux session. Reattaching with tmux -L NS_AA attach from anywhere will give me the session still running in AA.

[-] Scipitie@lemmy.dbzer0.com 2 points 6 months ago

Yeah I had a brainfart, meant namespace...

And thanks a lot for this writeup. I think with your help I figured out where I went wrong in my train of thought, and I'll give it another try next week when I have a bit of downtime.

The time you took to write this is highly appreciated! ♥
