Disconnect on lxc-guest networking via veth
#1 Sun, 2016-11-20 15:07
z33ky
  • Joined: 2014-06-13

I'm trying to bridge the host eth0 interface via veth to an LXC container. I have a vanilla kernel for now (Linux 4.4.30 on Alpine 3.4.6), since I'm unsure about any interactions between grsecurity and LXC.

On the host I have the following networking setup:

auto br0
iface br0 inet static
	bridge_ports eth0
	address $MYADDRESS
	netmask 255.255.0.0
	gateway $MYGATEWAY

/proc/sys/net/ipv4/ip_forward is set to 1.

The container config has the following lines:

lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = $MYADDRESS/16
lxc.network.ipv4.gateway = $MYGATEWAY
lxc.network.flags = up

It is an unprivileged container, if that matters.

After starting the container, I have the following lines in the dmesg output:

[178761.447873] eth0: renamed from vethKBILY4
[178761.469699] IPv6: ADDRCONF(NETDEV_CHANGE): vethR5BIQV: link becomes ready
[178761.469717] br0: port 2(vethR5BIQV) entered forwarding state
[178761.469723] br0: port 2(vethR5BIQV) entered forwarding state
[178776.473395] br0: port 2(vethR5BIQV) entered forwarding state 

In the guest (Debian, since Alpine is not supported as an unprivileged LXC container), I get the following output from the ip addr command:

11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:02:b0:98:cc:f0 brd ff:ff:ff:ff:ff:ff
    inet $MYADDRESS/16 brd x.y.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d402:b0ff:fe98:ccf0/64 scope link 
       valid_lft forever preferred_lft forever

The host has the following for the veth interface:

12: veth0YIN6D@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master br0 state UP qlen 1000
    link/ether fe:46:46:96:df:bb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc46:46ff:fe96:dfbb/64 scope link 
       valid_lft forever preferred_lft forever

Now when I try to ping something from a shell in the container, my ssh connection to the host gets lost. After a while I can reconnect. dmesg shows nothing happening during that period.

Does anyone know what's going on and how to fix the container's networking?

Fri, 2016-12-09 20:46
z33ky

Alright, solved my problem. I guess I just hadn't set up enough networks yet. The problem and the solution make sense now that I know how to do it right.

Instead of creating a bridge that binds to eth0, I'm creating an unbound bridge, on which I set up a local network.
In the containers, I still link the veth interface to the bridge, but use an address from the local network and set the host as the gateway.
To get access to the public network, I set up NAT via iptables on the host to route traffic between the bridge and eth0.
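For anyone following along, here is a sketch of that setup. The subnet 10.0.3.0/24, the host address 10.0.3.1, and the container address 10.0.3.2 are illustrative; substitute your own:

```shell
# Host /etc/network/interfaces: an unbound bridge (no eth0 in bridge_ports)
# carrying a private subnet -- shown here as a comment, not a command:
#
#   auto br0
#   iface br0 inet static
#       bridge_ports none
#       address 10.0.3.1
#       netmask 255.255.255.0

# Enable forwarding, then NAT traffic from the bridge subnet out via eth0:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

The matching container config lines would then be lxc.network.ipv4 = 10.0.3.2/24 and lxc.network.ipv4.gateway = 10.0.3.1, with lxc.network.link = br0 as before.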

Oh and unprivileged containers seem to work fine as long as you boot with grsec_sysfs_restrict=0 and turn a few sysctl knobs:

kernel.grsecurity.chroot_caps = 0
kernel.grsecurity.chroot_deny_chmod = 0
kernel.grsecurity.chroot_deny_pivot = 0
kernel.grsecurity.chroot_deny_chroot = 0
kernel.grsecurity.chroot_deny_mount = 0
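The lines above are in sysctl.conf format; a sketch of applying them at runtime (put the same lines in /etc/sysctl.conf or a file under /etc/sysctl.d/ to make them persist across reboots):

```shell
# Relax the grsecurity chroot restrictions needed for unprivileged LXC:
sysctl -w kernel.grsecurity.chroot_caps=0
sysctl -w kernel.grsecurity.chroot_deny_chmod=0
sysctl -w kernel.grsecurity.chroot_deny_pivot=0
sysctl -w kernel.grsecurity.chroot_deny_chroot=0
sysctl -w kernel.grsecurity.chroot_deny_mount=0
```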
