Disconnect on lxc-guest networking via veth
- 10 months 2 weeks ago
I'm trying to bridge the host eth0 interface via veth to an LXC container. I'm running a vanilla kernel for now (Linux 4.4.30 on Alpine 3.4.6), since I'm unsure about possible interactions between grsecurity and LXC.
On the host I have the following networking setup:
auto br0
iface br0 inet static
    bridge_ports eth0
    address $MYADDRESS
    netmask 255.255.0.0
    gateway $MYGATEWAY
/proc/sys/net/ipv4/ip_forward is set to 1.
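For reference, here is how I check the flag (the commented sysctl line is the runtime equivalent of writing to the proc file):

```shell
# Check the current value (1 = IPv4 forwarding enabled):
cat /proc/sys/net/ipv4/ip_forward

# Runtime equivalent of writing 1 to the proc file:
#   sysctl -w net.ipv4.ip_forward=1
```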
The container config has the following lines:
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = $MYADDRESS/16
lxc.network.ipv4.gateway = $MYGATEWAY
lxc.network.flags = up
It is an unprivileged container, if that matters.
After starting the container, I have the following lines in the dmesg output:
[178761.447873] eth0: renamed from vethKBILY4
[178761.469699] IPv6: ADDRCONF(NETDEV_CHANGE): vethR5BIQV: link becomes ready
[178761.469717] br0: port 2(vethR5BIQV) entered forwarding state
[178761.469723] br0: port 2(vethR5BIQV) entered forwarding state
[178776.473395] br0: port 2(vethR5BIQV) entered forwarding state
In the guest (Debian, since Alpine is not supported as an unprivileged LXC container), I get the following output from the ip addr command:
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:02:b0:98:cc:f0 brd ff:ff:ff:ff:ff:ff
    inet $MYADDRESS/16 brd x.y.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d402:b0ff:fe98:ccf0/64 scope link
       valid_lft forever preferred_lft forever
The host has the following for the veth interface:
12: veth0YIN6D@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master br0 state UP qlen 1000
    link/ether fe:46:46:96:df:bb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc46:46ff:fe96:dfbb/64 scope link
       valid_lft forever preferred_lft forever
Now, when I try to ping something from a shell inside the container, my SSH connection to the host is lost. After a while I can reconnect. dmesg shows nothing for that period.
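In case it helps with diagnosis, this is roughly what I run after reconnecting (iproute2 commands; $MYGATEWAY is the placeholder from the configs above):

```shell
# On the host, after the SSH session comes back:
ip neigh show      # look for stale or FAILED entries for $MYGATEWAY
ip -s link show    # check br0 / veth* counters for TX errors or drops

# In the container shell, confirm the default route points at $MYGATEWAY:
ip route show
```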
Does anyone know what's going on and how to fix the container's networking?