NetworkManager Bonds, Teams, and VLANs (CentOS 7)

First of all, Red Hat has some great documentation for NetworkManager located here. However, it's a bit overwhelming, especially if you want to get something working quickly.

Bonding

The new way of doing trunks/lags is the team driver; however, EL7 still supports the older bonding driver.

This example will use the name bond1 for the interface, using em3 and em4 devices as the slaves.

First, set up the bond and the mode (LACP is used here, but note that Red Hat's documentation doesn't mention anything about editing LACP options, such as fast mode):

# nmcli connection add type bond con-name bond1 mode 802.3ad

I want to use the bond with a VLAN, and NM automatically sets the IP method to DHCP for the command above, so the following disables it:

# nmcli connection modify bond1 ipv6.method ignore ipv4.method disabled

If you want to set a static IP for the bond, do the following:

# nmcli connection add type bond con-name bond1 mode 802.3ad ip4 192.168.1.20/24 gw4 192.168.1.1

Now we add the slaves. (I am explicitly defining the con-name here, because there are multiple bonds on this particular system and the default naming for slaves that NM uses doesn't make clear which slaves belong to which bonds in the nmcli connection view.)

# nmcli connection add type bond-slave con-name bond1-port1 ifname em3 master bond1
# nmcli connection add type bond-slave con-name bond1-port2 ifname em4 master bond1

NetworkManager should bring up both interfaces and the bond after executing the above commands.

You can check out the status of the bond by viewing /proc/net/bonding/nm-bond1:

# cat /proc/net/bonding/nm-bond1
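
As a rough illustration, the file reports an MII status line for the bond itself and one per slave, which makes it easy to grep for failed links. The sample embedded below is abbreviated and hypothetical (real output has many more fields and varies by kernel version):

```shell
#!/bin/sh
# Abbreviated, hypothetical sample of /proc/net/bonding/nm-bond1.
# The first "MII Status" line is the bond itself; the rest are slaves.
sample='Ethernet Channel Bonding Driver: v3.7.1

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up

Slave Interface: em3
MII Status: up

Slave Interface: em4
MII Status: down'

# Count links reporting "down" -- a quick health check you could run
# against the real file instead of the embedded sample.
down=$(printf '%s\n' "$sample" | grep -c '^MII Status: down')
echo "slaves down: $down"    # prints: slaves down: 1
```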

To add a VLAN and a static IP on top of the bond, do the following:

# nmcli connection add type vlan con-name VLAN50 dev nm-bond1 id 50 ip4 192.168.50.23/24
# nmcli connection modify VLAN50 ipv6.method ignore

Teaming

Teaming is the new way of doing trunks/lags. Read more here.

For this example, we are going to use LACP, using fast rate PDU timing. No sense in not doing fast mode, since a PDU is only 110 bytes sent every second for rapid detection of link failures.

First, set up the team interface. If you do not specify any IPv4 or IPv6 options, NetworkManager will automatically set up DHCP for both, and set the link connection to automatically come up at boot time.

# nmcli connection add type team con-name team1 ifname team0 config '{"runner":{"name":"lacp", "fast_rate": true}}'
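
For reference, the config string passed to nmcli is ordinary teamd JSON, so any teamd runner or link-watch option can be set the same way. A slightly fuller sketch is below; the link_watch section and the explicit "active" flag are illustrative additions (both reflect teamd defaults), not part of the command above:

```json
{
  "runner": {
    "name": "lacp",
    "fast_rate": true,
    "active": true
  },
  "link_watch": { "name": "ethtool" }
}
```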

To specify a static IPv4 address (with a /24 subnet mask), do the following for team0:

# nmcli connection add type team con-name team1 ifname team0 config '{"runner":{"name":"lacp", "fast_rate": true}}' ip4 192.168.1.20/24 gw4 192.168.1.1

In my particular case, I want to set up a VLAN interface on top of the team interface. Because NM automatically sets up DHCP if you don't specify a static IP, I want to disable IPv4/IPv6 on the team connection:

# nmcli connection modify team1 ipv6.method ignore ipv4.method disabled

Now add the slaves to the team interface. You don't need to specify the connection name for the interfaces, but I don't really like the way NM names the slave connections. By default, it will name them 'team-slave-INTERFACE', but since I will have multiple teamed interfaces on this system, I want to make sure the slaves are explicitly named according to their master. In this case, I am using NICs from a PCI card, named p1p1 and p1p2:

# nmcli connection add type team-slave con-name team0-port1 ifname p1p1 master team0
# nmcli connection add type team-slave con-name team0-port2 ifname p1p2 master team0

At this point, NetworkManager only brought up the first port (p1p1). I had to manually bring up the second port with:

# nmcli connection up team0-port2

At this point, you can use teamdctl to see the current status of the team:

# teamdctl team0 state view
setup:
  runner: lacp
ports:
  p1p1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 1
    runner:
      aggregator ID: 7, Selected
      selected: yes
      state: current
  p1p2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 1
    runner:
      aggregator ID: 7, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: yes

To create a VLAN interface (VLAN ID 50 here) with nmcli on top of the above team with an IPv4 address, do the following:

# nmcli connection add type vlan con-name VLAN50 dev team0 id 50 ip4 192.168.50.22/24

Again, I will disable IPv6 on this VLAN interface because I don't need it:

# nmcli connection modify VLAN50 ipv6.method ignore

I did not set a gateway here, but you could by adding the gw4 option.

Note: for some reason, I had to admin down/up the trunk on the HP switch I was using to get the interface to come up and be seen on the network. YMMV, but this particular issue was probably due to using Broadcom NICs ;)

Modifying ifcfg files

If you hand edit any of the ifcfg-* files in /etc/sysconfig/network-scripts, then you will need to tell NetworkManager that they have been modified and to reload them. There is no need to restart the NetworkManager service, which seems to cause more issues than it solves.
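
For context, the ifcfg file for a VLAN connection like VLAN50 looks roughly like the sketch below. The values are illustrative, based on the addresses used earlier; the exact keys (and the DEVICE name in particular) depend on how the connection was created, and NetworkManager writes additional keys such as UUID:

```ini
# /etc/sysconfig/network-scripts/ifcfg-VLAN50 (illustrative sketch)
VLAN=yes
TYPE=Vlan
PHYSDEV=team0
VLAN_ID=50
NAME=VLAN50
DEVICE=team0.50
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.50.22
PREFIX=24
IPV6INIT=no
```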

To tell NetworkManager to reload the config file for VLAN50 (ifcfg-VLAN50), do the following:

# nmcli connection load /etc/sysconfig/network-scripts/ifcfg-VLAN50

Note that the reloaded settings don't apply to an already-active connection until you re-activate it with nmcli connection up VLAN50.