Configuring Virtual Networks

This section describes how to configure virtual networking for your Enterprise Edition Apcera cluster.

Virtual Networking Deployment Considerations

Apcera Platform supports virtual networks for use by jobs in the cluster.

In Apcera a virtual network is a layer 3 subnet. From an implementation standpoint, Apcera virtual networking employs several technologies, including Open vSwitch (OvS), network packet tunneling, and IP address management (IPAM).

Although virtual networking is available by default, there are a few design considerations to keep in mind when deploying a cluster, specifically:

  • Use of VXLAN for network packet tunneling
  • Use of Local IPAM

Using VXLAN

Apcera uses network tunneling to deliver packets over virtual networks, and requires that UDP port 4789 be open on each Instance Manager host for this purpose. You must also ensure that the firewall or security group rules for your infrastructure provider allow connections on this port.

Virtual Extensible LAN (VXLAN) is the default network packet tunneling technology for Apcera clusters based on release 2.6 or later (new or upgraded). Previous releases of Apcera used Generic Routing Encapsulation (GRE) for networking tunneling. GRE is deprecated and will be removed in the next major Apcera Platform release.

If you upgrade a cluster that is using GRE to release 2.6.0 or later, all virtual networks will be converted to VXLAN mode and the system will restart all jobs in each affected virtual network. If your existing cluster is already using VXLAN mode, jobs in virtual networks will not be affected by the upgrade.

IMPORTANT NOTE: To use VXLAN, you must open UDP port 4789 on each Instance Manager host. Typically you do this by updating the firewall rule for the IM Security Group.

Because VXLAN is now the default, you do not need to include the following in cluster.conf to use VXLAN:

chef: {
  "continuum": {
    "vxlan_enabled": true,
  }
}

If you want to use GRE (not recommended), set "vxlan_enabled" to false. Keep in mind that support for GRE is deprecated and will be removed in the next major Apcera Platform release.

Enabling Local IPAM (BETA)

The Apcera Platform uses IP address management (IPAM) technology to assign virtual network subnets and to allocate IP addresses to job instances in virtual networks. In the default cluster configuration, virtual network subnets and IP addresses are issued and maintained centrally by the Job Manager component.

Starting with Apcera Platform release 2.6.0, you can use decentralized IPAM to assign virtual network subnets and allocate IP addresses to job instances in virtual networks. Decentralized IPAM works the same as centralized IPAM, except that virtual network subnet and IP address allocation are controlled locally by each Instance Manager (IM), not globally by the Job Manager. Local IPAM is responsible only for the job instances on a particular IM, and the IP addresses are issued and maintained by each IM.

From a job developer perspective, the difference between local IPAM and global IPAM can be seen in the CIDR ranges of IP addresses assigned to jobs in a given virtual network. Globally assigned virtual network subnets have the range 192.168.virtual_network_id.0/24. Locally assigned virtual network subnets have the range 10.virtual_network_id.0.0/16.
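The two addressing schemes can be sketched with Python's standard `ipaddress` module. This is only an illustration of the CIDR layouts described above; the helper names and the virtual network ID of 7 are hypothetical, and the actual ID assignment is internal to Apcera:

```python
import ipaddress

def global_subnet(virtual_network_id):
    # Global IPAM (Job Manager): one /24 per virtual network.
    return ipaddress.ip_network(f"192.168.{virtual_network_id}.0/24")

def local_subnet(virtual_network_id):
    # Local IPAM (per Instance Manager): one /16 per virtual network.
    return ipaddress.ip_network(f"10.{virtual_network_id}.0.0/16")

# A hypothetical virtual network with ID 7:
print(global_subnet(7))                 # 192.168.7.0/24
print(local_subnet(7))                  # 10.7.0.0/16
print(global_subnet(7).num_addresses)   # 256
print(local_subnet(7).num_addresses)    # 65536
```

Note that a locally assigned /16 also carries far more addresses per virtual network than the global /24, which is consistent with local IPAM's focus on larger deployments.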

Local IPAM helps distribute the IP address management load across all IMs. And, because address management is local, state can be persisted locally: it needs to survive only a restart of the IM service, not a reboot of the VM that runs the IM. Assigned IPs are locked to ensure the consistency of the local IPAM state. If you are deploying a cluster with a large number of IMs, if your cluster has many virtual networks, or if jobs join and leave virtual networks often, you may want to enable local IPAM as described below.

Note that if you have specified a cluster subnet range using a prefix length shorter than /16 (such as 10.x.x.x/8), Apcera will not reserve the subnet range for virtual networks. If you have enabled local IPAM, this could result in a virtual network being created with an address range that overlaps the cluster subnet, leaving jobs unable to join the virtual network. To avoid this issue, ensure that the cluster subnet uses a prefix length of /16 or longer (for example, 10.x.x.x/16).
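The overlap hazard can also be checked with the `ipaddress` module. This is a sketch of the failure mode only, using the problematic /8 cluster subnet from the text and a hypothetical virtual network ID of 5:

```python
import ipaddress

# A cluster subnet specified with a prefix shorter than /16:
cluster_subnet = ipaddress.ip_network("10.0.0.0/8")

# A locally assigned virtual network subnet (hypothetical ID 5):
vnet_subnet = ipaddress.ip_network("10.5.0.0/16")

# The /8 cluster subnet contains every 10.x.0.0/16 range,
# so the virtual network collides with the cluster.
print(cluster_subnet.overlaps(vnet_subnet))   # True

# With a /16 cluster subnet there is no collision:
safe_cluster = ipaddress.ip_network("10.0.0.0/16")
print(safe_cluster.overlaps(vnet_subnet))     # False
```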

Follow the instructions below to enable local IPAM. Keep in mind that if you enable local IPAM, any newly created virtual network will use local IPAM addresses (10.xxx.xxx.xxx); existing virtual networks will continue to use global IPAM addresses (192.168.xxx.xxx). You can migrate jobs from a globally-assigned virtual network subnet to a locally-assigned subnet by leaving the old virtual network and joining the new network.


To enable local IPAM:

  1. Set "enable_local_ipam" to true in cluster.conf by adding the following block:

     chef: {
       "continuum": {
         "enable_local_ipam": true,
       }
     }
    
  2. Deploy the cluster using Orchestrator version 0.6.0 or later.

    After redeploy, for new virtual networks:

    • The virtual network IPs will be 10.xxx.xxx.xxx instead of 192.168.xxx.xxx.
    • The virtual network subnets will be 10.virtual_network_id.0.0/16 instead of 192.168.virtual_network_id.0/24.