Introducing Apcera Virtual Networks

This documentation provides a conceptual understanding of Apcera virtual networking, including an overview of virtual networks, configurable subnet pools, legacy virtual networks, local IPAM, and network coexistence.

Overview of Virtual Networks

Within an Apcera cluster you can define one or more virtual networks, each representing a separate layer 2 network with its own unique virtual network ID (VNID). A job that belongs to a virtual network is allocated a virtual IP address from the network's subnet pool and automatically gets network connectivity to all other jobs on the same network. All traffic between jobs on the same virtual network (TCP and UDP, including HTTP) is tunneled over the underlying network and isolated from jobs not on that network.

For example, the following illustration shows two virtual networks named vnet-1 and vnet-2. Jobs J1 and J2 (running on Host-1) and J5 and J6 (running on Host-2) belong to vnet-1 on Ethernet A (ethA), while jobs J3, J4 and J7 belong to vnet-2 on Ethernet B (ethB). The size of the subnet assigned to each virtual network is /12 (in CIDR notation). Virtual IP addresses are assigned to networked jobs by the Instance Manager (IM) component local to the job instances.
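The allocation described above can be illustrated with Python's standard `ipaddress` module. This is a conceptual sketch only, not Apcera code; the address range is the system default mentioned later, and the job variables are hypothetical:

```python
# Illustrative sketch: how a /12 subnet pool yields per-job virtual IPs.
import ipaddress

vnet = ipaddress.ip_network("10.224.0.0/12")
hosts = vnet.hosts()  # generator over assignable host addresses

# Conceptually, each Instance Manager hands the next free address
# to a new job instance on the network (j1, j2 are hypothetical jobs).
j1 = next(hosts)
j2 = next(hosts)
print(j1, j2)               # 10.224.0.1 10.224.0.2
print(vnet.num_addresses)   # 1048576 addresses in a /12
```

A /12 subnet leaves 20 host bits, so each virtual network can address over a million job instances.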

[Figure: jobs J1–J7 across Host-1 and Host-2, attached to virtual networks vnet-1 (ethA) and vnet-2 (ethB)]

Configurable Virtual Networks and Subnet Pools

With the release of Apcera Platform version 3.0, cluster administrators can configure virtual networks using subnet pools, which let you define custom network IP address ranges and dramatically scale the number of networks in your cluster.

As illustrated above, in release 3.0 virtual network isolation is done at the data link layer (layer 2 of the OSI model), so each virtual network gets its own Ethernet network. Through the use of subnet pools, the system assigns a unique virtual network ID (VNID) to each virtual network in the cluster.

With subnet pools, every virtual network in the same subnet pool gets the same address range (the system default is 10.224.0.0/12). The number of such virtual networks is bounded by the VXLAN ID, which is 24 bits wide, so you can have 2^24 different network IDs. Networks with the same address range do not conflict because each is on a separate Ethernet.

If the default subnet pool IP address (10.224.0.0/12) conflicts with an underlying network, you can define subnet pools with custom IP address ranges. You only need to create subnet pool(s) if the default does not work for you.

Currently the only type of subnet pool you can create is private, meaning a job can be a member of only a single subnet pool (and therefore can join only one virtual network).

With the advent of subnet pools, the maximum number of Instance Managers that you can run in a single cluster is 4095 (increased from 256).
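The limits quoted above follow directly from the widths of the underlying ID fields. A quick sanity check (the 12-bit interpretation of the Instance Manager limit is an inference from the number 4095, not official wording):

```python
# Sanity-checking the limits quoted above.

# A VXLAN network identifier (VNID) is 24 bits wide:
max_vnids = 2 ** 24
print(max_vnids)  # 16777216 possible virtual network IDs

# The Instance Manager limit of 4095 matches a 12-bit ID space
# minus one reserved value (an inference, not documented wording):
max_ims = 2 ** 12 - 1
print(max_ims)    # 4095
```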

Legacy Virtual Networks

The first generation of Apcera virtual networks (release 2.6 and earlier) uses layer 3 IP subnet isolation and global IPAM.

With legacy layer 3 networks, all virtual networks in the cluster share the same Ethernet configuration but are given different /24 subnets within the 192.168.0.0/16 range. As a result, you can create at most 254 such networks.
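The /24-within-/16 carving can be checked with the `ipaddress` module. A /16 holds 256 /24 subnets; the documented limit of 254 suggests that two of them are reserved (an inference, not stated in the original):

```python
# Counting the /24 subnets available inside the legacy 192.168.0.0/16 range.
import ipaddress

supernet = ipaddress.ip_network("192.168.0.0/16")
subnets = list(supernet.subnets(new_prefix=24))

print(len(subnets))   # 256 /24 subnets fit in a /16
print(subnets[1])     # 192.168.1.0/24 -> "vnet-1" in the diagram below
```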

The following diagram illustrates layer 3-based legacy virtual networks, where members of the "vnet-1" network are allocated IP addresses in the range 192.168.1.1–192.168.1.254, and members of the "vnet-2" network are allocated IPs in the range 192.168.2.1–192.168.2.254.

[Figure: legacy layer 3 virtual networks sharing a single Ethernet, each allocated its own /24 subnet]

Local IPAM

Apcera uses IP address management (IPAM) to allocate IP addresses to job instances in virtual networks.

Release 3.0 provides local IPAM whereby each Instance Manager (IM) controls IP address allocation for only the containers it hosts, and the virtual network IP addresses are issued and maintained locally by each IM. Contrast local IPAM with global IPAM where the Job Manager controls IP address allocation for all jobs across all virtual networks.

Local IPAM is required if you want to use subnet pools. For new 3.0 clusters installed using Apcera Installer, local IPAM is the default. If you upgrade to release 3.0 and you want to use subnet pools, you need to enable local IPAM on your cluster.

If you previously enabled local IPAM as a beta feature of release 2.6, any networks created under that beta cannot be used or deleted after you upgrade to 3.0. You must remove those networks before you upgrade to release 3.0.

Network Coexistence

New networks can co-exist with legacy networks in the same cluster.

For example:

apc network list
Working in "/sandbox/admin"
╭────────────────┬─────────────────────┬────────────────╮
│ Network Name   │ Namespace           │ Subnet         │
├────────────────┼─────────────────────┼────────────────┤
│ nats-net       │ /sandbox/admin/nets │ 192.168.2.0/24 │
│ ping-net       │ /sandbox/admin/nets │ 192.168.3.0/24 │
│ sparknet       │ /sandbox/admin/nets │ 192.168.4.0/24 │
│ my-network     │ /sandbox/admin      │ 10.224.0.0/12  │
│ my-new-network │ /sandbox/admin      │ 10.192.0.0/12  │
╰────────────────┴─────────────────────┴────────────────╯

All legacy networks are on the same L2 Ethernet with VNID = 0; each new network is a separate L2 Ethernet with VNID > 0.

You use cluster.conf to toggle between the two types of IPAM (global or local), and then redeploy the cluster.

Jobs can join and leave both types of networks in the cluster. However, you can create new networks only of the type currently enabled in cluster.conf.
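For example, creating a network and joining a job to it might look like the following apc session. The network and job names are hypothetical, and flag syntax may differ by release; run `apc help network` to confirm the exact commands for your installation:

```
apc network create my-new-network
apc network join my-new-network --job my-app
apc network leave my-new-network --job my-app
```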

See Migrating to New Networks if you are upgrading to release 3.0 and you want to use new networks.