
IPv4/IPv6 dual-stack

FEATURE STATE: Kubernetes v1.16 alpha
This feature is currently in an alpha state, meaning:

  • The version names contain alpha (e.g. v1alpha1).
  • Might be buggy. Enabling the feature may expose bugs. Disabled by default.
  • Support for feature may be dropped at any time without notice.
  • The API may change in incompatible ways in a later software release without notice.
  • Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.

IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to Pods and Services.

If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses.

Supported Features

Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:

  • Dual-stack Pod networking (a single IPv4 and a single IPv6 address assigned to each Pod)
  • IPv4 and IPv6 enabled Services (each Service uses a single address family)
  • Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces

Prerequisites

The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:

  • Kubernetes 1.16 or later
  • Provider support for dual-stack networking (your cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)
  • A network plugin that supports dual-stack networking

Enable IPv4/IPv6 dual-stack

To enable IPv4/IPv6 dual-stack, enable the IPv6DualStack feature gate for the relevant components of your cluster, and set dual-stack cluster network assignments:

Caution: If you specify an IPv6 address block larger than a /24 via --cluster-cidr on the command line, that assignment will fail.
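As a sketch, the flags involved might look like the following; the CIDR values (10.244.0.0/16, fc00::/24, fd00::/108) are illustrative placeholders, and the IPv6 cluster CIDR respects the /24 limit from the caution above:

```
kube-apiserver:
    --feature-gates="IPv6DualStack=true"
kube-controller-manager:
    --feature-gates="IPv6DualStack=true"
    --cluster-cidr=10.244.0.0/16,fc00::/24
    --service-cluster-ip-range=10.0.0.0/16,fd00::/108
kubelet:
    --feature-gates="IPv6DualStack=true"
kube-proxy:
    --feature-gates="IPv6DualStack=true"
    --cluster-cidr=10.244.0.0/16,fc00::/24
```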

Services

If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create Services with either an IPv4 or an IPv6 address. You can choose the address family for the Service’s cluster IP by setting the .spec.ipFamily field on that Service. You can only set this field when creating a new Service. Setting the .spec.ipFamily field is optional and should only be used if you plan to enable IPv4 and IPv6 Services and Ingresses on your cluster. The configuration of this field is not a requirement for egress traffic.

Note: The default address family for your cluster is the address family of the first service cluster IP range configured via the --service-cluster-ip-range flag to the kube-controller-manager.

You can set .spec.ipFamily to either IPv4 or IPv6.

The following Service specification does not include the ipFamily field. Kubernetes will assign an IP address (also known as a “cluster IP”) from the first configured service-cluster-ip-range to this Service.

service/networking/dual-stack-default-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

The following Service specification includes the ipFamily field. Kubernetes will assign an IPv6 address (also known as a “cluster IP”) from the configured service-cluster-ip-range to this Service.

service/networking/dual-stack-ipv6-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamily: IPv6
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

For comparison, the following Service specification will be assigned an IPv4 address (also known as a “cluster IP”) from the configured service-cluster-ip-range.

service/networking/dual-stack-ipv4-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamily: IPv4
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
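To make the allocation rule concrete, the following Python sketch (not Kubernetes code) mimics how the cluster IP’s address family follows .spec.ipFamily, falling back to the family of the first configured service-cluster-ip-range when the field is unset; the CIDRs are illustrative placeholders:

```python
import ipaddress

# Illustrative dual-stack service CIDRs, as if passed via
# --service-cluster-ip-range=10.0.0.0/16,fd00::/108 (IPv4 listed first).
SERVICE_CLUSTER_IP_RANGES = ["10.0.0.0/16", "fd00::/108"]

def pick_service_range(ip_family=None):
    """Return the service CIDR a cluster IP would be drawn from.

    If ip_family ("IPv4" or "IPv6") is unset, the first configured
    range wins -- that range's family is the cluster default.
    """
    ranges = [ipaddress.ip_network(c) for c in SERVICE_CLUSTER_IP_RANGES]
    if ip_family is None:
        return ranges[0]                      # default: first range
    want = 4 if ip_family == "IPv4" else 6
    for net in ranges:
        if net.version == want:
            return net
    raise ValueError(f"no configured range for {ip_family}")

print(pick_service_range())            # no ipFamily -> 10.0.0.0/16
print(pick_service_range("IPv6"))      # ipFamily: IPv6 -> fd00::/108
```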

Type LoadBalancer

On cloud providers which support IPv6 enabled external load balancers, setting the type field to LoadBalancer in addition to setting the ipFamily field to IPv6 provisions a cloud load balancer for your Service.
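As a sketch, assuming your cloud provider supports IPv6 external load balancers, such a Service specification might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamily: IPv6
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```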

Egress Traffic

The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying CNI provider is able to implement the transport. If you have a Pod that uses a non-publicly routable IPv6 address and want that Pod to reach off-cluster destinations (eg. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The ip-masq-agent is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters.
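As a quick illustration of the routable/non-routable distinction, non-publicly routable IPv6 addresses are typically unique local addresses (ULAs) from fc00::/7; Python’s standard ipaddress module can classify them. This sketch is independent of Kubernetes and only demonstrates the address check:

```python
import ipaddress

def needs_masquerade(pod_ip: str) -> bool:
    """Return True if egress from this Pod IP would need masquerading,
    i.e. the address is not globally routable (e.g. an fc00::/7 ULA)."""
    return not ipaddress.ip_address(pod_ip).is_global

print(needs_masquerade("fd00::10"))              # ULA -> True
print(needs_masquerade("2001:4860:4860::8888"))  # global address -> False
```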

Known Issues

What's next