Spinning up the POC Autopilot cluster in the Cloud Console

All the steps below can be executed in the Cloud Console or in the Cloud Shell / a local gcloud CLI.
Region choice is also limited by the Kubernetes version and Gateway API availability in each region.
We will be using the following variables:
export PROJECT=mcspocbdcc-ef6a
export REGION=europe-west1
gcloud config set project ${PROJECT}

1. Activate APIs

gcloud services enable \
multiclusterservicediscovery.googleapis.com \
multiclusteringress.googleapis.com \
gkehub.googleapis.com \
cloudresourcemanager.googleapis.com \
trafficdirector.googleapis.com \
dns.googleapis.com \
--project=mcspocbdcc-ef6a

Enable MCS (multi-cluster Services):
gcloud container hub multi-cluster-services enable \
--project mcspocbdcc-ef6a
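Optionally, confirm the APIs are now active before moving on (a simple grep over the enabled services is enough):
# The multi-cluster and hub APIs should show up in the enabled list
gcloud services list --enabled --project=${PROJECT} | grep -E 'multicluster|gkehub|trafficdirector|dns'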

2. Create a VPC network

Connectivity between clusters depends on clusters running within the same VPC network or in peered VPC networks.
gcloud compute networks create mcs-npr-net --project=mcspocbdcc-ef6a --description=A\ multi\ cluster\ VPC\ network --subnet-mode=custom --mtu=1460 --bgp-routing-mode=global
Go to the VPC networks page in the Cloud Console.
Create a VPC network with the name mcs-npr-net. It uses the global dynamic routing mode, where the regional routing is done within the subnets.
Subnet creation mode: Custom, because unfortunately we will not be able to extend the automatically created subnets with new ones later.
Enable Cloud DNS API
gcloud compute networks subnets create mcs-npr-use1-net --project=mcspocbdcc-ef6a --range=10.94.0.0/20 --network=mcs-npr-net --region=us-east1 --enable-private-ip-google-access
gcloud compute networks subnets create mcs-npr-euw1-net --project=mcspocbdcc-ef6a --range=10.96.0.0/20 --network=mcs-npr-net --region=europe-west1 --enable-private-ip-google-access
gcloud compute networks subnets create mcs-npr-aus1-net --project=mcspocbdcc-ef6a --range=10.98.0.0/20 --network=mcs-npr-net --region=australia-southeast1 --enable-private-ip-google-access
Here we create 3 subnetworks for 3 clusters, located in 3 different regions.
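To double-check that the three subnets landed in the right regions with the expected ranges, an optional listing (filtered on the network name):
# Only the subnets attached to the multi-cluster VPC
gcloud compute networks subnets list \
--project=${PROJECT} \
--filter="network:mcs-npr-net" \
--format="table(name,region.basename(),ipCidrRange)"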

3. Create Cloud Routers and Cloud NAT gateways

gcloud compute routers create mcs-euw1-npr-router \
--network mcs-npr-net \
--region=europe-west1 \
--project $PROJECT

gcloud compute routers nats create mcs-euw1-npr-gw \
--project $PROJECT \
--region=europe-west1 \
--router mcs-euw1-npr-router \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips

---
gcloud compute routers create mcs-use1-npr-router \
--network mcs-npr-net \
--region=us-east1 \
--project $PROJECT

gcloud compute routers nats create mcs-use1-npr-gw \
--project $PROJECT \
--region=us-east1 \
--router mcs-use1-npr-router \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips

---
gcloud compute routers create mcs-aus1-npr-router \
--network mcs-npr-net \
--region=australia-southeast1 \
--project $PROJECT

gcloud compute routers nats create mcs-aus1-npr-gw \
--project $PROJECT \
--region=australia-southeast1 \
--router mcs-aus1-npr-router \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips

Create the Cloud NAT gateway with the name mcs-euw1-npr-gw, which refers to the europe-west1 region. For each region, we have to create a new gateway.
Select the previously created VPC network.
Create a new Cloud Router to exchange the routes with the VPC network: mcs-euw1-npr-router.
Keep the automatic IP allocation ON. If you prefer manual allocation, add one or more NAT IPs (in multiples of 2) to be used by the NAT gateway; these have to be reserved static IP addresses. In automatic mode we don't have full control over the IPs: they can change over time, which makes them impossible to maintain in a firewall. The main drawback of manual allocation is that we have to monitor that enough public IPs stay associated with the service.
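If you want to verify one of the NAT gateways before moving on, a describe call per region is enough (shown here for europe-west1; adjust the names for the other regions):
# Inspect the NAT gateway attached to the europe-west1 Cloud Router
gcloud compute routers nats describe mcs-euw1-npr-gw \
--router=mcs-euw1-npr-router \
--region=europe-west1 \
--project=${PROJECT}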

4. GCP cluster creation

As of now (before 2022Q2), a GKE Autopilot cluster cannot be used as a config cluster in the Hub. So, the cluster that hosts the Gateway must be GKE Standard.
Ensure that the CIDR and other ranges are not already taken by other clusters.
gcloud beta container --project "mcspocbdcc-ef6a" clusters create "mcs-euw1-npr-gke" \
--region "europe-west1" \
--no-enable-basic-auth \
--cluster-version "1.23.3-gke.1100" \
--release-channel "rapid" \
--machine-type "e2-standard-4" \
--image-type "COS_CONTAINERD" \
--disk-type "pd-standard" \
--disk-size "100" \
--metadata disable-legacy-endpoints=true \
--service-account "project-service-account@mcspocbdcc-ef6a.iam.gserviceaccount.com" \
--max-pods-per-node "110" \
--preemptible \
--num-nodes "3" \
--logging=SYSTEM,WORKLOAD \
--monitoring=SYSTEM \
--enable-private-nodes \
--master-ipv4-cidr "172.18.30.0/28" \
--enable-ip-alias \
--network "projects/mcspocbdcc-ef6a/global/networks/mcs-npr-net" \
--subnetwork "projects/mcspocbdcc-ef6a/regions/europe-west1/subnetworks/mcs-npr-euw1-net" \
--enable-intra-node-visibility \
--default-max-pods-per-node "110" \
--enable-autoscaling \
--min-nodes "0" \
--max-nodes "3" \
--enable-dataplane-v2 \
--no-enable-master-authorized-networks \
--addons HorizontalPodAutoscaling,HttpLoadBalancing,NodeLocalDNS,ApplicationManager,GcePersistentDiskCsiDriver,ConfigConnector,BackupRestore,GceFilestoreCsiDriver \
--enable-autoupgrade \
--enable-autorepair \
--max-surge-upgrade 1 \
--max-unavailable-upgrade 0 \
--enable-autoprovisioning \
--min-cpu 1 \
--max-cpu 8 \
--min-memory 1 \
--max-memory 16 \
--autoprovisioning-service-account=project-service-account@mcspocbdcc-ef6a.iam.gserviceaccount.com \
--enable-autoprovisioning-autorepair \
--enable-autoprovisioning-autoupgrade \
--autoprovisioning-max-surge-upgrade 1 \
--autoprovisioning-max-unavailable-upgrade 0 \
--enable-vertical-pod-autoscaling \
--workload-pool "mcspocbdcc-ef6a.svc.id.goog" \
--enable-shielded-nodes \
--enable-l4-ilb-subsetting \
--enable-image-streaming
#This is the Autopilot cluster config, which doesn't currently work as the Anthos config cluster in the Hub
#gcloud container clusters create-auto mcs-euw1-npr-gke-auto \
# --project $PROJECT \
# --region=europe-west1 \
# --release-channel "regular" \
# --enable-private-nodes \
# --master-ipv4-cidr "172.18.30.0/28" \
# --network mcs-npr-net \
# --subnetwork mcs-npr-euw1-net \
# --cluster-ipv4-cidr "/17" \
# --services-ipv4-cidr "/22"

---
gcloud container clusters create-auto mcs-aus1-npr-gke-auto \
--project $PROJECT \
--region=australia-southeast1 \
--release-channel "regular" \
--enable-private-nodes \
--master-ipv4-cidr "172.18.10.0/28" \
--network mcs-npr-net \
--subnetwork mcs-npr-aus1-net \
--cluster-ipv4-cidr "/17" \
--services-ipv4-cidr "/22"

---
gcloud container clusters create-auto mcs-use1-npr-gke-auto \
--project $PROJECT \
--region=us-east1 \
--release-channel "regular" \
--enable-private-nodes \
--master-ipv4-cidr "172.18.20.0/28" \
--network mcs-npr-net \
--subnetwork mcs-npr-use1-net \
--cluster-ipv4-cidr "/17" \
--services-ipv4-cidr "/22"

Go to Kubernetes Engine → Clusters.
For GKE Autopilot clusters, VPC-native traffic routing is enabled by default and cannot be overridden, so --enable-ip-alias is not needed, and neither is --workload-pool=mcspocbdcc-ef6a.svc.id.goog.
Enable the Kubernetes Engine API.
Choose “Create a cluster”.
Configure the GKE Autopilot cluster:
name: mcs-euw1-npr-gke
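After the creation commands return, you can list the clusters with their location and status (an optional check):
# All clusters in the project with location, status and master version
gcloud container clusters list \
--project=${PROJECT} \
--format="table(name,location,status,currentMasterVersion)"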

5. Network settings

Since we are in a standalone project, we are free to use any CIDRs, including the default ones.
Keep “Access control plane using its external IP address“ on.
Set “Control plane IP range” to 172.18.0.1/28 (the default), and use a different range for each region! The CIDR block size must be /28.
ONLY enable “Enable control plane authorised networks“ if you know the authorised network range.
Set the name of the authorised network to mcs-euw1-npr-net. Set the CIDR of the authorised network to your IP range. The CIDR block size must be > /15.
Network here means the VPC network that you created in step 2. Use mcs-npr-net for the network and mcs-npr-euw1-net for the node subnet.
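If you prefer to script these console settings instead of clicking through them, they map roughly to the flags below. This is only a sketch: the CIDRs are placeholders you must replace, and flag availability for create-auto can depend on your gcloud version.
# Rough CLI equivalent of the console settings above (placeholder CIDRs!)
gcloud container clusters create-auto mcs-euw1-npr-gke-auto \
--project=${PROJECT} \
--region=europe-west1 \
--enable-private-nodes \
--master-ipv4-cidr "172.18.40.0/28" \
--enable-master-authorized-networks \
--master-authorized-networks "203.0.113.0/24" \
--network mcs-npr-net \
--subnetwork mcs-npr-euw1-net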

6. Cluster is ready

That is it, the clusters are up and running. Mind the region!
gcloud container clusters get-credentials mcs-euw1-npr-gke \
--project $PROJECT \
--region $REGION
kubectl cluster-info

kubectl get nodes

Fetch the credentials for the clusters:
gcloud container clusters get-credentials mcs-euw1-npr-gke --region=europe-west1 --project=mcspocbdcc-ef6a

#gcloud container clusters get-credentials mcs-euw1-npr-gke-auto --region=europe-west1 --project=mcspocbdcc-ef6a

gcloud container clusters get-credentials mcs-aus1-npr-gke-auto --region=australia-southeast1 --project=mcspocbdcc-ef6a

gcloud container clusters get-credentials mcs-use1-npr-gke-auto --region=us-east1 --project=mcspocbdcc-ef6a

Rename the cluster contexts, so they are easier to reference later:
kubectl config rename-context gke_mcspocbdcc-ef6a_europe-west1_mcs-euw1-npr-gke mcs-euw1-npr-gke

#kubectl config rename-context gke_mcspocbdcc-ef6a_europe-west1_mcs-euw1-npr-gke-auto mcs-euw1-npr-gke-auto

kubectl config rename-context gke_mcspocbdcc-ef6a_us-east1_mcs-use1-npr-gke-auto mcs-use1-npr-gke-auto

kubectl config rename-context gke_mcspocbdcc-ef6a_australia-southeast1_mcs-aus1-npr-gke-auto mcs-aus1-npr-gke-auto
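A quick way to confirm the renames took effect and that the clusters are reachable under the new context names:
# The short context names should now be listed
kubectl config get-contexts
# Spot-check one cluster through its renamed context
kubectl --context mcs-use1-npr-gke-auto get nodes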

7. Register with Hub

After all three clusters have successfully been created, you will need to register these clusters with the GKE Hub. This will map each cluster to the project's fleet, which is the resource that encompasses the GKE clusters targeted by a multi-cluster Gateway.
gcloud container hub memberships register mcs-euw1-npr-gke \
--gke-cluster europe-west1/mcs-euw1-npr-gke \
--enable-workload-identity \
--project=mcspocbdcc-ef6a

#gcloud container hub memberships register mcs-euw1-npr-gke-auto \
# --gke-cluster europe-west1/mcs-euw1-npr-gke-auto \
# --enable-workload-identity \
# --project=mcspocbdcc-ef6a

gcloud container hub memberships register mcs-use1-npr-gke-auto \
--gke-cluster us-east1/mcs-use1-npr-gke-auto \
--enable-workload-identity \
--project=mcspocbdcc-ef6a

gcloud container hub memberships register mcs-aus1-npr-gke-auto \
--gke-cluster australia-southeast1/mcs-aus1-npr-gke-auto \
--enable-workload-identity \
--project=mcspocbdcc-ef6a

The output might be the following:
Waiting for membership to be created...done.
Created a new membership [projects/mcspocbdcc-ef6a/locations/global/memberships/mcs-aus1-npr-gke-auto] for the cluster [mcs-aus1-npr-gke-auto]
Generating the Connect Agent manifest...
Deploying the Connect Agent on cluster [mcs-aus1-npr-gke-auto] in namespace [gke-connect]...
Deployed the Connect Agent on cluster [mcs-aus1-npr-gke-auto] in namespace [gke-connect].
Finished registering the cluster [mcs-aus1-npr-gke-auto] with the Fleet.

Confirm that the clusters have successfully registered with the GKE Hub:
gcloud container hub memberships list --project=mcspocbdcc-ef6a
Enable multi-cluster Services in your fleet for the registered clusters. This enables the MCS controller for the three clusters that are registered to Hub so that it can start listening to and exporting Services.
gcloud container hub multi-cluster-services enable \
--project mcspocbdcc-ef6a

Grant the Identity and Access Management (IAM) permissions required for MCS:
You must be a project owner!
gcloud projects add-iam-policy-binding mcspocbdcc-ef6a \
--member "serviceAccount:mcspocbdcc-ef6a.svc.id.goog[gke-mcs/gke-mcs-importer]" \
--role "roles/compute.networkViewer" \
--project=mcspocbdcc-ef6a
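To confirm the binding landed (optional; you need permission to read the project IAM policy):
# Show who holds roles/compute.networkViewer on the project
gcloud projects get-iam-policy mcspocbdcc-ef6a \
--flatten="bindings[].members" \
--filter="bindings.role:roles/compute.networkViewer" \
--format="table(bindings.members)"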

Confirm that MCS is enabled for the registered clusters.
gcloud container hub multi-cluster-services describe --project=mcspocbdcc-ef6a
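MCS also runs an importer workload in each registered cluster; assuming the default gke-mcs namespace (the same one referenced by the IAM binding above), you can peek at it per cluster:
# The gke-mcs-importer workload should appear in every registered cluster
kubectl --context mcs-use1-npr-gke-auto get deployments -n gke-mcs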

8. Install Gateway API CRDs

Before using Gateway resources in GKE, you must install the Gateway API CRDs in EVERY cluster (context).

Gateway problem formulation:

Ingress is used when configuring an L7 LB, but it has the following challenges:
1. When each microservice is given its own namespace so that it is isolated from other services, it is not possible to have one Ingress (an L7 LB and its VIP) act as the single representative.
2. Only limited LB functionality is supported. For example, header-based routing of the External / Internal HTTP(S) LB cannot be configured via Ingress (at least for now).
3. One resource, Ingress, covers protocols, IP addresses, port numbers, TLS certificates, and URL path routing, which makes it difficult for people without infrastructure knowledge to operate.
The Gateway API consists of multiple resources such as:
GatewayClass
Gateway
HTTPRoute
TCPRoute
TLSRoute
UDPRoute
So, check which context you are currently in:
kubectl config current-context
To change the context, use:
kubectl config use-context mcs-euw1-npr-gke

#kubectl config use-context mcs-euw1-npr-gke-auto
---
#apply the CRDs by the command below
---
kubectl config use-context mcs-aus1-npr-gke-auto
---
#apply the CRDs by the command below
---
kubectl config use-context mcs-use1-npr-gke-auto
---
#apply the CRDs by the command below

To apply the CRDs of Gateway APIs v1alpha1, use:
kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.3.0" \
| kubectl apply -f -
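To confirm the CRDs were applied in the current context (the v1alpha1 resources installed above should belong to the networking.x-k8s.io API group):
# Gateway API CRDs installed by the kustomize apply above
kubectl get crds | grep -i gateway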

Enable the multi-cluster GKE Gateway controller and specify your config cluster. Note that you can always update the config cluster at a later time. This example specifies mcs-euw1-npr-gke as the config cluster that will host the resources for multi-cluster Gateways.
gcloud alpha container hub ingress enable \
--config-membership=/projects/mcspocbdcc-ef6a/locations/global/memberships/mcs-euw1-npr-gke \
--project=mcspocbdcc-ef6a

#gcloud alpha container hub ingress enable \
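Once the controller is enabled, the GKE multi-cluster GatewayClasses should become visible in the config cluster. A quick check (the exact class names, e.g. gke-l7-gxlb-mc, can differ between GKE versions):
# GatewayClasses available in the config cluster after enabling the controller
kubectl --context mcs-euw1-npr-gke get gatewayclasses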