

Creating your first CAPD Cluster

If you've followed the Installation guide, you should have:

  1. Weave GitOps Enterprise installed
  2. A CAPI provider installed (with support for ClusterResourceSets enabled)

Next up we'll add a template and use it to create a cluster.

Directory structure

Let's set up a directory structure to manage our clusters:

mkdir -p clusters/bases \
clusters/management/capi/templates \
clusters/management/capi/bootstrap \
clusters/management/capi/profiles

Now we should have:

.
└── clusters
    ├── bases
    └── management
        └── capi
            ├── bootstrap
            ├── profiles
            └── templates

This assumes that we've configured Flux to reconcile everything in clusters/management into our management cluster.
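If you haven't wired that up yet, a Flux Kustomization along these lines would do it. This is a minimal sketch: the file path is hypothetical, and the flux-system GitRepository name assumes a default flux bootstrap install.

clusters/management/flux-system/clusters.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: clusters
  namespace: flux-system
spec:
  # Reconcile everything under clusters/management from the repo Flux already tracks
  interval: 10m
  path: ./clusters/management
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system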

To keep things organized we've created some subpaths for the different resources:

  • bases for any common resources between clusters like RBAC and policy.
  • templates for GitOpsTemplates
  • bootstrap for ClusterBootstrapConfig, ClusterResourceSet and the ConfigMap they reference
  • profiles for the HelmRepository that serves profiles to newly created clusters

Let's grab some sample resources to create our first cluster!

Add common RBAC to the repo

When a cluster is provisioned, by default it will reconcile all the manifests in ./clusters/<cluster-namespace>/<cluster-name> and ./clusters/bases.

To display Applications and Sources in the UI, we need to give the logged-in user permission to inspect the new cluster.

Adding common RBAC rules to ./clusters/bases/rbac is an easy way to configure this!

clusters/bases/rbac/wego-admin.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: wego-admin-cluster-role-binding
subjects:
  - kind: User
    name: wego-admin
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: wego-admin-cluster-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: wego-admin-cluster-role
rules:
  - apiGroups: [""]
    resources: ["secrets", "pods"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list"]
  - apiGroups: ["kustomize.toolkit.fluxcd.io"]
    resources: ["kustomizations"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["helm.toolkit.fluxcd.io"]
    resources: ["helmreleases"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["source.toolkit.fluxcd.io"]
    resources: ["buckets", "helmcharts", "gitrepositories", "helmrepositories", "ocirepositories"]
    verbs: ["get", "list", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["pac.weave.works"]
    resources: ["policies"]
    verbs: ["get", "list"]

Add a template

See the CAPI Templates page for more details on this topic. Once we load a template, we can use it in the UI to create clusters!

Download the template below to your config repository path, then commit and push to your git origin.

clusters/management/capi/templates/capd-template.yaml
apiVersion: templates.weave.works/v1alpha2
kind: GitOpsTemplate
metadata:
  name: cluster-template-development
  namespace: default
  annotations:
    templates.weave.works/add-common-bases: "true"
    templates.weave.works/inject-prune-annotation: "true"
  labels:
    weave.works/template-type: cluster
spec:
  description: A simple CAPD template
  params:
    - name: CLUSTER_NAME
      required: true
      description: This is used for the cluster naming.
    - name: NAMESPACE
      description: Namespace to create the cluster in
    - name: KUBERNETES_VERSION
      description: Kubernetes version to use for the cluster
      options: ["1.19.11", "1.21.1", "1.22.0", "1.23.3"]
    - name: CONTROL_PLANE_MACHINE_COUNT
      description: Number of control planes
      options: ["1", "2", "3"]
    - name: WORKER_MACHINE_COUNT
      description: Number of worker machines
  resourcetemplates:
    - content:
        - apiVersion: gitops.weave.works/v1alpha1
          kind: GitopsCluster
          metadata:
            name: "${CLUSTER_NAME}"
            namespace: "${NAMESPACE}"
            labels:
              weave.works/capi: bootstrap
          spec:
            capiClusterRef:
              name: "${CLUSTER_NAME}"
        - apiVersion: cluster.x-k8s.io/v1beta1
          kind: Cluster
          metadata:
            name: "${CLUSTER_NAME}"
            namespace: "${NAMESPACE}"
            labels:
              cni: calico
          spec:
            clusterNetwork:
              pods:
                cidrBlocks:
                  - 192.168.0.0/16
              serviceDomain: cluster.local
              services:
                cidrBlocks:
                  - 10.128.0.0/12
            controlPlaneRef:
              apiVersion: controlplane.cluster.x-k8s.io/v1beta1
              kind: KubeadmControlPlane
              name: "${CLUSTER_NAME}-control-plane"
              namespace: "${NAMESPACE}"
            infrastructureRef:
              apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
              kind: DockerCluster
              name: "${CLUSTER_NAME}"
              namespace: "${NAMESPACE}"
        - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: DockerCluster
          metadata:
            name: "${CLUSTER_NAME}"
            namespace: "${NAMESPACE}"
        - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: DockerMachineTemplate
          metadata:
            name: "${CLUSTER_NAME}-control-plane"
            namespace: "${NAMESPACE}"
          spec:
            template:
              spec:
                extraMounts:
                  - containerPath: /var/run/docker.sock
                    hostPath: /var/run/docker.sock
        - apiVersion: controlplane.cluster.x-k8s.io/v1beta1
          kind: KubeadmControlPlane
          metadata:
            name: "${CLUSTER_NAME}-control-plane"
            namespace: "${NAMESPACE}"
          spec:
            kubeadmConfigSpec:
              clusterConfiguration:
                apiServer:
                  certSANs:
                    - localhost
                    - 127.0.0.1
                    - 0.0.0.0
                controllerManager:
                  extraArgs:
                    enable-hostpath-provisioner: "true"
              initConfiguration:
                nodeRegistration:
                  criSocket: /var/run/containerd/containerd.sock
                  kubeletExtraArgs:
                    cgroup-driver: cgroupfs
                    eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
              joinConfiguration:
                nodeRegistration:
                  criSocket: /var/run/containerd/containerd.sock
                  kubeletExtraArgs:
                    cgroup-driver: cgroupfs
                    eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
            machineTemplate:
              infrastructureRef:
                apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
                kind: DockerMachineTemplate
                name: "${CLUSTER_NAME}-control-plane"
                namespace: "${NAMESPACE}"
            replicas: "${CONTROL_PLANE_MACHINE_COUNT}"
            version: "${KUBERNETES_VERSION}"
        - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: DockerMachineTemplate
          metadata:
            name: "${CLUSTER_NAME}-md-0"
            namespace: "${NAMESPACE}"
          spec:
            template:
              spec: {}
        - apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          metadata:
            name: "${CLUSTER_NAME}-md-0"
            namespace: "${NAMESPACE}"
          spec:
            template:
              spec:
                joinConfiguration:
                  nodeRegistration:
                    kubeletExtraArgs:
                      cgroup-driver: cgroupfs
                      eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
        - apiVersion: cluster.x-k8s.io/v1beta1
          kind: MachineDeployment
          metadata:
            name: "${CLUSTER_NAME}-md-0"
            namespace: "${NAMESPACE}"
          spec:
            clusterName: "${CLUSTER_NAME}"
            replicas: "${WORKER_MACHINE_COUNT}"
            selector:
              matchLabels: null
            template:
              spec:
                bootstrap:
                  configRef:
                    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
                    kind: KubeadmConfigTemplate
                    name: "${CLUSTER_NAME}-md-0"
                    namespace: "${NAMESPACE}"
                clusterName: "${CLUSTER_NAME}"
                infrastructureRef:
                  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
                  kind: DockerMachineTemplate
                  name: "${CLUSTER_NAME}-md-0"
                  namespace: "${NAMESPACE}"
                version: "${KUBERNETES_VERSION}"

Automatically install a CNI with ClusterResourceSets

We can use ClusterResourceSets to automatically install a CNI on new clusters; here we use Calico as an example.

Add a CRS to install a CNI

Create a Calico ConfigMap and a ClusterResourceSet as follows:

clusters/management/capi/bootstrap/calico-crs.yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: calico
  resources:
    - kind: ConfigMap
      name: calico-crs-configmap

The full calico-crs-configmap.yaml is a bit large to display inline here, but make sure to download it to clusters/management/capi/bootstrap/calico-crs-configmap.yaml as well.
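If you'd rather generate the ConfigMap yourself, the usual ClusterResourceSet pattern is to wrap an upstream Calico manifest in a ConfigMap. A sketch under assumptions: the pinned Calico version below is an example, so pick one compatible with your Kubernetes versions.

# Fetch an upstream Calico manifest and wrap it in a ConfigMap for the CRS to apply
curl -Lo calico.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
kubectl create configmap calico-crs-configmap \
  --namespace default \
  --from-file=calico.yaml \
  --dry-run=client -o yaml \
  > clusters/management/capi/bootstrap/calico-crs-configmap.yaml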

Profiles and clusters

WGE can automatically install profiles onto new clusters.

Add a helmrepo

Download the profile repository definition below to your config repository path, then commit and push. Make sure to update the url to point to a Helm repository containing your profiles.

clusters/management/capi/profiles/profile-repo.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: weaveworks-charts
  namespace: flux-system
spec:
  interval: 1m
  url: https://weaveworks.github.io/weave-gitops-profile-examples/
status: {}
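Once pushed, you can confirm that Flux can reach and index the repository (this assumes the flux CLI is installed and pointed at the management cluster):

flux get sources helm -n flux-system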

For more information about profiles, see profiles from private helm repositories, policy profiles, and eso secrets profiles.

Add a cluster bootstrap config

Create a cluster bootstrap config as follows. First, add a secret containing a GitHub token that the bootstrap job will use to push to your repository:

 kubectl create secret generic my-pat --from-literal GITHUB_TOKEN=$GITHUB_TOKEN

Then download the config below and update the --owner and --repository arguments to point at your cluster's git repository.

clusters/management/capi/bootstrap/capi-gitops-cluster-bootstrap-config.yaml
apiVersion: capi.weave.works/v1alpha1
kind: ClusterBootstrapConfig
metadata:
  name: capi-gitops
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      weave.works/capi: bootstrap
  jobTemplate:
    generateName: "run-gitops-{{ .ObjectMeta.Name }}"
    spec:
      containers:
        - image: ghcr.io/fluxcd/flux-cli:v0.29.5
          name: flux-bootstrap
          resources: {}
          volumeMounts:
            - name: kubeconfig
              mountPath: "/etc/gitops"
              readOnly: true
          args:
            [
              "bootstrap",
              "github",
              "--kubeconfig=/etc/gitops/value",
              "--owner=$GITHUB_USER",
              "--repository=fleet-infra",
              "--path=./clusters/{{ .ObjectMeta.Namespace }}/{{ .ObjectMeta.Name }}",
            ]
          envFrom:
            - secretRef:
                name: my-pat
      restartPolicy: Never
      volumes:
        - name: kubeconfig
          secret:
            secretName: "{{ .ObjectMeta.Name }}-kubeconfig"
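When a matching GitopsCluster becomes ready, WGE creates a Job from this template, named with a run-gitops- prefix per the generateName field above plus a generated suffix. A hedged way to watch the bootstrap happen:

kubectl get jobs -n default
# Hypothetical name below; copy the real one from the previous command
kubectl logs -n default job/run-gitops-<generated-suffix> -f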

Add Monitoring Dashboards to your cluster

To add dashboards to your cluster, use metadata annotations following the pattern below.

apiVersion: gitops.weave.works/v1alpha1
kind: GitopsCluster
metadata:
  annotations:
    metadata.weave.works/dashboard.grafana: https://grafana.com/
    metadata.weave.works/dashboard.prometheus: https://prometheus.io/

Specifying CAPI cluster kinds

To explicitly specify the type of a cluster, use metadata annotations with weave.works/cluster-kind as the annotation key, following the pattern below:

apiVersion: gitops.weave.works/v1alpha1
kind: GitopsCluster
metadata:
  annotations:
    weave.works/cluster-kind: <CLUSTER_KIND>

where CLUSTER_KIND can be one of the following supported kinds:

  • DockerCluster
  • AWSCluster
  • AWSManagedCluster
  • AzureCluster
  • AzureManagedCluster
  • GCPCluster
  • MicrovmCluster
  • Rancher
  • Openshift
  • Tanzu
  • OtherOnprem

Test

You should now be able to create a new cluster from your template and install profiles onto it with a single Pull Request via the WGE UI!
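If you want to follow progress from the CLI as well, you can inspect the CAPI resources once the pull request is merged and reconciled (clusterctl assumed installed; substitute the name and namespace you chose in the template):

kubectl get clusters -n <NAMESPACE>
clusterctl describe cluster <CLUSTER_NAME> -n <NAMESPACE>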