Before we can test out our conversion, we’ll need to enable it in our CRD:
Kubebuilder generates Kubernetes manifests under the `config` directory with the webhook bits disabled. To enable them, we need to:

- Enable `patches/webhook_in_<kind>.yaml` and `patches/cainjection_in_<kind>.yaml` in the `config/crd/kustomization.yaml` file.
- Enable the `../certmanager` and `../webhook` directories under the `bases` section in the `config/default/kustomization.yaml` file.
- Enable `manager_webhook_patch.yaml` and `webhookcainjection_patch.yaml` under the `patches` section in the `config/default/kustomization.yaml` file.
- Enable all the vars under the `CERTMANAGER` section in the `config/default/kustomization.yaml` file.
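For reference, here’s a sketch of what the relevant entries look like once uncommented. The file and field names here are assumptions based on this tutorial’s group and kind; they vary with your project and the Kubebuilder/kustomize versions in use:

```yaml
# config/crd/kustomization.yaml (sketch)
resources:
- bases/batch.tutorial.kubebuilder.io_cronjobs.yaml
patchesStrategicMerge:
- patches/webhook_in_cronjobs.yaml     # [WEBHOOK] enable the conversion webhook patch
- patches/cainjection_in_cronjobs.yaml # [CERTMANAGER] have cert-manager inject the CA bundle

# config/default/kustomization.yaml (sketch)
bases:
- ../crd
- ../rbac
- ../manager
- ../webhook     # [WEBHOOK]
- ../certmanager # [CERTMANAGER]
patchesStrategicMerge:
- manager_webhook_patch.yaml    # [WEBHOOK]
- webhookcainjection_patch.yaml # [CERTMANAGER]
```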
Additionally, if it’s present in our Makefile, we’ll need to set the `CRD_OPTIONS` variable to just `"crd"`, removing the `trivialVersions` option (this ensures that we actually generate validation for each version, instead of telling Kubernetes that they’re the same):

```makefile
CRD_OPTIONS ?= "crd"
```
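For context, here’s roughly how that variable is consumed by the scaffolded `manifests` target; the exact recipe is an assumption and varies by Kubebuilder version:

```makefile
CRD_OPTIONS ?= "crd"

# Regenerate CRDs (and other manifests) with per-version validation schemas:
manifests: controller-gen
	$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
```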
Now we have all our code changes and manifests in place, so let’s deploy it to the cluster and test it out.
You’ll need cert-manager installed (version 0.9.0+) unless you’ve got some other certificate management solution. The Kubebuilder team has tested the instructions in this tutorial with the 0.9.0-alpha.0 release.
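Recent cert-manager releases ship a single static install manifest, which makes installation a one-liner. The URL pattern below follows cert-manager’s installation docs; substitute the release you actually want:

```bash
# Install cert-manager from its static release manifest
# (replace vX.Y.Z with a real release version):
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml
```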
Once all our ducks are in a row with certificates, we can run `make install deploy` (as normal) to deploy all the bits (CRD, controller-manager deployment) onto the cluster.
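If you’re deploying to a remote cluster, you’ll generally build and push the manager image first. A minimal sketch, assuming the standard scaffolded Makefile targets and a placeholder image name:

```bash
# Build and push the controller image (the registry/name is a placeholder):
make docker-build docker-push IMG=example.com/cronjob-tutorial:v1

# Install the CRDs and deploy the controller-manager using that image:
make install
make deploy IMG=example.com/cronjob-tutorial:v1
```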
Once all of the bits are up and running on the cluster with conversion enabled, we can test out our conversion by requesting different versions.
We’ll make a v2 version based on our v1 version (put it under `config/samples`):
```yaml
apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: project
    app.kubernetes.io/managed-by: kustomize
  name: cronjob-sample
spec:
  schedule:
    minute: "*/1"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```
Then, we can create it on the cluster:

```bash
kubectl apply -f config/samples/batch_v2_cronjob.yaml
```
If we’ve done everything correctly, it should create successfully, and we should be able to fetch it using both the v2 resource

```bash
kubectl get cronjobs.v2.batch.tutorial.kubebuilder.io -o yaml
```
```yaml
apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: project
    app.kubernetes.io/managed-by: kustomize
  name: cronjob-sample
spec:
  schedule:
    minute: "*/1"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```
and the v1 resource

```bash
kubectl get cronjobs.v1.batch.tutorial.kubebuilder.io -o yaml
```
```yaml
apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: project
    app.kubernetes.io/managed-by: kustomize
  name: cronjob-sample
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```
Both should be filled out, and look equivalent to our v2 and v1 samples, respectively. Notice that each has a different API version.
Finally, if we wait a bit, we should notice that our CronJob continues to reconcile, even though our controller is written against our v1 API version.
## kubectl and Preferred Versions

When we access our API types from Go code, we ask for a specific version by using that version’s Go type (e.g. `batchv2.CronJob`).
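To make that concrete, here’s a minimal sketch of fetching the same object as each version from Go. The module path, helper name, and namespace are illustrative assumptions, not part of the scaffolded project:

```go
import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	batchv1 "tutorial.kubebuilder.io/project/api/v1"
	batchv2 "tutorial.kubebuilder.io/project/api/v2"
)

// fetchBothVersions (a hypothetical helper) demonstrates that Go code selects
// the API version via the Go type itself; the API server converts between
// versions through the conversion webhook as needed.
func fetchBothVersions(ctx context.Context, c client.Client) error {
	key := client.ObjectKey{Namespace: "default", Name: "cronjob-sample"}

	var v1Job batchv1.CronJob // served as batch.tutorial.kubebuilder.io/v1
	if err := c.Get(ctx, key, &v1Job); err != nil {
		return err
	}

	var v2Job batchv2.CronJob // the same object, converted to v2
	if err := c.Get(ctx, key, &v2Job); err != nil {
		return err
	}
	return nil
}
```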
You might’ve noticed that the above invocations of kubectl looked a little different from what we usually do – namely, they specify a group-version-resource, instead of just a resource.
When we write `kubectl get cronjob`, kubectl needs to figure out which group-version-resource that maps to. To do this, it uses the discovery API to figure out the preferred version of the `cronjob` resource. For CRDs, this is more-or-less the latest stable version (see the CRD docs for specific details).
With our updates to CronJob, this means that `kubectl get cronjob` fetches the `batch/v2` group-version.
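We can check what discovery advertises for our group directly; both versions should be listed (this simple listing doesn’t show which one is preferred, but kubectl’s preferred-version lookup uses the same discovery information):

```bash
kubectl api-versions | grep batch.tutorial.kubebuilder.io
# batch.tutorial.kubebuilder.io/v1
# batch.tutorial.kubebuilder.io/v2
```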
If we want to specify an exact version, we can use `kubectl get resource.version.group`, as we do above.
You should always use fully-qualified group-version-resource syntax in scripts. `kubectl get resource` is for humans, self-aware robots, and other sentient beings that can figure out new versions. `kubectl get resource.version.group` is for everything else.