This document describes the process to update a Kubernetes Kustomize or Kubernetes Legacy Sourcegraph instance. If you are unfamiliar with Sourcegraph versioning or releases, see our general concepts documentation.
This guide is not for use with Helm. Please refer to the Upgrading Sourcegraph with Helm docs for Helm deployments.
Standard upgrades

⚠️ Attention: Always consult the release notes for the versions your upgrade will pass over and end on.

This guide assumes you have created a release branch following the reference repositories docs. Please see our cautionary note on upgrades; if you have any concerns about running a multi-version upgrade, please reach out to us at [email protected] for advice.
A standard upgrade occurs between a Sourcegraph version and the minor or major version released immediately after it. If you would like to jump forward several versions, you must perform a multi-version upgrade instead.
Upgrade with Kubernetes Kustomize

The following procedure is for performing a standard upgrade with Sourcegraph instances version v4.5.0 and above, which have migrated to the deploy-sourcegraph-k8s repo.
Step 1: Create a backup copy of the deployment configuration file
Make a duplicate of the current cluster.yaml deployment configuration file that was used to deploy the current Sourcegraph instance. If the Sourcegraph upgrade fails, you can redeploy using this cluster.yaml file to roll back and restore the instance to its previous state before the failed upgrade.
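A minimal sketch of the backup step (assuming cluster.yaml sits in your working directory):
# keep a dated copy of the manifests used for the current deployment
cp cluster.yaml "cluster-backup-$(date +%Y-%m-%d).yaml"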
Step 2: Merge the new version of Sourcegraph into your release branch
cd $DEPLOY_SOURCEGRAPH_FORK
# get updates
git fetch upstream
# merge the upstream release tag into your release branch
git checkout release
# Choose which version you want to deploy from https://github.com/sourcegraph/deploy-sourcegraph-k8s/tags
git merge $NEW_VERSION
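To confirm the tag merged cleanly, a quick optional check:
# the release tag should now be reachable from your branch
git log --oneline -n 5
git describe --tags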
Step 3: Build new manifests with Kustomize
Generate a new set of manifests locally using your current overlay instances/$INSTANCE_NAME (e.g. INSTANCE_NAME=my-sourcegraph) without applying them to the cluster.
$ kubectl kustomize instances/my-sourcegraph -o cluster.yaml
Review the generated manifests to ensure they match your intended configuration and reference the images for the $NEW_VERSION release.
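One quick way to spot-check the image tags in the generated output:
# list the unique image references in cluster.yaml
grep "image:" cluster.yaml | sort -u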
Step 4: Deploy the generated manifests
Apply the new manifests from the output file cluster.yaml to your cluster:
$ kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
Step 5: Monitor the status of the deployment to determine its success.
$ kubectl get pods -o wide --watch
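You can also wait on individual rollouts; for example (assuming the default sourcegraph-frontend deployment name):
# blocks until the rollout completes or fails
kubectl rollout status deployment/sourcegraph-frontend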
Multi-version upgrades
⚠️ Attention: Please see our cautionary note on upgrades. If you have any concerns about running a multi-version upgrade, please reach out to us at [email protected] for advice.
To perform a multi-version upgrade via the migrator upgrade command on a Sourcegraph instance deployed with our Kustomize repo, follow the procedure below:
1. Check upgrade readiness: visit the Site Admin > Updates page to determine upgrade readiness.
2. Pull and merge upstream changes: merge the target version's release tag into your release branch (see the standard upgrade steps above).
3. Update cluster.yaml and scale down non-database deployments and replicas: in your overlay's kustomization file (instances/my-sourcegraph/kustomize.yaml), uncomment the multi-version-upgrade util. This will scale down all non-database deployment and StatefulSet replicas to 0.
# - ../../components/utils/uid                       # -- Run all Postgres databases with valid users on host
- ../../components/utils/multi-version-upgrade       # -- Scale down non-database pods to 0 for multi-version upgrade
# - ../../components/utils/migrate-to-nonprivileged  # -- Component for migrating from privileged to non-privileged
Then regenerate and apply the manifests:
kubectl kustomize instances/my-sourcegraph -o cluster.yaml
kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
Note: This step ensures that any PostgreSQL upgrade performed as an entrypoint script has a chance to execute before the migrator runs. For more information see Upgrading PostgreSQL.
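Before running the migrator, you can verify the scale-down took effect (pgsql, codeintel-db, and codeinsights-db are the default database workloads; your overlay may differ):
# only the database pods should still be Running
kubectl get pods -l deploy=sourcegraph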
4. Run migrator with the upgrade command: in the configure/migrator/migrator.Job.yaml manifest, update the image: to the latest release of migrator and set the args: value to upgrade. Example:
- name: migrator
  image: "index.docker.io/sourcegraph/migrator:6.0.0" # here we use a later version of migrator than the version we're upgrading to
  args: ["upgrade", "--from=v5.9.0", "--to=v5.11.0"]
  env:
Note:
- Always use the latest image version of migrator for migrator commands, except the startup command up.
- You may add the --dry-run flag to the upgrade command to test things out before altering the databases.
# To ensure no previous job invocations will conflict with our current invocation
kubectl delete job migrator
# Start the migrator job
kubectl apply -f components/utils/migrator/resources/migrator.Job.yaml
# Stream the migrator's stdout logs for progress
kubectl logs job/migrator -f
Example:
$ kubectl -n sourcegraph apply -f ./migrator.Job.yaml
job.batch/migrator created
$ kubectl -n sourcegraph get jobs
NAME COMPLETIONS DURATION AGE
migrator 0/1 9s 9s
$ kubectl -n sourcegraph logs job/migrator
❗️ An error was returned when detecting the terminal size and capabilities:
GetWinsize: inappropriate ioctl for device
Execution will continue, but please report this, along with your operating
system, terminal, and any other details, to:
https://sourcegraph.com/contact
✱ Sourcegraph migrator 4.5.1
👉 Migrating to v3.43 (step 1 of 3)
👉 Running schema migrations
✅ Schema migrations complete
👉 Running out of band migrations [1 2 4 5 7 13 14 15 16]
✅ Out of band migrations complete
👉 Migrating to v4.3 (step 2 of 3)
👉 Running schema migrations
✅ Schema migrations complete
👉 Running out of band migrations [17 18]
✅ Out of band migrations complete
👉 Migrating to v4.5 (step 3 of 3)
👉 Running schema migrations
✅ Schema migrations complete
$ kubectl -n sourcegraph get jobs
NAME COMPLETIONS DURATION AGE
migrator 1/1 9s 35s
5. Generate and apply a new cluster.yaml file: re-comment the multi-version-upgrade util in your overlay's kustomization file (instances/my-sourcegraph/kustomize.yaml) so the non-database services scale back up.
# - ../../components/utils/uid                        # -- Run all Postgres databases with valid users on host
# - ../../components/utils/multi-version-upgrade      # -- Scale down non-database pods to 0 for multi-version upgrade
# - ../../components/utils/migrate-to-nonprivileged   # -- Component for migrating from privileged to non-privileged
kubectl kustomize instances/my-sourcegraph -o cluster.yaml
kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
Your instance should now be up and running in the new version!
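To confirm the new version is live, one option (assuming the default sourcegraph-frontend deployment name and a single container in its pod spec):
# print the image tag the frontend is running
kubectl get deployment sourcegraph-frontend -o jsonpath='{.spec.template.spec.containers[0].image}'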
Due to limitations with the Kustomize deployment method introduced in Sourcegraph v4.5.0, for multi-version upgrades (e.g. v4.2.0 -> v5.0.3) the migration to deploy-sourcegraph-k8s should be conducted separately from the upgrade itself. Admins upgrading a Sourcegraph instance older than v4.5.0 while migrating from our legacy Kubernetes offering to our new Kustomize manifests should upgrade to v4.5.0, perform the migrate procedure, and then perform the remaining upgrade to bring Sourcegraph to the desired version.
Rollback

You can roll back by resetting your release branch to the old state before redeploying the instance (a branch-reset sketch follows below).
If you are rolling back more than a single version, then you must also rollback your database, as database migrations (which may have run at some point during the upgrade) are guaranteed to be compatible with one previous minor version.
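A sketch of the branch reset (the $PREVIOUS_COMMIT placeholder stands for your pre-upgrade commit or tag):
# return the release branch to its pre-upgrade state
git checkout release
git reset --hard $PREVIOUS_COMMIT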
Rollback with Kustomize
# Re-generate manifests
kubectl kustomize instances/$YOUR_INSTANCE -o cluster-rollback.yaml
# Review manifests
less cluster-rollback.yaml
# Re-deploy
kubectl apply --prune -l deploy=sourcegraph -f cluster-rollback.yaml
Rollback with migrator downgrade
To roll back a multi-version upgrade, use the migrator downgrade command. Learn more in our downgrade docs.
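The invocation mirrors the upgrade job above; a hypothetical example of the changed args (versions here are illustrative):
#   args: ["downgrade", "--from=v5.11.0", "--to=v5.9.0"]
kubectl delete job migrator --ignore-not-found
kubectl apply -f configure/migrator/migrator.Job.yaml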
Database migrations

In some situations, administrators may wish to migrate their databases before upgrading the rest of the system to reduce downtime. Sourcegraph guarantees database backward compatibility to the most recent minor point release, so the database can safely be upgraded before the application code.
To execute the database migrations independently, follow the Kubernetes instructions on how to manually run database migrations. Running the up (default) command on the migrator of the version you are upgrading to will apply all migrations required by the next version of Sourcegraph.
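A minimal sketch, re-using the migrator job manifest from above with its default up command and the target version's image:
#   args: ["up"]  # the default command; may be omitted
kubectl apply -f configure/migrator/migrator.Job.yaml
kubectl wait --for=condition=complete job/migrator --timeout=10m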
Some of the services that comprise Sourcegraph require more resources than others, especially if the default CPU or memory allocations have been overridden. During an update when many services restart, you may observe that the more resource-hungry pods (e.g., gitserver, indexed-search) fail to restart, because no single node has enough available CPU or memory to accommodate them. This may be especially true if the cluster is heterogeneous (i.e., not all nodes have the same amount of CPU/memory).
If this happens, do the following:
1. Run kubectl drain $NODE to drain a node of existing pods, so it has enough allocation for the larger service.
2. Run watch kubectl get pods -o wide and wait until the node has been drained. Run kubectl get pods to check that all pods except for the resource-hungry one(s) have been assigned to a node.
3. Run kubectl uncordon $NODE to enable the larger pod(s) to be scheduled on the drained node.
Note that the need to run the above steps can be prevented altogether with node selectors, which tell Kubernetes to assign certain pods to specific nodes; see the sketch below.
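A minimal sketch of the node selector approach (the pool=large label is an illustrative name, not a Sourcegraph convention):
# label the node, then add a matching nodeSelector to the large pod's spec,
# e.g. spec.template.spec.nodeSelector: { pool: large }
kubectl label nodes $NODE pool=large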
High-availability updates

Sourcegraph is designed to be a high-availability (HA) service, but upgrades by default require a 10m downtime window. If you need zero-downtime upgrades, please contact us. By default, services employ health checks to test the health of newly updated components before switching live traffic over to them. HA-enabling features include the following:
Troubleshooting

See the troubleshooting page.