It took me quite a while to find the perfect setup for my Kubernetes cluster on Linode—especially to get proper backups working for web instances using Persistent Volume Claims (PVCs). I wanted to ensure that if anything ever went wrong, I could easily restore my data.
Feel free to check it out – I hope I can save someone out there a few hours of headaches.
Why Velero?
After some research, I found Velero to be the most flexible and well-supported Kubernetes backup solution out there. The official documentation, Helm chart guides, and plugin instructions on GitHub are quite comprehensive.
However, the tricky part was getting Velero to actually back up to my Linode object storage 😀
My Working Velero Setup (as of 2025)
Below, you’ll find my current configuration—tested and working with the latest version of Velero, the AWS plugin, and Linode’s S3-compatible object storage.
values.yaml (Schema, Default values)
```yaml
image:
  repository: velero/velero
  tag: v1.16.2
credentials:
  useSecret: true
  existingSecret: cloud-credentials
configuration:
  logLevel: warning
  backupStorageLocation:
    - name: default
      provider: velero.io/aws
      bucket: velero-backups
      checksumAlgorithm: "" # <- needed after velero-plugin-for-aws:v1.9.0
      default: true
      config:
        region: eu-central-1
        s3Url: https://eu-central-1.linodeobjects.com
  volumeSnapshotLocation:
    - name: default
      provider: linode.com/velero
      config:
        region: eu-central-1
        s3Url: https://eu-central-1.linodeobjects.com
  extraEnvVars:
    - name: LINODE_TOKEN
      valueFrom:
        secretKeyRef:
          key: linode_token
          name: cloud-credentials
deployNodeAgent: true
snapshotsEnabled: false
backupsEnabled: true
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.12.1
    volumeMounts:
      - mountPath: /target
        name: plugins
  - name: velero-plugin-linode
    image: linode/velero-plugin:v0.0.1
    volumeMounts:
      - mountPath: /target
        name: plugins
nodeAgent:
  tolerations:
    - key: "wordpress"
      operator: "Equal"
      value: "only"
      effect: "NoSchedule"
```
IMPORTANT
If you're using Linode with velero-plugin-for-aws, note the following:
- Versions up to 1.9.0 work without specifying `checksumAlgorithm: ""`.
- Versions from 1.9.1 up to (but not including) 1.10 require a checksum algorithm but don't let you configure it; this is where backups against Linode fail.
- From 1.10 onward the option is configurable again, so just add it with an empty string to override the default value set in the plugin.

So you'll either need to stay on 1.9.0 or upgrade to 1.10 or later.
The `LINODE_TOKEN` is used for snapshotting (via the Linode Velero plugin), which is still somewhat experimental in my experience.
Setup Steps
1. Create Your Secret(s)
Create a file named `secret.yaml` with your Linode and S3 credentials (replace the placeholders with your actual keys):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials # <- referenced as existingSecret in values.yaml
type: Opaque
stringData:
  linode_token:
  cloud: |
    [default]
    aws_access_key_id =
    aws_secret_access_key =
```
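A side note on `stringData`: unlike `data`, it takes plain text and the API server base64-encodes it for you. If you ever need the `data` form instead, the encoding is a one-liner (the token value here is just a dummy):

```shell
# Base64-encode a value for use under a Secret's data: field
echo -n 'my-linode-token' | base64
```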
2. Create the Namespace and Secret
```shell
kubectl create ns velero
kubectl apply -f secret.yaml -n velero
```
3. Deploy Velero with Helm
```shell
helm upgrade --install velero vmware-tanzu/velero -n velero --create-namespace -f values.yaml
```
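Once the release is deployed, a quick sanity check (assuming the `velero` CLI is installed locally) might look like this:

```shell
# Check that the Velero deployment and node-agent pods are running
kubectl get pods -n velero

# Verify the backup storage location is reachable (should report "Available")
velero backup-location get -n velero
```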
Done!
Usage: Creating Backups
To create a manual backup, run:
```shell
velero create backup test --include-namespaces=
```
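Restoring is the other half of the story; assuming a backup named `test` exists, a restore can be created from it with the standard Velero CLI:

```shell
# List existing backups
velero backup get

# Restore everything from the backup named "test"
velero restore create --from-backup test

# Watch the restore progress
velero restore get
```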
To add scheduled backups, extend your `values.yaml` with something similar to this:
```yaml
schedules:
  wordpress: # <- just a name
    disabled: false
    schedule: "0 0 * * *"
    useOwnerReferencesInBackup: false
    paused: false
    skipImmediately: false
    template:
      ttl: "240h"
      storageLocation: default
      includedNamespaces:
        - wordpress
```
Then run `helm upgrade` again to apply the changes.
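One thing worth noting: with `snapshotsEnabled: false`, PVC data goes through the node agent's file-system backup, which by default only covers volumes that are explicitly opted in. A sketch of the opt-in pod annotation (the volume name `wordpress-data` is just an example):

```yaml
# Pod template snippet: opt these volumes into file-system backup
metadata:
  annotations:
    backup.velero.io/backup-volumes: wordpress-data
```

Alternatively, setting `defaultVolumesToFsBackup: true` in the schedule's template backs up all pod volumes without per-pod annotations.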
Troubleshooting & Common Pitfalls
Fail 1
Following the GitHub plugin docs, I initially got backups and snapshots working, but a second backup run threw an error, possibly because I wasn't using the restic/node-agent integration and the plugin always tried to give the cloned volumes the same name. I didn't dig deeper here…

```
Velero: message: /CloneVolume returned error: [400] [label] Label must be unique name: /scaleit-wordpress-6cc695795b-pknch message: /Error backing up item error: /error taking snapshot of volume: rpc error: code = Aborted desc = plugin panicked: runtime error: invalid memory address or nil pointer dereference, stack trace: goroutine 66
```
Fail 2
Most blog articles don't mention the checksum algorithm requirement introduced in plugin version 1.9.1. With the wrong version, backups may create Kopia folders for the PVCs but fail to upload metadata because of the checksum algorithm configured by default:

```
time="2025-07-30T15:13:34Z" level=error msg="Error uploading log file" backup=wp-test-backup4 bucket=velero-backups error="rpc error: code = Unknown desc = error putting object backups/wp-test-backup4/wp-test-backup4-logs.gz: operation error S3: PutObject, https response error StatusCode: 400, ..
```
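When debugging errors like this, it can help to rule out credential or endpoint problems by talking to the bucket directly. Assuming the same access keys, the AWS CLI works against Linode's S3-compatible endpoint:

```shell
# List the bucket through Linode's S3-compatible endpoint
aws s3 ls s3://velero-backups \
  --endpoint-url https://eu-central-1.linodeobjects.com \
  --region eu-central-1
```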
Final Thoughts
I hope this article helps anyone struggling with Velero and Linode S3 backups on Kubernetes!
If you have tips for improvement or corrections, please let me know in the comments—I’d love to learn from your experiences, too.
For search engines / fellow troubleshooters:
- Using the Velero plugin as per the GitHub docs stored metadata, but subsequent backups failed.
- The official documentation sometimes omits specific details (like the `checksumAlgorithm` requirement for S3-compatible storage).
- Always check the plugin and Velero versions for subtle breaking changes.
Resources
Docker Images:
Velero: https://hub.docker.com/u/velero
Linode Plugin for Velero: https://hub.docker.com/r/linode/velero-plugin
Github:
Linode Plugin for Velero (Snapshots): https://github.com/linode/velero-plugin
Velero AWS Plugin: https://github.com/vmware-tanzu/velero-plugin-for-aws
The main application: https://github.com/vmware-tanzu/velero
Documentation:
Velero: https://velero.io/docs/v1.7/