What would you like to be added?
Upgrading and downgrading between 1.32 and 1.33 is supported and should be covered by a test along these lines:
- Install a DRA driver for 1.32.
- Create four pods (see the object sketch after this list):
  - One pod which uses a ResourceClaimTemplate and can be scheduled.
  - One pod which uses a ResourceClaim and can be scheduled.
  - Two similar pods which cannot be scheduled yet because the first two are using the relevant resources.
- Upgrade the cluster to 1.33 without updating the DRA driver.
- Delete the first two pods and ensure that the claims get deallocated and (for the one generated from the ResourceClaimTemplate) deleted.
- Ensure that the other two pods get scheduled.
- Create two new pods which need to remain pending.
- Downgrade to 1.32.
- Delete the running pods, ensuring proper cleanup as before.
- Ensure that the pods created with 1.33 get scheduled.
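A minimal sketch of the objects behind those pod pairs, using the resource.k8s.io/v1beta1 Go types that a 1.32 cluster serves. The device class, object and image names, and the package name are placeholders, and the second pod of each pair would simply repeat the variants shown here; the real e2e plumbing around these is not included:

```go
package upgrade

import (
	corev1 "k8s.io/api/core/v1"
	resourcev1beta1 "k8s.io/api/resource/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"
)

// claimTemplate: one ResourceClaim gets generated per pod that references
// this template. "example.com" stands in for a device class served by the
// 1.32 DRA driver.
var claimTemplate = &resourcev1beta1.ResourceClaimTemplate{
	ObjectMeta: metav1.ObjectMeta{Name: "test-claim-template"},
	Spec: resourcev1beta1.ResourceClaimTemplateSpec{
		Spec: resourcev1beta1.ResourceClaimSpec{
			Devices: resourcev1beta1.DeviceClaim{
				Requests: []resourcev1beta1.DeviceRequest{{
					Name:            "gpu",
					DeviceClassName: "example.com",
				}},
			},
		},
	},
}

// claim is created directly and referenced by name from the second pod variant.
var claim = &resourcev1beta1.ResourceClaim{
	ObjectMeta: metav1.ObjectMeta{Name: "test-claim"},
	Spec:       claimTemplate.Spec.Spec,
}

// podWithTemplate gets its claim generated from claimTemplate. The other
// variant is identical except that it sets
// ResourceClaimName: ptr.To("test-claim") instead of the template name.
var podWithTemplate = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-with-template"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "ctr",
			Image: "registry.k8s.io/pause:3.10",
			Resources: corev1.ResourceRequirements{
				Claims: []corev1.ResourceClaim{{Name: "gpu"}},
			},
		}},
		ResourceClaims: []corev1.PodResourceClaim{{
			Name:                      "gpu",
			ResourceClaimTemplateName: ptr.To("test-claim-template"),
		}},
	},
}
```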
Other scenarios:
- Creating a DaemonSet with admin access enabled in 1.32, then upgrading to 1.33 with admin access disabled (see the request sketch below) =>
  - scheduled, not-running pods should start with admin access
  - not-scheduled pods should not get scheduled (error in scheduler: "feature not enabled"), without adverse effects on the DaemonSet controller (like re-creating pods)
- kubelet from v1.32 with control plane from v1.33 => can start pods with claims
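For the admin-access scenario, the relevant difference is just the AdminAccess field on the device request, which sits behind the DRAAdminAccess feature gate; a sketch with the same placeholder names as above:

```go
package upgrade

import (
	resourcev1beta1 "k8s.io/api/resource/v1beta1"
	"k8s.io/utils/ptr"
)

// adminRequest is what the DaemonSet's claim template would ask for.
// After upgrading with the feature disabled, new allocations of such a
// request must be rejected by the scheduler, while already-allocated
// claims keep working.
var adminRequest = resourcev1beta1.DeviceRequest{
	Name:            "gpu",
	DeviceClassName: "example.com", // placeholder class
	AdminAccess:     ptr.To(true),  // DRAAdminAccess feature gate
}
```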
When 1.32 and 1.33 are involved, both the v1beta1 and v1beta2 API versions of resource.k8s.io need to be enabled.
Once master and later 1.34 have v1, upgrades/downgrades between 1.33 and master/1.34 can be tested as above. Version skew between kubelet 1.32 and master/1.34 is also supported.
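On the 1.33 side, serving both versions (in addition to enabling the DynamicResourceAllocation feature gate on all components) would look roughly like the following when the kube-apiserver flags are set explicitly; the exact mechanism depends on how the test cluster is brought up:

```
kube-apiserver \
  --feature-gates=DynamicResourceAllocation=true \
  --runtime-config=resource.k8s.io/v1beta1=true,resource.k8s.io/v1beta2=true \
  ...
```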
Why is this needed?
Test coverage. Good for production readiness when considering GA.