WIP: DRA scheduler: implement filter timeout #132033
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@@ -682,6 +685,10 @@ func lookupAttribute(device *draapi.BasicDevice, deviceID DeviceID, attributeName
// This allows the logic for subrequests to call allocateOne with the same
// device index without causing infinite recursion.
func (alloc *allocator) allocateOne(r deviceIndices, allocateSubRequest bool) (bool, error) {
	if alloc.ctx.Err() != nil {
		return false, fmt.Errorf("filter operation aborted: %w", alloc.ctx.Err())
TODO:
- benchmark this additional if check
- decide whether we should add a separate feature gate for it (KEP 4381: DRA structured parameters: updates, promotion to GA enhancements#5333 (comment))
Force-pushed from 1d4d178 to f1aec04.
/retest
Some known flakes, timeouts.
The only option is the filter timeout. The implementation of it follows in a separate commit.
The intent is to catch abnormal runtimes with the generously large default timeout of 10 seconds. We have to set up a context with the configured timeout (optional!), then ensure that both CEL evaluation and the allocation logic itself properly return the context error. The scheduler plugin can then convert that into "unschedulable".
It's unclear why k8s.io/kubernetes/pkg/apis/resource/install needs to be imported explicitly. Having the apiserver and scheduler ready to be started ensures that all APIs are available.
This covers disabling the feature via the configuration, failing to schedule because of timeouts for all nodes, and retrying after ResourceSlice changes with partial success (timeout for one node, success for the other). While at it, some helper code gets improved.
Force-pushed from f1aec04 to dc1bb36.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: pohly The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@pohly: The following test failed, say `/retest` to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
func ValidateDynamicResourcesArgs(path *field.Path, args *config.DynamicResourcesArgs) error {
	var allErrs field.ErrorList
	if args.FilterTimeout != nil && args.FilterTimeout.Duration < 0 {
		allErrs = append(allErrs, field.Invalid(path.Child("filterTimeout"), args.FilterTimeout, "must be positive"))
Suggested change:
-		allErrs = append(allErrs, field.Invalid(path.Child("filterTimeout"), args.FilterTimeout, "must be positive"))
+		allErrs = append(allErrs, field.Invalid(path.Child("filterTimeout"), args.FilterTimeout, "must be zero or positive"))
What type of PR is this?
/kind feature
What this PR does / why we need it:
The intent is to catch abnormal runtimes with the generously large default timeout of 10 seconds, as discussed here:
Which issue(s) this PR fixes:
Related-to: #131730 (comment), kubernetes/enhancements#4381
Special notes for your reviewer:
We have to set up a context with the configured timeout (optional!), then ensure that both CEL evaluation and the allocation logic itself properly return the context error. The scheduler plugin can then convert that into "unschedulable".
Does this PR introduce a user-facing change?