
The second time the pod is deleted the grace period does not take effect #113883


Open
wants to merge 72 commits into master

Conversation


@Seaiii Seaiii commented Nov 14, 2022

What type of PR is this?

/kind bug
/sig node

What this PR does / why we need it:

Deleting the same pod a second time has no effect, and a force delete does not work either: it only removes the pod from the apiserver, while the kubelet never stops the containers. The precondition is that the container does not stop on its own (for example, a container running sleep 9999).
When I first run kubectl delete pod --grace-period=30 and then kubectl delete pod --grace-period=0 --force, the second deletion (including the forced one) has no effect on the kubelet.
According to the latest documentation, setting a shorter grace period on a second deletion should trigger the kubelet to start cleaning up immediately. Looking at the source code, however, the first delete has to wait for the CRI call to return, so the kubelet only processes the new grace period after the first one has been handled. The code is therefore inconsistent with the documented behavior.
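
To make the failure mode concrete, here is a minimal, self-contained sketch (illustrative only; fakeRuntime and the shortened timings are stand-ins, not kubelet code) of how a single blocking StopContainer call swallows a later, shorter grace period:

// Illustrative sketch only: the first StopContainer call blocks for the full
// original grace period, so a later delete with a shorter grace period has
// nothing in the kubelet that can act on it.
package main

import (
	"context"
	"fmt"
	"time"
)

// fakeRuntime stands in for the CRI runtime service.
type fakeRuntime struct{}

// StopContainer blocks until the container exits or timeoutSeconds elapses,
// mirroring the shape of the real CRI call.
func (fakeRuntime) StopContainer(ctx context.Context, id string, timeoutSeconds int64) error {
	select {
	case <-time.After(time.Duration(timeoutSeconds) * time.Second):
		return nil // the container only goes away once the timeout expires
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	rt := fakeRuntime{}
	start := time.Now()
	// First delete: grace period 30s (shortened to 3s so the example finishes quickly).
	_ = rt.StopContainer(context.Background(), "sleep-9999-container", 3)
	// A second delete with --grace-period=0 --force that arrives in this window
	// only updates the API object; nothing interrupts the call above.
	fmt.Printf("container stopped after %s, not immediately\n", time.Since(start).Round(time.Second))
}

Running it prints that the container stops only after the full first grace period, which matches the behavior described above.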

Which issue(s) this PR fixes:
Fixes #113712
Fixes #113717
Fixes #83916
Fixes #87039

Special notes for your reviewer:
1. The grace period cannot be shortened on a second deletion.
2. This issue has existed since the kubelet was reworked after version 1.22.

To reproduce:
1. kubectl delete pod <name> --grace-period=100
2. kubectl delete pod <name> --grace-period=10
When the second deletion reduces the grace period to 10s, the shorter grace period should take effect and override the first, but currently it does not.

Does this PR introduce a user-facing change?

ACTION REQUIRED: a shorter grace period on a second delete command now overrides the longer grace period of the first command

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
none

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. sig/node Categorizes an issue or PR as relevant to SIG Node. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 14, 2022
@k8s-ci-robot
Contributor

Hi @Seaiii. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label Nov 14, 2022
@k8s-ci-robot k8s-ci-robot added area/kubelet area/test sig/testing Categorizes an issue or PR as relevant to SIG Testing. labels Nov 14, 2022
@Seaiii
Author

Seaiii commented Nov 14, 2022

@matthyx Hi, I've re-posted the new PR here; I think this one is a better fit for the latest version.

@Seaiii
Author

Seaiii commented Nov 14, 2022

@bobbypage @endocrimes 👋🏻👋🏻 Hi, this problem is plaguing our team again. I also saw that the k8s team was working on this issue a week ago, so I made some changes on that basis. Fixes #113408. Please review.

@matthyx
Contributor

matthyx commented Nov 14, 2022

/ok-to-test
Nice to reuse the context...
I think there is already a KEP about passing contexts all the way to the client api, you might want to check it:
https://github.com/kubernetes/enhancements/tree/3cb66bd0a1ef973ebcc974f935f0ac5cba9db4b2/keps/sig-api-machinery/1601-client-go-context

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 14, 2022
@matthyx
Contributor

matthyx commented Nov 14, 2022

/cc

@k8s-ci-robot k8s-ci-robot requested a review from matthyx November 14, 2022 10:21
@Seaiii
Author

Seaiii commented Nov 14, 2022

OK, thank you for the comments. I'll take a look at it, but it should be fine for now, since the context comes from #113591.
I just used the Done method of the context that is already registered in pod_workers.go, and I am following the changes made there.
The original author's intent was to terminate the subsequent operations through the context: the code does check whether the grace period is being shortened, but it never actually completes that operation.
So I abandoned my previous approach and adopted the original author's idea of receiving the context's Done signal here to complete it, as sketched below.
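
Roughly sketched (illustrative names only, assuming a simplified pod worker rather than the real pod_workers.go API): the worker keeps the cancel function for the termination context and calls it when a shorter grace period arrives, which unblocks anything selecting on ctx.Done().

// Illustrative sketch only: a per-pod cancel function registered by the pod
// worker lets a later, shorter grace period interrupt an in-flight termination.
package main

import (
	"context"
	"fmt"
	"time"
)

type podWorker struct {
	cancel context.CancelFunc // registered when termination starts
}

// startTerminating begins a (blocking) termination with the first grace period.
func (w *podWorker) startTerminating(gracePeriod time.Duration) {
	ctx, cancel := context.WithCancel(context.Background())
	w.cancel = cancel
	go func() {
		select {
		case <-time.After(gracePeriod): // the CRI StopContainer call would block here
			fmt.Println("terminated after the full grace period")
		case <-ctx.Done():
			fmt.Println("termination interrupted: the grace period was shortened")
		}
	}()
}

// shortenGracePeriod is called when a second delete arrives with a smaller
// grace period; cancelling the context unblocks the waiting goroutine.
func (w *podWorker) shortenGracePeriod() {
	if w.cancel != nil {
		w.cancel()
	}
}

func main() {
	w := &podWorker{}
	w.startTerminating(30 * time.Second) // first delete: --grace-period=30
	time.Sleep(time.Second)
	w.shortenGracePeriod() // second delete: --grace-period=0 --force
	time.Sleep(time.Second) // give the goroutine time to print
}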

@matthyx
Contributor

matthyx commented Nov 14, 2022

Please add the header from hack/boilerplate/boilerplate.go.txt in your new test file.

@Seaiii
Author

Seaiii commented Nov 14, 2022

Please add the header from hack/boilerplate/boilerplate.go.txt in your new test file.

Oh, sorry, I can't believe I forgot this. Thanks for the reminder; I don't have my work computer back yet, so I'll add it first thing tomorrow.
I saw that two tests failed; is that the reason?

@matthyx
Contributor

matthyx commented Nov 14, 2022

at least the first one

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Nov 15, 2022
@Seaiii
Author

Seaiii commented Nov 15, 2022

@smarterclayton @dashpole 👋🏻👋🏻 Hi, I saw that you were working on this a week ago. Is this PR related to the problem you are working on? Let me know if I can help.

@matthyx
Contributor

matthyx commented Nov 15, 2022

@Seaiii it now looks much better... the issue is ready for triage now.
You should edit your description and follow the comments to add a release note, because it has end-user (positive) impact.

@Seaiii
Author

Seaiii commented Nov 15, 2022

@Seaiii it now looks much better... the issue is ready for triage now. You should edit your description and follow the comments to add a release note, because it has end-user (positive) impact.

OK, I will do it now. But main-v2 still FAILED; does that affect anything?
Can I just describe the before-and-after behavior of this change in the comments?

@k8s-ci-robot
Contributor

@Seaiii: Reopened this PR.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@haircommander
Contributor

/retest

@haircommander
Contributor

@mikebrow @samuelkarp does containerd handle multiple stop requests coming in? we had to implement custom logic to handle this case in cri-o

@Seaiii
Author

Seaiii commented May 1, 2025

@mikebrow @samuelkarp does containerd handle multiple stop requests coming in? we had to implement custom logic to handle this case in cri-o

Yes, both containerd and Docker can handle multiple stop requests; I've tested it in local integration. As long as the kubelet can send the two gRPC calls, it solves the problem (see the sketch below).
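
As a rough sketch of what "send two gRPCs" means here (illustrative only; fakeRuntime is a stand-in, not containerd or CRI-O code): the runtime accepts an overlapping second StopContainer request for the same container, so the kubelet only needs to actually issue it with the shorter timeout.

// Illustrative sketch only: a runtime that tolerates a second, overlapping
// StopContainer request for the same container, which is what makes a
// kubelet-side fix sufficient.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type fakeRuntime struct {
	mu      sync.Mutex
	pending map[string]context.CancelFunc // in-flight stop per container
}

// StopContainer: the first request waits out its timeout; a second request for
// the same container cancels the first wait and stops immediately.
func (r *fakeRuntime) StopContainer(ctx context.Context, id string, timeoutSeconds int64) error {
	r.mu.Lock()
	if cancelPrev, ok := r.pending[id]; ok {
		cancelPrev() // a new, shorter request supersedes the earlier one
	}
	waitCtx, cancel := context.WithCancel(ctx)
	r.pending[id] = cancel
	r.mu.Unlock()

	select {
	case <-time.After(time.Duration(timeoutSeconds) * time.Second):
		fmt.Printf("stopped %s after %ds\n", id, timeoutSeconds)
	case <-waitCtx.Done():
		fmt.Printf("earlier stop of %s superseded\n", id)
	}
	return nil
}

func main() {
	rt := &fakeRuntime{pending: map[string]context.CancelFunc{}}
	go rt.StopContainer(context.Background(), "c1", 30) // first delete: long grace period
	time.Sleep(time.Second)
	rt.StopContainer(context.Background(), "c1", 0) // second delete: --force, timeout 0
	time.Sleep(100 * time.Millisecond)              // let the superseded goroutine print
}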

case <-ctx.Done():
	klog.V(2).InfoS("Context expired while executing PreStop hook", "pod", klog.KObj(pod), "podUID", pod.UID,
		"containerName", containerSpec.Name, "containerID", containerID.String(), "gracePeriod", gracePeriod)
	return int64(metav1.Now().Sub(start.Time).Seconds())
Contributor

doesn't look to me like we need this? we'll fall through to the return directly below, which looks identical

Author

@Seaiii Seaiii May 1, 2025

doesn't look to me like we need this? we'll fall through to the return directly below, which looks identical

The logic here is the same as shortening the grace period.

There are two cases:

  1. If the done channel is closed, the gRPC call finished executing normally.
  2. If ctx.Done() fires, the gRPC call is still blocking and the grace period needs to be shortened; the blocking call is canceled via ctx.Done(). The ctx is passed down from kubelet.go through the previous layer, pod_workers.go.

1. done is the channel that I registered.
2. ctx.Done() is the context's cancellation signal, which is passed in by pod_workers.go.

These are not the same process or logic. The principle is that pod_workers.go registers the context's cancel function, and this code selects on ctx.Done() to receive the cancellation, as in the sketch below.
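
A minimal sketch of the two signals (illustrative only; waitForStop and the channel names are stand-ins, not the exact code in this PR):

// Illustrative sketch only: the done channel closes when the gRPC stop call
// returns normally; ctx.Done() fires when pod_workers.go cancels the context
// because the grace period was shortened.
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForStop returns how many seconds were spent waiting, mirroring the
// return value in the snippet under review.
func waitForStop(ctx context.Context, done <-chan struct{}, start time.Time) int64 {
	select {
	case <-done:
		// Normal path: the CRI call finished within the grace period.
		fmt.Println("stop call returned normally")
	case <-ctx.Done():
		// Shortened path: pod_workers.go cancelled the context, so stop waiting
		// on the blocked call and let the caller act on the new grace period.
		fmt.Println("context cancelled: grace period shortened")
	}
	return int64(time.Since(start).Seconds())
}

func main() {
	done := make(chan struct{}) // would be closed when the gRPC call returns
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(500 * time.Millisecond)
		cancel() // simulate pod_workers.go shortening the grace period
	}()
	fmt.Printf("waited %ds\n", waitForStop(ctx, done, time.Now()))
}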

Contributor

I mean specifically that the return int64(metav1.Now().Sub(start.Time).Seconds()) seems duplicated with line 681, which directly follows it.

Author

I mean specifically that the return int64(metav1.Now().Sub(start.Time).Seconds()) seems duplicated with line 681, which directly follows it.

Oh, yeah, you're right. I've made the changes.

Author

@Seaiii Seaiii left a comment

Remove redundant return

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@SergeyKanzhelev
Member

/reopen
/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot reopened this Jun 6, 2025
@k8s-ci-robot
Contributor

@SergeyKanzhelev: Reopened this PR.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 6, 2025
@github-project-automation github-project-automation bot moved this from Closed / Done to Needs Triage in SIG Auth Jun 6, 2025
@SergeyKanzhelev
Member

Oh, I didn't mean to reopen a PR; I thought this was an issue.

/close

@Seaiii feel free to reopen and bring this to SIG Node's attention

@k8s-ci-robot
Contributor

@SergeyKanzhelev: Closed this PR.

In response to this:

Oh, I didn't mean to reopen a PR; I thought this was an issue.

/close

@Seaiii feel free to reopen and bring this to SIG Node's attention

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@github-project-automation github-project-automation bot moved this from Needs Triage to Closed / Done in SIG Auth Jun 6, 2025
@Seaiii
Author

Seaiii commented Jun 6, 2025

/reopen
Reopening is no problem; I am currently waiting for someone who can help me with the E2E tests.

@k8s-ci-robot k8s-ci-robot reopened this Jun 6, 2025
@k8s-ci-robot
Contributor

@Seaiii: Reopened this PR.

In response to this:

/reopen
Reopening is no problem; I am currently waiting for someone who can help me with the E2E tests.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@github-project-automation github-project-automation bot moved this from Closed / Done to Needs Triage in SIG Auth Jun 6, 2025
@k8s-ci-robot
Contributor

@Seaiii: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kubernetes-e2e-inplace-pod-resize-containerd-main-v2 90e0cda64343ab52f4ceec2a76265853a4d55985 link false /test pull-kubernetes-e2e-inplace-pod-resize-containerd-main-v2
pull-kubernetes-local-e2e a35541b946f8ce2119204c78c740dfdb9d786c64 link false /test pull-kubernetes-local-e2e
pull-kubernetes-node-e2e-crio-dra a6dc27c link false /test pull-kubernetes-node-e2e-crio-dra
pull-kubernetes-e2e-ubuntu-gce-network-policies a6dc27c link false /test pull-kubernetes-e2e-ubuntu-gce-network-policies
pull-kubernetes-e2e-gce-providerless a6dc27c link false /test pull-kubernetes-e2e-gce-providerless
pull-kubernetes-e2e-kind-kms 948c8c9 link false /test pull-kubernetes-e2e-kind-kms
pull-kubernetes-e2e-gci-gce-ipvs 948c8c9 link false /test pull-kubernetes-e2e-gci-gce-ipvs
pull-kubernetes-kind-dra 948c8c9 link false /test pull-kubernetes-kind-dra
pull-kubernetes-e2e-capz-azure-disk-vmss 948c8c9 link false /test pull-kubernetes-e2e-capz-azure-disk-vmss
pull-kubernetes-e2e-capz-conformance 948c8c9 link false /test pull-kubernetes-e2e-capz-conformance
pull-kubernetes-e2e-capz-azure-file 948c8c9 link false /test pull-kubernetes-e2e-capz-azure-file
pull-kubernetes-e2e-kind-nftables 948c8c9 link false /test pull-kubernetes-e2e-kind-nftables
pull-kubernetes-e2e-capz-azure-disk 948c8c9 link false /test pull-kubernetes-e2e-capz-azure-disk
pull-kubernetes-e2e-storage-kind-disruptive 948c8c9 link false /test pull-kubernetes-e2e-storage-kind-disruptive
check-dependency-stats 948c8c9 link false /test check-dependency-stats
pull-kubernetes-e2e-gci-gce-ingress 948c8c9 link false /test pull-kubernetes-e2e-gci-gce-ingress
pull-kubernetes-e2e-capz-azure-file-vmss 948c8c9 link false /test pull-kubernetes-e2e-capz-azure-file-vmss
pull-kubernetes-e2e-gce-network-policies 948c8c9 link false /test pull-kubernetes-e2e-gce-network-policies
pull-kubernetes-node-e2e-crio-cgrpv1-dra 948c8c9 link false /test pull-kubernetes-node-e2e-crio-cgrpv1-dra
pull-kubernetes-e2e-capz-windows-master 948c8c9 link false /test pull-kubernetes-e2e-capz-windows-master
pull-kubernetes-node-e2e-containerd-1-7-dra 948c8c9 link false /test pull-kubernetes-node-e2e-containerd-1-7-dra
pull-kubernetes-e2e-gce-storage-slow 948c8c9 link false /test pull-kubernetes-e2e-gce-storage-slow
pull-kubernetes-node-e2e-crio-cgrpv2-dra 948c8c9 link false /test pull-kubernetes-node-e2e-crio-cgrpv2-dra
pull-kubernetes-e2e-gce-csi-serial 948c8c9 link false /test pull-kubernetes-e2e-gce-csi-serial
pull-kubernetes-e2e-gce-storage-snapshot 948c8c9 link false /test pull-kubernetes-e2e-gce-storage-snapshot
pull-kubernetes-unit-windows-master e21f189 link false /test pull-kubernetes-unit-windows-master
pull-kubernetes-node-e2e-containerd e21f189 link true /test pull-kubernetes-node-e2e-containerd
pull-kubernetes-unit e21f189 link true /test pull-kubernetes-unit
pull-kubernetes-verify e21f189 link true /test pull-kubernetes-verify

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
area/apiserver area/cloudprovider area/code-generation area/e2e-test-framework Issues or PRs related to refactoring the kubernetes e2e test framework area/ipvs area/kube-proxy area/kubelet area/network-policy Issues or PRs related to Network Policy subproject area/provider/gcp Issues or PRs related to gcp provider area/test cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/contains-merge-commits Indicates a PR which contains merge commits. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/bug Categorizes issue or PR as related to a bug. kind/regression Categorizes issue or PR as related to a regression from a prior release. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. release-note-action-required Denotes a PR that introduces potentially breaking changes that require user action. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/storage Categorizes an issue or PR as relevant to SIG Storage. sig/testing Categorizes an issue or PR as relevant to SIG Testing. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. triage/accepted Indicates an issue or PR is ready to be actively worked on.