encoding: do not cache large encodeState #57080
base: master
Conversation
This PR (HEAD: 1ffe748) has been imported to Gerrit for code review. Please visit https://go-review.googlesource.com/c/go/+/455115 to see it.
Message from Gopher Robot: Patch Set 1: Congratulations on opening your first change. Thank you for your contribution! Next steps: Most changes in the Go project go through a few rounds of revision. During May-July and Nov-Jan the Go project is in a code freeze. Please don't reply on this GitHub thread. Visit golang.org/cl/455115.
@@ -312,6 +312,14 @@ func newEncodeState() *encodeState {
	return &encodeState{ptrSeen: make(map[any]struct{})}
}

func cacheEncodeState(e *encodeState) {
	// caching large objects is not memory efficient.
	if e.Buffer.Cap() > 32768 {
Some limit seems like a great idea, but how did you come up with this specific number?
I am open to any number <= 1MiB, which is good enough for the apiserver issue that I am trying to fix.
I recommend doing some kind of search process to recommend an optimal value or a tighter range. E.g. binary search through possible values to find the lowest and highest ones that fix your problem.
I suspect your particular test won't be able to narrow this down much; you'll need a more realistic object size distribution. I'm not sure where to get that from.
Actually, I just read this code more and I agree that caching large objects is going to be horribly memory inefficient because they can be reused to store small objects, which will retain all that memory for little reason. I'm not sure where a good threshold is but it may be even smaller than this.
I think you have compared the benchmarks backwards? The diff in the OP appears to be making everything much worse :)
...actually maybe the benchmarks weren't backwards, in which case this threshold is too small.
Message from Joseph Tsai: Patch Set 1: (1 comment) Please don’t reply on this GitHub thread. Visit golang.org/cl/455115. |
Message from Eric Lin: Patch Set 1: (1 comment) Please don’t reply on this GitHub thread. Visit golang.org/cl/455115. |
Message from Joseph Tsai: Patch Set 1: (1 comment) Please don’t reply on this GitHub thread. Visit golang.org/cl/455115. |
It's not efficient to cache large encodeState objects (>32k) because doing so can build up significant memory usage in a short period of time under high concurrency.
I ran the benchmarks and, as expected, some tests got worse because the test data contains values larger than 32k.
Fix kubernetes/kubernetes#114276
Benchmark