Resolve docker.NewResolver race condition #8748
Conversation
Thanks for sending out this quick PR to fix it, and great Go test with race detection!
Force-pushed from 3f46047 to f11aa82
Added another commit to replace a similar case with the builtin. @thaJeztah @dcantah, wondering if you feel the current fix looks good, or if you have any suggestions for improvement (either in this PR or as a follow-up). 😁
Totally agree that more improvement/refactoring can be made to this part of the code. I'm relatively new to it, so I haven't figured out a good way to make a larger improvement 😁
/cc @thaJeztah @dcantah
@djdongjin: GitHub didn't allow me to request PR reviews from the following users: kindly, ping. Note that only containerd members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Sorry for the delay. @thaJeztah and I were both trying to make sense of the resolver code in general, as we'd never dug in, but obviously it's a rabbit hole. I think this is fine given what it fixes. As a follow-up (for anyone), we should see if there's anything we can do here to avoid a bunch of copies and needing to keep remaking resolver objects, whether that be via the API or otherwise.
Force-pushed from fd98e3e to 086d968
/retest
Thanks @dcantah @thaJeztah for reviewing my PR! 🙏
Force-pushed from 12bd529 to d0fb31e
I think the changes in this PR are OK to mitigate situations where a shared headers map is inadvertently passed, but shouldn't we also fix this at the call sites? Perhaps these need to make a copy as well;
Signed-off-by: Jin Dong <djdongjin95@gmail.com>
Good point, yeah I think in general we should. That being said, I think the call sites should be fine (https://github.com/search?q=repo%3Acontainerd%2Fcontainerd+%22docker.NewResolver%28%22&type=code), given the fix now in place.

Also, around these call sites (or any use of http.Header), we should check if it satisfies the above (1) and (2). If so, we need to copy.

Some cases I found in the codebase where we did need to copy:

containerd/remotes/docker/resolver.go, lines 475 to 478 in 3c250cb

(I changed this copy to use the builtin.)

containerd/remotes/docker/resolver.go, lines 546 to 549 in 3c250cb
@thaJeztah please let me know if this makes sense or if I missed anything 😬
Force-pushed from d0fb31e to 83ff030
LGTM
Fixes #8742

It can be reproduced with the added test: