add operation list for AutocastCPU #63534
Conversation
💊 CI failures summary and remediations

As of commit 9caeced (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI.
@puririshi98 @ngimel We have changed the default data type for `AutocastCPU` from `float16` to `bfloat16` in this PR.
@ngimel PTAL

LGTM, cc @puririshi98 @mcarilli, maybe we should adjust the bfloat16 op list for CUDA AMP as well (although we probably don't have the infrastructure at this point to support different op lists depending on the dtype?)
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
In this PR:

* We have changed the default dtype of `AutocastCPU` from `float16` to `bfloat16`, as discussed in https://github.com/pytorch/pytorch/pull/61002.
* We also update the list of operations that need casting to `lower_precision_fp` or `float32`.

Stack from ghstack:
Differential Revision: D30644914
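
A minimal usage sketch of the new default is shown below. It assumes a PyTorch build with CPU autocast support and uses `torch.cpu.amp.autocast` together with `nn.Linear`, which is assumed here to be on the `lower_precision_fp` list; the authoritative op lists are the ones in this PR's diff.

```python
import torch

# Sketch only: with AutocastCPU's default dtype now bfloat16, ops on the
# lower_precision_fp list run in bfloat16 inside the autocast region, while
# ops on the float32 list keep full precision.
model = torch.nn.Linear(16, 4)
x = torch.randn(8, 16)

with torch.cpu.amp.autocast():  # dtype defaults to torch.bfloat16 after this PR
    y = model(x)

print(y.dtype)           # expected: torch.bfloat16 (linear assumed to be on the lower-precision list)
print(model.weight.dtype)  # parameters themselves stay torch.float32
```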