

add operation list for AutocastCPU #63534

Conversation

@leslie-fang-intel (Collaborator) commented Aug 18, 2021

In this PR:

  • We changed the default dtype of `AutocastCPU` from `float16` to `bfloat16`, as discussed in https://github.com/pytorch/pytorch/pull/61002.
  • We also updated the lists of operations that are cast to `lower_precision_fp` or `float32` (see the usage sketch below).
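
A minimal usage sketch, not part of this PR's diff, using the `torch.cpu.amp.autocast` context manager available at the time; the toy model is an illustrative assumption:

```python
import torch

# Illustrative toy module; the PR itself only changes the default
# autocast dtype on CPU and the dispatch-level op lists.
model = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)

# After this PR, CPU autocast defaults to bfloat16 (previously float16),
# so no explicit dtype argument is needed.
with torch.cpu.amp.autocast():
    y = model(x)

# Ops on the lower_precision_fp list (e.g. linear/matmul) run in bfloat16;
# ops on the float32 list stay in full precision.
print(y.dtype)  # torch.bfloat16
```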

Stack from ghstack:

Differential Revision: D30644914

@facebook-github-bot (Contributor) commented Aug 18, 2021

💊 CI failures summary and remediations

As of commit 9caeced (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

leslie-fang-intel added a commit that referenced this pull request Aug 19, 2021
ghstack-source-id: 368b416b290e3ba25a8598fbef918785528275e8
Pull Request resolved: #63534
leslie-fang-intel added a commit that referenced this pull request Aug 19, 2021
ghstack-source-id: d61232bcb56a01c3cd21cec3fea411e1aaca60ee
Pull Request resolved: #63534
@leslie-fang-intel (Collaborator, Author) commented

@puririshi98 @ngimel We have changed the default data type for AutocastCPU to bfloat16 in this PR.

leslie-fang-intel added a commit that referenced this pull request Aug 19, 2021
ghstack-source-id: 1e42ee99232957247c4a0833a5b9a007c6d3e9e9
Pull Request resolved: #63534
@leslie-fang-intel added the intel priority label (matters to Intel architecture performance-wise) Aug 19, 2021
@VitalyFedyunin requested a review from ngimel August 24, 2021 17:45
@VitalyFedyunin (Contributor) commented

@ngimel PTAL

@ngimel (Collaborator) commented Aug 24, 2021

LGTM. cc @puririshi98 @mcarilli: maybe we should adjust the bfloat16 list for CUDA AMP as well (although we probably don't have the infrastructure at this point to maintain different op lists depending on the dtype?).
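
For context, a hedged sketch of opting CUDA AMP into bfloat16; the explicit `dtype` argument shown here belongs to the later unified `torch.autocast` API and is an assumption for illustration, not something this PR adds:

```python
import torch

# Assumes a CUDA-enabled build; purely illustrative, not from this PR.
model = torch.nn.Linear(8, 8).cuda()
x = torch.randn(2, 8, device="cuda")

# CUDA autocast defaults to float16; passing dtype=torch.bfloat16 opts in
# to bfloat16 instead. Per-dtype op lists (the point raised above) would
# be a separate piece of infrastructure.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```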

@ezyang (Contributor) commented Aug 30, 2021

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented

@ezyang merged this pull request in 09dfaa0.

@facebook-github-bot deleted the gh/leslie-fang-intel/1/head branch September 3, 2021 14:19
Labels
cla signed · intel priority · Merged · open source