
[ONNX] Add linspace symbolic #58854

Merged
merged 9 commits into pytorch:onnx_ms_1 on Jun 11, 2021

Conversation

shubhambhokare1 (Collaborator) commented May 24, 2021

facebook-github-bot (Contributor) commented May 24, 2021

💊 CI failures summary and remediations

As of commit 2a45070 (more details on the Dr. CI page):


  • 3/3 failures possibly* introduced in this PR
    • 1/3 non-scanned failure(s)

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/2)

Step: "Run tests"

Jun 10 19:41:54 processing existing schema:  alltoall_base(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor _2, int[] _3, int[] _4) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jun 10 19:41:54 processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jun 10 19:41:54 processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jun 10 19:41:54 processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jun 10 19:41:54 processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jun 10 19:41:54 processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jun 10 19:41:54 processing existing schema:  __init__(__torch__.torch.classes.dist_c10d.frontend _0) -> (NoneType _0)
Jun 10 19:41:54 processing existing schema:  new_process_group_helper(__torch__.torch.classes.dist_c10d.frontend _0, int _1, int _2, int[] _3, str _4, __torch__.torch.classes.dist_c10d.Store _5, str? _6, int _7) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
Jun 10 19:41:54 processing existing schema:  get_process_group_by_name(__torch__.torch.classes.dist_c10d.frontend _0, str _1) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
Jun 10 19:41:54 processing existing schema:  get_name_of_process_group(__torch__.torch.classes.dist_c10d.frontend _0, __torch__.torch.classes.dist_c10d.ProcessGroup _1) -> (str _0)
Jun 10 19:41:54 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
Jun 10 19:41:54 
Jun 10 19:41:54 Broken ops: [
Jun 10 19:41:54 	aten::scatter_add.out(Tensor self, int dim, Tensor index, Tensor src, *, Tensor(a!) out) -> (Tensor(a!))
Jun 10 19:41:54 	aten::retains_grad(Tensor self) -> (bool)
Jun 10 19:41:54 	aten::scatter.src_out(Tensor self, int dim, Tensor index, Tensor src, *, Tensor(a!) out) -> (Tensor(a!))
Jun 10 19:41:54 	aten::scatter.value_out(Tensor self, int dim, Tensor index, Scalar value, *, Tensor(a!) out) -> (Tensor(a!))
Jun 10 19:41:54 	aten::scatter.reduce(Tensor self, int dim, Tensor index, Tensor src, *, str reduce) -> (Tensor)
Jun 10 19:41:54 	aten::scatter.reduce_out(Tensor self, int dim, Tensor index, Tensor src, *, str reduce, Tensor(a!) out) -> (Tensor(a!))
Jun 10 19:41:54 	aten::scatter.value_reduce(Tensor self, int dim, Tensor index, Scalar value, *, str reduce) -> (Tensor)
Jun 10 19:41:54 	aten::scatter.value_reduce_out(Tensor self, int dim, Tensor index, Scalar value, *, str reduce, Tensor(a!) out) -> (Tensor(a!))

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (2/2)

Step: "Build"

Jun 10 19:31:05 AssertionError: Found an invalid operator name: scatter.src_out
Jun 10 19:31:05   File "/var/lib/jenkins/workspace/tools/codegen/gen_backend_stubs.py", line 240, in <module>
Jun 10 19:31:05     main()
Jun 10 19:31:05   File "/var/lib/jenkins/workspace/tools/codegen/gen_backend_stubs.py", line 133, in main
Jun 10 19:31:05     run(options.source_yaml, options.output_dir, options.dry_run)
Jun 10 19:31:05   File "/var/lib/jenkins/workspace/tools/codegen/gen_backend_stubs.py", line 150, in run
Jun 10 19:31:05     parsed_backend_yaml = parse_backend_yaml(source_yaml, grouped_native_functions, backend_indices)
Jun 10 19:31:05   File "/var/lib/jenkins/workspace/tools/codegen/gen_backend_stubs.py", line 84, in parse_backend_yaml
Jun 10 19:31:05     backend_idx = create_backend_index(supported, backend_key)
Jun 10 19:31:05   File "/var/lib/jenkins/workspace/tools/codegen/gen_backend_stubs.py", line 65, in create_backend_index
Jun 10 19:31:05     assert op_name in native_functions_map, f"Found an invalid operator name: {op_name}"
Jun 10 19:31:05 AssertionError: Found an invalid operator name: scatter.src_out
Jun 10 19:31:05 ~/workspace/xla
Jun 10 19:31:05 + OPTS=()
Jun 10 19:31:05 + getopts O: OPTION
Jun 10 19:31:05 + case $OPTION in
Jun 10 19:31:05 + for i in ${OPTARG}
Jun 10 19:31:05 + OPTS+=("--cxxopt=${i}")
Jun 10 19:31:05 + getopts O: OPTION
Jun 10 19:31:05 + shift 2
Jun 10 19:31:05 + CMD=install
Jun 10 19:31:05 ++ dirname /var/lib/jenkins/workspace/xla/build_torch_xla_libs.sh

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

BowenBao (Collaborator) left a comment

looks good overall, please rebase

@BowenBao BowenBao merged commit e2a1236 into pytorch:onnx_ms_1 Jun 11, 2021
BowenBao pushed a commit that referenced this pull request Jun 18, 2021
* Adds support for linspace op 
* Modifies arange symbolic in opset 9 to replicate the same behavior in which dtype is determined (similar to opset 11) as in https://pytorch.org/docs/stable/generated/torch.arange.html
* Enabled some arange unit tests which were disabled for opset 9

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
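The commit message above says linspace is now exportable by lowering it to an arange-style computation. As a hedged sketch of the underlying arithmetic only (plain Python, not the actual symbolic from this PR; the helper name is hypothetical), `linspace(start, end, steps)` can be rewritten as `start + arange(steps) * (end - start) / (steps - 1)`:

```python
def linspace_via_arange(start, end, steps):
    """Sketch: `steps` evenly spaced values from start to end (inclusive),
    built from an arange-style index sequence 0, 1, ..., steps-1.
    Hypothetical helper, not the actual symbolic function from this PR."""
    if steps == 1:
        # A single step degenerates to just the start value.
        return [float(start)]
    step = (end - start) / (steps - 1)
    return [start + i * step for i in range(steps)]

print(linspace_via_arange(0.0, 10.0, 5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```

This mirrors what `torch.linspace(0, 10, 5)` produces, which is why a symbolic built on top of an existing arange export can cover linspace as well.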
facebook-github-bot pushed a commit that referenced this pull request Jul 8, 2021
Summary:
Pull Request resolved: #60246

* Adds support for linspace op
* Modifies arange symbolic in opset 9 to replicate the same behavior in which dtype is determined (similar to opset 11) as in https://pytorch.org/docs/stable/generated/torch.arange.html
* Enabled some arange unit tests which were disabled for opset 9

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494911

Pulled By: SplitInfinity

fbshipit-source-id: bddff18a90f8a78121c8ecdd1dafc15c69962d66

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
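For context on the arange half of this change: the linked torch.arange documentation states that when no dtype is given, the output dtype is inferred from the arguments: if any of start, end, or step is floating point, the default dtype is used; otherwise the result is int64. A minimal illustrative sketch of that rule (the helper name and string dtype labels are assumptions for illustration, not PyTorch API):

```python
def infer_arange_dtype(start, end, step, default_dtype="float32"):
    """Sketch of the dtype rule documented for torch.arange:
    any floating-point argument -> the default dtype,
    all-integer arguments      -> int64.
    Illustrative only; not the actual PyTorch implementation."""
    if any(isinstance(a, float) for a in (start, end, step)):
        return default_dtype
    return "int64"
```

The opset 9 symbolic replicating this rule is what lets the previously disabled arange unit tests pass for opset 9 as well as opset 11.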