Fix pytorch-extension and re-integrate into text2text for improved performance #23

Open
artitw opened this issue Jul 31, 2021 · 2 comments

Comments

artitw (Owner) commented on Jul 31, 2021

Training and inference performance could be better. We need to update and test https://github.com/artitw/apex

artitw changed the title from "Fix pytorch-extension and re-integrate into text2text for faster training and inference" to "Fix pytorch-extension and re-integrate into text2text for improved performance" on Jul 31, 2021
johnanisere commented

I'm interested in this. Can you give more details?

artitw (Owner, Author) commented on Aug 14, 2021

We used to include a PyTorch extension (APEX) in the demo notebook to speed up model performance on GPUs, but removed it due to compatibility issues. Here is the snippet we would run before anything else:

# Point the build at the local CUDA toolkit (10.1 at the time)
export CUDA_HOME=/usr/local/cuda-10.1
# Build the C++ and CUDA extensions so the fused kernels are available
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" pytorch-extension

To add it back:

  1. We can test to make sure that the original APEX works in the demo notebook (a verification sketch follows this list).
  2. If so, we could then update the PyPI package so that it is pip installable.
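
For step 1, something like the following could serve as a smoke test in the notebook once the install above completes. This is only a minimal sketch, assuming the fork at https://github.com/artitw/apex keeps the upstream NVIDIA APEX amp API and that a CUDA-capable runtime is available; the model and optimizer here are placeholders, not part of text2text.

import torch
from apex import amp  # import fails here if the C++/CUDA extensions did not build

# Placeholder model/optimizer just to exercise mixed precision end to end
model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# "O1" mixed precision; an error here usually means the extension build
# does not match the installed PyTorch/CUDA versions
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(4, 10).cuda()).sum()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
print("APEX mixed-precision smoke test passed")

If that runs cleanly in the demo notebook, step 2 is mostly a packaging exercise.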

Let us know if you decide to work on this, and we can assign you to it.
