PyTorch 1.0 preview (Dec 6, 2018) packages with full CUDA 10 support for your Ubuntu 18.04 x86_64 systems.
(The wheel has now been updated to the latest PyTorch 1.0 preview as of December 6, 2018.)
You’ve just received a shiny new NVIDIA Turing (RTX 2070, 2080 or 2080 Ti), or maybe even a beautiful Tesla V100, and now you would like to try out mixed precision (well mostly fp16) training on those lovely tensor cores, using PyTorch on an Ubuntu 18.04 LTS x86_64 system.
The idea is that these tensor cores chew through fp16 much faster than they do through fp32. In practice, neural networks tolerate having large parts of themselves living in fp16, although one does have to be careful with this. Furthermore, fp16 promises to save a substantial amount of graphics memory, enabling one to train bigger models.
For full fp16 support on the Turing architecture, CUDA 10 is currently the best option. Also, a number of CUDA 10-specific improvements were made to PyTorch after the 0.4.1 release.
However, PyTorch 1.0 (the first release after 0.4.1) is not quite ready yet, nor is it easy to find CUDA 10 builds of the current PyTorch 1.0 preview / PyTorch nightly.
Oh noes…
Well, fret no more!
Here you’ll be able to find a fully CUDA 10-based build (pip wheel format) of PyTorch master as of December 6, 2018 (updated!), up to and including commit b5db6ac. I’ve linked it against a fully CUDA 10-based build of MAGMA 2.4.0 as well, which I built as a conda package.
Installing and using these packages.
Ensure that you have an Ubuntu 18.04 LTS system with CUDA 10 and CUDNN installed and configured. See this great CUDA 10 howto by Puget Systems.
After this, you will also need to download cuDNN 7.4 packages for your system from the NVIDIA Developer site; an NVIDIA developer account (free signup) is required. I downloaded and installed libcudnn7_7.4.1.5-1+cuda10.0_amd64.deb and libcudnn7-dev_7.4.1.5-1+cuda10.0_amd64.deb, but you’ll probably only need the former.
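Installing the downloaded .deb packages is a straightforward dpkg affair:

```bash
# install the cuDNN runtime package (the dev package is optional)
sudo dpkg -i libcudnn7_7.4.1.5-1+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-dev_7.4.1.5-1+cuda10.0_amd64.deb
```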
Set up a suitable conda environment with Python 3.7; create and activate it with something like the following:
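A minimal sketch; the environment name pt10 is just my choice here:

```bash
# create and activate a fresh conda environment with Python 3.7
conda create -n pt10 python=3.7
conda activate pt10
```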
You can now download the PyTorch nightly wheel of 2018-12-06 (347MB) and install with:
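Assuming the wheel is sitting in your current directory, it’s a plain pip install; substitute the exact filename of the wheel you downloaded:

```bash
# install the CUDA 10 PyTorch nightly wheel into the active environment
pip install ./torch-1.0.0.dev20181206-cp37-cp37m-linux_x86_64.whl
```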
The libraries in the wheel don’t have the conda-style relative RUNPATH correctly set, so you have to set LD_LIBRARY_PATH every time you start Jupyter or any other Python code that uses PyTorch. This should work:
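Something like the following, assuming your conda environment is active so that CONDA_PREFIX points at it:

```bash
# let the dynamic linker find the conda environment's libraries (e.g. MAGMA)
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib:$LD_LIBRARY_PATH"
jupyter notebook
```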
You’re now good to go!
First tests of mixed precision training with fast.ai on Tesla V100.
I fired up a Google Compute Engine node with a Tesla V100 in Amsterdam to check that everything works.
I used the latest version of the fastai library, and specifically the callbacks.fp16 notebook, which forms part of the brilliant new fastai documentation generation system. See, for example, the generated page on the fp16 callbacks.
Below I show the MNIST example code with which I compared fp32 and fast.ai fp16 (well, mixed precision, to be precise) training.
The simple CNN trains up to 97% accuracy in 8 seconds, which is pretty quick already, but I could not see any training speed difference between fp16 and fp32. This could very well be because the network is so tiny.
However, I could confirm that the model parameters (at the very least) were all stored in fp16 floats when using the fast.ai to_fp16() Learner method.
Train CNN with fp16
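Something along these lines, following the MNIST_SAMPLE example from the fastai docs (my exact notebook cells may have differed slightly):

```python
from fastai import *
from fastai.vision import *

# grab the small MNIST sample dataset (3s vs. 7s)
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)

# build a small CNN learner and switch it to mixed precision training
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy]).to_fp16()
learn.fit_one_cycle(1)
```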
Check that the type of the parameters is half:
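One way to do this is to collect the set of parameter dtypes (a sketch; with to_fp16() this comes out as torch.float16):

```python
# with to_fp16(), all of this model's parameters are half floats
print({p.dtype for p in learn.model.parameters()})
# {torch.float16}
```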
Train CNN with fp32
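The same learner, just without the to_fp16() conversion:

```python
# identical small CNN learner, left in the default fp32
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy])
learn.fit_one_cycle(1)
```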
Check that the type of the model parameters is full float:
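And the same dtype check as before:

```python
# without to_fp16(), the parameters stay in full precision
print({p.dtype for p in learn.model.parameters()})
# {torch.float32}
```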