A few days ago I switched my main development machine from a 2017 15” MacBook Pro with a 4-core 2.9GHz i7-7820HQ, 16GB of RAM and 512GB of SSD to a Lenovo ThinkPad X1 Extreme with a 6-core i7-8750H, 32GB of RAM and 1.5TB of Really Fast SSD. My reasons for the switch were: the fact that my upgrade path with the MacBook would have been much more expensive.
One of the great new F# tools in .NET Core 3 Preview 3 is F# Interactive as a pure .NET Core application. To use dotnet fsi in your Visual Studio Code with the Ionide F# IDE plugin instead of the fsharpi binary, add the following to your user settings.json:

    "FSharp.fsacRuntime": "netcore",
    "FSharp.fsiFilePath": "/usr/local/share/dotnet/dotnet",
    "FSharp.fsiExtraParameters": ["fsi"]

Remember to replace fsiFilePath with wherever your .NET Core 3 dotnet binary is installed.
On Wednesday April 3, 2019, I finished migrating this site from WordPress to Hugo. An important part of this project was finding a new system for handling blog post comments. On my personal blog, I am able to use the free and ad-free tier of Disqus, but because vxlabs.com is strictly speaking a company site (even though 100% of the posts are non-commercial), that won’t work here. Furthermore, the $9 / month price tag of the lowest Disqus tier is not really justifiable given the relatively small number of comments being handled.
TL;DR: For best results with mixed precision training, use NVIDIA’s Automatic Mixed Precision together with fastai, and remember to set any epsilons, for example in the optimizer, correctly. Background Newer NVIDIA GPUs such as the consumer RTX range, the Tesla V100 and others have hardware support for half-precision / fp16 tensors. This is interesting, because many deep neural networks still function perfectly if you store most of their parameters using the far more compact 16-bit floating point precision.
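The epsilon point is easy to trip over: Adam’s default eps of 1e-8 underflows to zero in fp16, which is one of the ways a mixed-precision run can blow up. A minimal PyTorch sketch (the 1e-4 value is illustrative, not the post’s exact setting):

```python
import torch
import torch.nn as nn

# Adam's default epsilon (1e-8) lies below fp16's smallest subnormal
# (~6e-8), so it rounds to zero in half precision:
print(torch.tensor(1e-8, dtype=torch.float16))  # tensor(0., dtype=torch.float16)

# Passing a larger epsilon explicitly keeps the denominator of the
# Adam update from vanishing when parts of training run in fp16.
model = nn.Linear(10, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3, eps=1e-4)
```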
I have prepared a simple Ansible script which will enable you to convert a clean Ubuntu 18.04 image (as supplied by Google Compute Engine or PaperSpace) into a CUDA 10, PyTorch 1.0 preview, fastai 1.0.x, miniconda3 powerhouse, ready to live the (mixed-precision!) deep learning dream. I built this script specifically in order to be able to do mixed-precision neural network training on NVIDIA’s TensorCores. It currently makes use of the vxlabs.
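A quick sanity check on the resulting machine (assuming provisioning finished and you are inside the conda environment; this is not part of the Ansible script itself) could look like this:

```python
import torch
import fastai

# Confirm the provisioned stack: library versions and GPU visibility.
print("torch", torch.__version__)
print("fastai", fastai.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```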
In a previous post I showed how to get Palantir’s Python Language Server working together with Emacs and lsp-mode. In this post, we look at the brand new elephant in the room, Microsoft’s own, arguably far more powerful, Python Language Server, and how to integrate it with Emacs. Motivation: Since that previous post on Palantir’s language server, I’ve been using Emacs far more intensively for Python coding in tmux on remote machines with GPUs for deep learning.
(The wheel has now been updated to the latest PyTorch 1.0 preview as of December 6, 2018.) You’ve just received a shiny new NVIDIA Turing (RTX 2070, 2080 or 2080 Ti), or maybe even a beautiful Tesla V100, and now you would like to try out mixed precision (well mostly fp16) training on those lovely tensor cores, using PyTorch on an Ubuntu 18.04 LTS x86_64 system. The idea is that these tensor cores chew through fp16 much faster than they do through fp32.
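For a flavour of what fp16 looks like in plain PyTorch (a minimal sketch assuming a CUDA-capable card; real mixed-precision training also keeps fp32 master weights, which this skips):

```python
import torch
import torch.nn as nn

# Put both the model and its input on the GPU in half precision so the
# matrix multiplies are eligible to run on the tensor cores.
model = nn.Linear(1024, 1024).cuda().half()
x = torch.randn(32, 1024, device="cuda", dtype=torch.float16)
with torch.no_grad():
    y = model(x)
print(y.dtype)  # torch.float16
```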