As I mentioned in a previous blog post, I switched from a 2017 15" MacBook Pro to a Thinkpad X1 Extreme in April of this year.

Although I’m still using Windows on this machine in order to see what all the fuss is about (BTW, WSL is amazing), the Linux itch could not be ignored anymore.

What makes this laptop slightly more challenging than some of my previous Linux laptop and Optimus (dynamically switching graphics) adventures is that in this case the NVIDIA dGPU is hard-wired to the external outputs (HDMI and DisplayPort over Thunderbolt).

My requirements for Linux on the Thinkpad were as follows:

  • Disconnection and reconnection to an external monitor should be supported, without me having to reboot or even log out and back in again.
  • It should be possible to use the NVIDIA dGPU when required, for example for PyTorch and other CUDA-dependent applications.
  • The laptop should be as battery-efficient as possible, so NVIDIA should only be active when absolutely necessary.
  • Suspend and resume should also work.

In short, I should be able to use the Thinkpad with Linux like I do when it runs Windows.

To my surprise, Manjaro Linux, in contrast to Ubuntu 19.04, made it possible to meet all of the requirements listed above.

Why Manjaro?

Ubuntu is my go-to distribution, but with the Thinkpad I was not able to get very far with the latest release, 19.04.

Attempting to switch between the NVIDIA GPU and integrated graphics with prime-select almost never worked. I sat staring at a frozen desktop (right after login) more times than I wish to remember. Bumblebee was even less successful. The whole adventure ended when the only desktop I could get was 640x480 or something similar.

(I have written before about some of the issues caused by Ubuntu’s gpu-manager. Too much magic.)

For years now, I have been impressed by the level of detail on the helpful Arch Linux wiki, and the great deal of effort its developers and users put into the distribution.

However, Arch does require more hands-on time from its users than I currently have available.

Fortunately, Manjaro is an Arch-based distribution that has already taken care of many of these details, a fact that further motivated my decision to try it.

Important update on 2019-08-03: Choose KDE and choose kernel 5.2

I started with the Gnome-based Manjaro. Unfortunately, gnome-shell has the nasty habit of latching on to the /dev/nvidia* devices when the NVIDIA comes online, after which it becomes difficult to switch the NVIDIA back off again without logging out and in again.

The whole point of this exercise was to avoid that inconvenience.

It took me about 30 minutes to install the necessary Manjaro packages to convert the installation to KDE (see the wiki page on the topic).
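For reference, the core of that conversion is roughly the following (a sketch only; the exact package list is on the wiki, and this assumes sddm as KDE's display manager, replacing Gnome's gdm):

sudo pacman -S plasma kde-applications sddm
sudo systemctl disable gdm
sudo systemctl enable sddm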

KDE never latches on to /dev/nvidia*, and the laptop switches the NVIDIA off as soon as I kill intel-virtual-output before monitor disconnection.
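If you want to check for yourself which processes, if any, are keeping the NVIDIA device nodes open (and therefore the dGPU powered), lsof will tell you:

sudo lsof /dev/nvidia*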

Furthermore, the ThinkPad Thunderbolt 3 Workstation Dock at work gave me some issues, all of which were solved when I upgraded from Linux kernel 4.19 to 5.2.4.
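On Manjaro, switching to a newer kernel series is done with the mhwd-kernel tool; something along these lines should do it (linux52 being the name of the 5.2 kernel package at the time of writing):

sudo mhwd-kernel -i linux52

Reboot afterwards and confirm the running version with uname -r.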

Setup instructions.

The following two subsections explain step-by-step how to get Manjaro working with hybrid graphics on the Thinkpad X1 Extreme.

Configure graphics hardware with mhwd.

Use the useful little Manjaro Hardware Detection command, mhwd, to set up bumblebee-based hybrid graphics:

sudo mhwd -i pci video-hybrid-intel-nvidia-bumblebee

Ensure that your user account belongs to the bumblebee group:

sudo gpasswd -a $USER bumblebee

After this, it is probably a good idea to reboot.
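Once you're back, it does no harm to double-check which driver configuration mhwd actually installed; video-hybrid-intel-nvidia-bumblebee should show up in the list of installed PCI configs:

mhwd -li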

The bumblebee daemon, group permissions and optirun.

Check that the bumblebeed service is running:

sudo systemctl status bumblebeed
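If the service is not active, enabling and starting it (it is installed by the bumblebee package) should be all that's needed:

sudo systemctl enable --now bumblebeed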

If it’s running, you can try the following test:

optirun glxgears -info

You should see the well-known 3D rotating gears example, and on stdout you should see information about the graphics card, including a really long list of GL extensions.

The first few lines should look like this:

GL_RENDERER   = GeForce GTX 1050 Ti with Max-Q Design/PCIe/SSE2
GL_VERSION    = 4.6.0 NVIDIA 430.26
GL_VENDOR     = NVIDIA Corporation

If instead you see something like the following:

[14482.502494] [ERROR]You've no permission to communicate with the Bumblebee daemon. Try adding yourself to the 'bumblebee' group
[14482.502533] [ERROR]Could not connect to bumblebee daemon - is it running?

… check that your user currently has the bumblebee group active by typing:

groups

If you don’t see bumblebee in the output list, “login” to the group by typing the following:

newgrp bumblebee

… and then trying the optirun command again.

Connecting an external display.

Up to now, you will have been working on the laptop’s built-in screen.

Also, the NVIDIA should be switched off, which you can confirm as follows:

$ cat /proc/acpi/bbswitch 
0000:01:00.0 OFF

When you connect a monitor to the HDMI or Thunderbolt 3 output of the ThinkPad, it will be connected directly to the NVIDIA's output.

We need some sort of trick that enables the Intel graphics driver to use the NVIDIA as a virtual output.

This is exactly what the Intel-developed tool intel-virtual-output does!
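The tool is part of the Intel DDX driver; as far as I can tell, on Manjaro it is provided by the xf86-video-intel package, so if the command is missing you can install that:

sudo pacman -S xf86-video-intel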

Fixing xorg.conf.nvidia

Before continuing, make two changes to the file /etc/bumblebee/xorg.conf.nvidia.

In the “Device” section, change Option "UseEDID" to "true". By default it is set to "false", which can prevent the bumblebee X server from correctly detecting the resolution of your external display.

Also in the "Device" section, comment out the line with Option "ConnectedMonitor" "DFP". With that option in place, the external monitor connected via my ThinkPad Thunderbolt 3 workstation dock at work would not come on, and the log showed the following error:

[    77.197] (EE) NVIDIA(GPU-0): Unable to add conservative default mode "nvidia-auto-select".
[    77.197] (EE) NVIDIA(GPU-0): Unable to add "nvidia-auto-select" mode to ModePool.

It was trying to use DFP-0.3, whilst the monitor in this case is named DP-1.

At home, I have the monitor connected directly to the Thunderbolt 3 port of the ThinkPad. In that case, the monitor was indeed named DFP-1 (or somesuch), so commenting out the ConnectedMonitor option was not necessary.
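If you are not sure what your monitor is called on the NVIDIA side, the secondary X server that bumblebee starts (display :8) should normally log to /var/log/Xorg.8.log, and the NVIDIA driver lists the outputs it detected there; a quick grep while that server is up is usually enough:

grep -i connected /var/log/Xorg.8.log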

My complete xorg.conf.nvidia looks like this:

Section "ServerLayout"
    Identifier "Layout0"
    Option "AutoAddDevices" "false"
EndSection

Section "Device"
    Identifier  "Device1"
    Driver      "nvidia"
    BusID       "PCI:1:0:0"
    VendorName "NVIDIA Corporation"
    Option "NoLogo" "true"
    # activate EDID so NVIDIA can correctly identify display
    Option "UseEDID" "true"
    # comment out to let NVIDIA figure it out by itself
    # http://us.download.nvidia.com/XFree86/Linux-x86/346.47/README/xconfigoptions.html
    #Option "ConnectedMonitor" "DFP"
EndSection

Activating the external display.

Now, with the monitor connected, do the following:

optirun -b none intel-virtual-output -b

This will run an additional X server for the NVIDIA, using the proprietary NVIDIA drivers, and then run intel-virtual-output against that display.

The -b none specifies that it should not make use of any bridge mechanism (such as primus or VirtualGL) to relay that command's drawn output back to our main X server.

This is because intel-virtual-output works at a lower level: it hooks up the NVIDIA's outputs, to which the external connectors are directly wired, as virtual displays on the Intel (primary) X server.

The upshot of this is that your currently running Intel-GPU X server now has an additional display that you can manage using all of the existing tools!

Just to ram the point home, you can drag any windows across from the laptop’s display to the external and back.
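If you would like to confirm this, plain xrandr on your normal display should now list the external monitor as one of the Intel driver's VIRTUAL outputs (the exact name, VIRTUAL1 or similar, may vary):

xrandr | grep -i virtual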

Before you disconnect your laptop, simply kill the intel-virtual-output process, which will bring all of your windows back to the laptop’s built-in display.
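Something like the following does it (the -f flag is needed because pkill normally matches against the process name, which the kernel truncates to 15 characters), after which you can confirm that the dGPU has powered down again:

pkill -f intel-virtual-output
cat /proc/acpi/bbswitch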

Running OpenGL apps in connected mode.

With the external monitor connected, you can run any OpenGL apps using the NVIDIA dGPU by prepending DISPLAY=:8.

For example:

DISPLAY=:8 glxgears -info

Running CUDA apps.

In all cases (connected and disconnected), running CUDA apps is as simple as the following PyTorch mini-demo:

$ optirun -b none python
Python 3.7.3 | packaged by conda-forge | (default, Jul  1 2019, 21:52:21) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.get_device_name(0)
'GeForce GTX 1050 Ti with Max-Q Design'

Note that here we use -b none again, as we only want optirun to activate the NVIDIA (if it is powered down) and set up the correct library paths so that the process we are invoking is able to talk directly to the GPU.
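As an additional sanity check, nvidia-smi (installed as part of the NVIDIA driver utilities) can be wrapped in optirun in exactly the same way; it should report the GPU and any processes currently using it:

optirun -b none nvidia-smi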

Power consumption.

One of the best things about this setup is how little power the laptop consumes when the NVIDIA is powered down, which should happen automatically when you have no optirun sessions running.

After disconnecting from the external monitor, and double-checking with cat /proc/acpi/bbswitch that the NVIDIA is off, I get the following TLP output with Manjaro gnome-shell, Emacs 26.2 in graphics mode, gnome-terminal with three tabs, and Chromium with four tabs:

$ sudo tlp-stat -b
--- TLP 1.2.2 --------------------------------------------

+++ ThinkPad Battery Status: BAT0 (Main / Internal)
/sys/class/power_supply/BAT0/energy_full_design             =  80400 [mWh]
/sys/class/power_supply/BAT0/energy_full                    =  77760 [mWh]
/sys/class/power_supply/BAT0/energy_now                     =  67530 [mWh]
/sys/class/power_supply/BAT0/power_now                      =   5274 [mW]
/sys/class/power_supply/BAT0/status                         = Discharging

That's not much more than 5 W of consumption at idle, attained with minimal configuration effort on my side.
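If you want to keep an eye on the power draw while working, the simplest approach is to poll the same sysfs value that TLP reports (in mW on this machine):

watch -n 5 cat /sys/class/power_supply/BAT0/power_now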

I get similar results with KDE instead of Gnome.

Conclusions.

I was quite pleasantly surprised by how well Manjaro runs on the Thinkpad X1 Extreme.

Although I would like a few more months of testing, this configuration could easily work as one's daily driver.

The fact that one can have a workstation-class Linux-running laptop with low idle power consumption, yet with the ability to activate CUDA hardware when required, is compelling.