The proprietary Nvidia Linux drivers are known for excellent performance and support for the company's latest products. In this post, we look at some general tips and fixes for using these popular binary blobs within a Linux system.

Screen Tearing

Symptoms: The picture in fast-paced videos or games looks torn horizontally. Moving windows around quickly also produces screen tearing.

If you are experiencing tearing or graphical artifacts with the proprietary Nvidia Linux modules, there may be something you can do about it. Tearing happens when a graphics card's output is not in sync with its display's refresh rate. This can be a normal consequence of taxing software that your graphics card cannot keep up with. However, if you are experiencing these symptoms during normal use with Vsync enabled, chances are it's a configuration issue. I've noticed screen tearing in both the Gnome and KDE desktop environments only with the compositor enabled.

Fix: Set the ForceCompositionPipeline MetaMode option. To apply this temporarily and see whether it fixes your issue, run the following command:

nvidia-settings --assign CurrentMetaMode="DFP-6: nvidia-auto-select { ForceCompositionPipeline = On }"

Replace DFP-6 with the name of your display. You can find the display name by running the nvidia-settings utility. After running this command, the screen will blank for a moment while the setting is applied. To make the change permanent, add the following to the Screen section of your xorg.conf or xorg.conf.d file:

Option     "metamodes" "DFP-6: nvidia-auto-select { ForceCompositionPipeline = On }"

Again, make sure to replace DFP-6 with the name of your actual display.
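For reference, a minimal Screen section with this option in place might look like the following sketch; the Identifier, Device, and display name (DFP-6) are examples and will differ on your system:

```
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "metamodes" "DFP-6: nvidia-auto-select { ForceCompositionPipeline = On }"
EndSection
```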

Kernel Modesetting

KMS is great to have with the open-source equivalent drivers and is often enabled by default. KMS can be enabled with the current version of Nvidia's proprietary drivers but is experimental and comes with some caveats. One benefit is that the kernel can manage the display properties directly, and switching between TTYs becomes faster and safer. The downside is that the Nvidia drivers do not currently support an accelerated framebuffer, which makes high-resolution consoles abysmal to work with. For instance, enabling a high-resolution framebuffer will make scrolling text through a TTY extremely slow.

I would suggest you enable KMS temporarily by editing the kernel command line directly in the GRUB menu when you boot your machine. Add the following to your kernel command-line arguments:

nvidia-drm.modeset=1

See the almighty Arch wiki or the Nvidia documentation for more details.
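To make KMS persistent across reboots (assuming you boot with GRUB), the same parameter can be appended in /etc/default/grub; the exact existing contents of GRUB_CMDLINE_LINUX_DEFAULT will vary per system:

```
# /etc/default/grub -- append nvidia-drm.modeset=1 to the existing arguments
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia-drm.modeset=1"
```

Then regenerate the configuration with grub-mkconfig -o /boot/grub/grub.cfg.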

GRUB

As mentioned above, the Nvidia module does not currently support an accelerated framebuffer, which makes redraw on high-resolution TTYs extremely slow. If you are experiencing slow drawing while GRUB starts, I recommend you edit your /etc/default/grub file to set GRUB_TERMINAL_OUTPUT=console. This effectively disables the high-resolution terminal that shows the GRUB menu when your computer starts, so you will at least no longer have a long delay while GRUB redraws its menu. You may also wish to set GRUB_GFXPAYLOAD_LINUX=text if you have KMS enabled so that the kernel can set the display mode properly. Finally, make sure to run grub-mkconfig -o /boot/grub/grub.cfg to install your new configuration to /boot.
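Taken together, the relevant lines in /etc/default/grub would look something like this (any other lines in the file stay as they are):

```
# /etc/default/grub
GRUB_TERMINAL_OUTPUT=console
GRUB_GFXPAYLOAD_LINUX=text
```

Remember that changes here only take effect after running grub-mkconfig -o /boot/grub/grub.cfg.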

Coolbits

Setting the Coolbits option in the Device section of your xorg.conf file will enable some advanced features such as fan speed adjustment and overclocking. Here is a rundown of the current possible values:

  • When "2" (Bit 1) is set in the "Coolbits" option value, the NVIDIA driver will attempt to initialize SLI when using GPUs with different amounts of video memory.
  • When "4" (Bit 2) is set in the "Coolbits" option value, the nvidia-settings Thermal Monitor page will allow configuration of GPU fan speed, on graphics boards with programmable fan capability.
  • When "8" (Bit 3) is set in the "Coolbits" option value, the PowerMizer page in the nvidia-settings control panel will display a table that allows setting per-clock domain and per-performance level offsets to apply to clock values. This is allowed on certain GeForce GPUs. Not all clock domains or
    performance levels may be modified. On GPUs based on the Pascal architecture the offset is applied to all performance levels.
  • When "16" (Bit 4) is set in the "Coolbits" option value, the nvidia-settings command line interface allows setting GPU overvoltage. This is allowed on certain GeForce GPUs.
  • The default for this option is 0 (unsupported features are disabled).

Add the values of the options you wish to enable together to get a single Coolbits value. Check the official documentation or /usr/share/doc/nvidia/html/xconfigoptions.html for the up-to-date values to set for the options you would like to enable.
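As a quick sketch of the bit math: to enable fan control (4), clock offsets (8), and overvoltage (16) together, the values are simply summed:

```shell
# Coolbits is the sum of the bit values you want enabled:
# fan control (4) + clock offsets (8) + overvoltage (16)
coolbits=$((4 + 8 + 16))
echo "$coolbits"
```

The resulting value then goes in the Device section of xorg.conf, e.g. Option "Coolbits" "28".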

Driver Persistence

You can enable driver persistence by starting the nvidia-persistenced daemon at boot. This user-space program ensures that driver state is persisted, which is useful when doing things like CUDA development. It is particularly helpful with asynchronous job scheduling, where clients are repeatedly loaded and unloaded on a device, because it can prevent having to reinitialize the device each time. See the Nvidia documentation for more details.
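On a systemd-based distribution, the daemon typically ships with a unit file, so enabling it at boot is one command (the unit name may vary slightly between packagings):

```shell
# Enable the persistence daemon now and at every boot
sudo systemctl enable --now nvidia-persistenced.service
```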