FS#53090 - [nvidia-340xx-utils] Modulepath issue when using multiple Xserver
Attached to Project:
Arch Linux
Opened by hamelg (hamelg) - Sunday, 26 February 2017, 19:01 GMT
Last edited by Antonio Rojas (arojas) - Thursday, 06 June 2019, 11:28 GMT
Details
Description:
The ModulePath setup is correct only for the first session and is ignored for additional sessions.

Additional info:
* package version(s): nvidia-340xx-utils 340.102-5

Steps to reproduce:
1. Install the KDE Plasma desktop
2. Open a KDE session
3. Go to K Menu: Power/Session > Switch User > Switch

Expected result: sddm launches a second X server and an sddm-greeter is displayed to select a login.
Current result: a black screen is displayed. sddm-greeter crashes because glx has not been loaded.

In Xorg.0.log, the module path is correct: the X server loads /usr/lib/nvidia/xorg/libglx.so. In Xorg.1.log, the second X server tries to load the wrong module, /usr/lib/xorg/modules/extensions/libglx.so (owned by xorg-server).
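To check which libglx.so each X server actually picked up, the Xorg logs can be inspected programmatically. A minimal sketch (the `loaded_glx_path` helper is hypothetical; the `(II) Loading …` line format matches the log output quoted in this report, but treat it as an assumption for other Xorg versions):

```python
import re

def loaded_glx_path(log_text):
    """Return the libglx.so path an X server loaded, or None if absent.

    Assumes the usual Xorg log line format, e.g.:
        (II) Loading /usr/lib/nvidia/xorg/libglx.so
    """
    match = re.search(r'\(II\) Loading (\S+/libglx\.so)', log_text)
    return match.group(1) if match else None

# Example against the symptom described above: the second server
# falls back to the module owned by xorg-server.
broken_log = '(II) Loading /usr/lib/xorg/modules/extensions/libglx.so\n'
print(loaded_glx_path(broken_log))
```

Running this over /var/log/Xorg.0.log and /var/log/Xorg.1.log (or the per-user logs under ~/.local/share/xorg/) should show the first server using the nvidia path and the second one falling back.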
This task depends upon
Closed by Antonio Rojas (arojas)
Thursday, 06 June 2019, 11:28 GMT
Reason for closing: Won't fix
Additional comments about closing: Package dropped
The issue still happens.
I narrowed it down to
```
(II) xfree86: Adding drm device (/dev/dri/card0)
(EE) /dev/dri/card0: failed to set DRM interface version 1.4: Permission denied
```
in Xorg.1.log, which means the device isn't fully set up for the second server (it was already set up by the first X server). As a result, the driver auto-detection in /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf does not kick in, and the `ModulePath` stays `/usr/lib/xorg/modules` instead of being extended to `/usr/lib/nvidia/xorg,/usr/lib/xorg/modules`.
I fixed this for myself by adding
```
Section "Files"
ModulePath "/usr/lib/nvidia/xorg"
ModulePath "/usr/lib/xorg/modules"
EndSection
```
to my xorg.conf.
https://bbs.archlinux.org/viewtopic.php?id=223409
Is it related or a duplicate?
To properly debug this one would need access to some hybrid graphics machine to test if starting multiple X servers works currently (and what drivers are used in those cases), or if the changes proposed below break anything.
I suspect having a /usr/share/X11/xorg.conf.d/10-nvidia.conf with the content above in the nvidia-340xx-utils package, and having the current /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf renamed to /usr/share/X11/xorg.conf.d/20-nvidia-drm-outputclass.conf to overwrite these settings in eg the bumblebee package could be a working solution.
This assumes that someone installing nvidia-340xx{,-utils} wants to use that somewhat exclusively if they don't write an xorg.conf or install bumblebee. Anyone using more than 1 gpu (or 2 with bumblebee) is likely to already need an xorg.conf to do what they want.
If the above is not a proper solution I guess we can just add a note to the wiki on how to fix this, or extend the install message of nvidia-340xx-utils.
Thanks!
Great!
By https://git.archlinux.org/svntogit/packages.git/commit/trunk?h=packages/nvidia-340xx-utils&id=f59068464cd5584dc077144ff35781f86f9b701e to be specific.
The /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf hook does not apply since setting the DRM interface version fails, which is why that file did not work initially.
I outlined one above. I cannot test it, however, since I don't have any hybrid/dual-graphics box to test on, but I'll detail the approach in a hopefully easier-to-understand manner.
Dual/hybrid graphics with nvidia* (not nouveau) is doable via https://wiki.archlinux.org/index.php/NVIDIA_Optimus or https://wiki.archlinux.org/index.php/Bumblebee. Optimus does not allow switching, uses the nvidia card, and requires an xorg.conf, so we will ignore that configuration.
So the only hybrid graphics solution that uses nvidia* and should work without any configuration requires installing bumblebee. This is what the proposed solution will rely on.
nvidia*-utils:
Provides /usr/share/X11/xorg.conf.d/10-nvidia.conf.
No longer provides /usr/share/X11/xorg.conf.d/20-nvidia-drm-outputclass.conf. This also seems logical, since a file that is meant to make hybrid graphics work would now be provided by bumblebee, which is about hybrid graphics.
bumblebee:
Provides /usr/share/X11/xorg.conf.d/20-nvidia-drm-outputclass.conf with a new Files section.
That way a purely nvidia* box will have 10-nvidia.conf, while a hybrid box will have 20-nvidia-drm-outputclass.conf which fixes the modulepath for the hybrid graphics case.
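The scheme above relies on the xorg.conf.d snippets being read in lexical filename order, so 20-nvidia-drm-outputclass.conf is parsed after 10-nvidia.conf and can take precedence. A toy model of that ordering (the "last Files section wins" override rule here is an illustrative assumption, not a statement of Xorg's exact merge semantics):

```python
# Snippets in /usr/share/X11/xorg.conf.d/ are read in sorted filename
# order, so the numeric prefix decides which one is parsed last.
snippets = {
    "20-nvidia-drm-outputclass.conf": ["/usr/lib/xorg/modules"],
    "10-nvidia.conf": ["/usr/lib/nvidia/xorg", "/usr/lib/xorg/modules"],
}

effective = None
for name in sorted(snippets):   # "10-..." is read first, "20-..." last
    effective = snippets[name]  # later snippet overrides (assumed rule)

# On a hybrid box with bumblebee installed, the 20- file's Files
# section applies; the nvidia path is then re-added per OutputClass.
print(effective)
```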
For the specific file contents 10-nvidia.conf should be:
```
Section "Files"
ModulePath "/usr/lib/nvidia/xorg"
ModulePath "/usr/lib/xorg/modules"
EndSection
```
Whereas 20-nvidia-drm-outputclass.conf should be:
```
Section "Files"
ModulePath "/usr/lib/xorg/modules"
EndSection

Section "OutputClass"
Identifier "intel"
MatchDriver "i915"
Driver "modesetting"
EndSection

Section "OutputClass"
Identifier "nvidia"
MatchDriver "nvidia-drm"
Driver "nvidia"
Option "AllowEmptyInitialConfiguration"
Option "PrimaryGPU" "yes"
ModulePath "/usr/lib/nvidia/xorg"
EndSection
```
It would be nice if someone that has access to a hybrid graphics box could test this.