• More on nVidia Prime offloading

    #2270751

    Most gaming laptops with nVidia discrete GPUs use what nVidia calls “Optimus” technology, a means by which a laptop can use the power-thrifty Intel integrated graphics for regular tasks, but when presented with a program that goes beyond what the integrated GPU can do, it seamlessly offloads that work to the nVidia GPU.  The nVidia GPU renders the frames and sends them back to the integrated GPU via the PCI Express bus (the channel from GPU to CPU being almost completely unused until then; the channel from the CPU to the GPU is the one that gets heavily used as the CPU sends out the frame data to be rendered).  It’s actually quite brilliant conceptually.

    Unfortunately for those of us in the Linux world, Optimus is for Windows.  On the Linux side, we get a lesser-featured variant called Prime (get it? Optimus/Prime.  The laptop “transforms” from a power-saving standard laptop to a gaming laptop with a discrete GPU).

    Until recently, users of Prime laptops had to (as far as I know – this was before I had one!) use Bumblebee to get Optimus-like offloading.  It can be troublesome (nVidia is notoriously scant with details about how its products work), and it is not compatible with Vulkan, which is a big limitation these days, as so many Windows games need DXVK to get Windows-like frame rates in WINE.
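    For anyone who never used it: Bumblebee worked by wrapping the program you wanted to run on the nVidia GPU.  A typical invocation looked something like this (assuming the bumblebee and primus packages are installed):

        optirun glxgears      # run through Bumblebee's default bridge
        primusrun glxgears    # run through the primus bridge, usually faster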

    Fortunately, nVidia has been developing Prime more recently.  Some time ago, the nVidia drivers got the ability to switch back and forth between the thrifty Intel GPU and the powerful nVidia GPU, but you had to reboot (or at least log out and back in) to make it happen.  Compared to Windows, where you had to do nothing but run the program (generally a game) that needed the nVidia GPU, this was a huge pain.
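    On Ubuntu and its relatives, that switching is done by the prime-select tool from the nvidia-prime package.  A sketch of that old workflow (command and mode names as in Ubuntu's package):

        prime-select query          # show which GPU is currently selected
        sudo prime-select nvidia    # switch to the nVidia GPU
        sudo prime-select intel     # switch back to the Intel GPU
        # ...then log out and back in (or reboot) for the change to take effect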

    Somewhere along the line, nVidia came up with Prime Sync, which finally solved the frustrating screen-tearing problem that a lot of people had with the nVidia GPU.  The frames coming across the PCIe bus were simply displayed by the integrated GPU as soon as they were received.  Since there was no effort to sync those frames with the monitor’s vblank, there would be visible tearing even if the Intel GPU was synced to vblank, because the nVidia frames were not.
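    Prime Sync depends on the nVidia driver's kernel modesetting support being enabled.  A minimal sketch of the usual recipe (the file name and the eDP-1 output name are just examples; yours may differ):

        # /etc/modprobe.d/nvidia-drm.conf
        options nvidia-drm modeset=1

        # after a reboot, confirm it took and enable sync on the internal panel:
        cat /sys/module/nvidia_drm/parameters/modeset     # should print Y
        xrandr --output eDP-1 --set "PRIME Synchronization" 1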

    Prime Sync fixed that once and for all, and it’s been very robust and effective.  It’s a positive sign that nVidia is working on improving the experience in Linux.  It would be even nicer if they would stop requiring signed, precompiled, copyrighted binary blobs to use the cards, which would allow the open source drivers to work well, but any improvement is better than none.

    Not long ago, it was announced that Prime would begin to support an offloading feature, something that people had wanted for ages.  That’s what Optimus does… it uses the main Intel (integrated) GPU for most things, but switches to the nVidia when it has to.  The Linux version, for now, is not that slick… it’s necessary to tell the system which GPU you want to run a given program on, but that’s far better than having to reboot or log out each time you want to change from one to the other.

    I posted about this a while ago, when it was still in the beta stage, but I hadn’t yet tried it.  When the beta driver finally reached the released state, I tried to set up my G3 with it, but I could not get it to work, and I didn’t feel like messing with it that much.  I bought the G3 to be used mostly on AC power… gaming laptops are generally meant to be used for gaming while plugged in, as their immense power demands (by laptop standards) would suck a battery dry in a very short time.  As such, I just left it in nVidia mode pretty much all the time.

    Recently, though, I read something about the improved support for Prime offload that was built into Ubuntu 20.04.  I have a test installation of Kubuntu 20.04 on the G3, so I decided to try it.  To my surprise, it worked out of the box (so to speak), with no configuration necessary.  That’s really a “killer” feature for Ubuntu 20.04 and related releases (like the upcoming Mint 20), if you have an Optimus/Prime laptop.

    I like a lot of things about Kubuntu 20.04, but it’s still got some really obnoxious bugs (fixed ages ago in Neon) that I find quite objectionable.  As such, I’ve stuck with my KDE Neon, which is still based on 18.04 (and probably will be until the fall).

    I’ve already got the 20.04 (focal) repo set up to deliver kernel updates to my 18.04-based Neon, so it seemed like a natural thing to try to get the bits and pieces I would need to make offloading work from the 20.04 repo.  After backing everything up (of course!), I installed the nvidia-prime package and the xorg setup from 20.04.  It took a lot of fiddling around, since I really had no idea what I was doing, and the how-to guides seemed too generic (distro-wise) to help me see what was going wrong.
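    For the curious: with the focal entry already in my sources (as described above), pulling a single package from the newer release is, at least in principle, a one-liner:

        sudo apt install -t focal nvidia-prime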

    Once the nvidia-prime package from 20.04 is installed, a third option is added to the GPU choices.  Intel and nVidia were already there, but the third one, On-demand, is new.  I selected that, naturally… on-demand is the correct mode for offloading.
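    As with the older modes, the new one is selected with prime-select (mode name as in Ubuntu 20.04's package):

        sudo prime-select on-demand
        # log out and back in for it to take effect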

    There was a stretch where I would boot it and it would keep using the nVidia GPU all the time; then I would put in one of those custom xorg.conf files that all of the how-to sites have, and it would boot to a black screen.

    I could see that when I typed xrandr --listproviders, I would get the two GPUs showing, as I should, but the last one was not called “NVIDIA-G0” as one of the guides said it should be.  The system was identifying the nVidia as “modesetting,” which is odd.  I did have the option set that allows the nVidia driver to do kernel modesetting, but there’s also a generic X driver (built into the X server, and typically used for the Intel GPU) called “modesetting.”
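    For reference, this is roughly what the output looks like, heavily trimmed (the IDs, capability flags, and counts will vary from machine to machine):

        $ xrandr --listproviders
        Providers: number : 2
        Provider 0: id: 0x... cap: ... name:modesetting
        Provider 1: id: 0x... cap: ... name:NVIDIA-G0    <-- what it should say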

    I removed that line that allowed modesetting (supposedly needed by Prime Sync), and also tried the xorg.conf from this Linux Mint forum post.  It began to dawn on me… this was the first xorg.conf I had seen for Prime offload that specifically told the nVidia GPU to use the nvidia driver.  The system was seeing the nVidia card, but it said its device name was “modesetting.”
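    For illustration, a minimal sketch of that kind of xorg.conf (the file name is my choice, and the BusID is an example; find yours with lspci):

        # /etc/X11/xorg.conf.d/10-nvidia-offload.conf
        Section "ServerLayout"
            Identifier "layout"
            Option     "AllowNVIDIAGPUScreens"
        EndSection

        Section "Device"
            Identifier "nvidia"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection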

    After that, and after rolling back to the 440.81 driver from the graphics-drivers PPA (I had tried the one from the Ubuntu 20.04 repo, but it was causing crashes of the GLX driver), I did an xrandr --listproviders and saw the “NVIDIA-G0” name there.  Could it be?
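    In case anyone wants to follow along, the rollback went roughly like this (the PPA was delivering the 440.81 build at the time):

        sudo add-apt-repository ppa:graphics-drivers/ppa
        sudo apt update
        sudo apt install nvidia-driver-440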

    I used the environment variables to launch glmark2, a benchmarking program that also has the side effect of telling you which GPU is in use.  It said nVidia… I tried glmark2 again without the environment variables, and it said Intel.  It worked!
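    For the record, these are the offload variables from nVidia's own documentation (the Vulkan one matters if you'll be running DXVK games):

        __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glmark2
        # for Vulkan programs, also set:
        #   __VK_LAYER_NV_optimus=NVIDIA_only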

    FWIW, glmark2 has a lot of limitations as a benchmark.  Lots of people have reported that it generates low scores, and upon examination, they see that it’s hardly loading the GPU at all.  Its simple tests can generate framerates in the thousands, but I have vsync on, not to mention Prime Sync, so my frame rates are capped at the panel’s refresh rate.

    The upshot was that the nVidia numbers were way slower with offloading than in nVidia mode, and in fact slower than the Intel integrated GPU.  That seems like a limitation of the benchmark, though, not a real representation of the performance.

    I started some games using the offloading variables, and they ran beautifully, just as smoothly as in nVidia mode.

    There is still one thing that needs more work, and that’s that only the latest nVidia cards (RTX series) get turned off when not offloading.  On older generations (like my 1050 Ti), the card is left on but in a low-power state.  Off would save more energy, but that has not been implemented yet.  The developer of the Arch port of Ubuntu’s nvidia-prime package has said that this is because there has been no standard way to turn the nVidia GPU off, so there was a real risk of something going wrong if the wrong method was used.  Apparently, there were a number of different methods that laptop makers used to turn these cards off.
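    You can peek at what the kernel is doing with the card's runtime power management through sysfs (the PCI address is an example; find yours with lspci):

        cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status   # "active" or "suspended"
        cat /sys/bus/pci/devices/0000:01:00.0/power/control          # "on" or "auto"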

    Still, I have to wonder how it works in Windows if all of the hardware is so different.  Anyone can go get a Windows driver for nVidia and have Optimus work, so if they can get it done even with the different methods for turning off the card in Windows, then why not Linux?

    I did a quick test using PowerTop, which reports the battery drain rate in near real time, so it is possible to see how many watts of power the laptop is consuming.  It came to a low of just under 9 watts, for an estimated battery life of something like 6 hours.  The nVidia card did indeed remain powered on, as I could still see the temp readings coming from it, but it was quite cold.
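    If you want to check the same things on your own machine, nvidia-smi can report both (note that querying the card can itself wake it from a low-power state):

        nvidia-smi --query-gpu=temperature.gpu,pstate --format=csv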

    The next version of the Linux nVidia driver is supposed to come with some improved bits that may help with the power issue.  Six hours is not bad; this laptop has a smallish battery (some gaming laptops have twice the watt-hour capacity that mine does).  Nothing wrong with shooting for better, though!

    I created a small shell script to automatically set the environment variables that cause the system to switch over to the nVidia GPU.  All I need to do now is type ‘usenvidia’ before the command, or insert that into the command field of a graphical launcher, and that program will use the nVidia GPU.  I think I actually like that idea more than dynamic switching based on load, as I have heard stories about games with sections of low demand causing the GPU to switch back to Intel, then stuttering when it switches back to nVidia.  I know which programs need the nVidia card; why have it autodetect when I can just tell it which ones should use nVidia?
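    The script itself is trivial: something along these lines (the name and location are my choices; anywhere on $PATH works):

        #!/bin/sh
        # /usr/local/bin/usenvidia -- run the given command on the nVidia GPU
        export __NV_PRIME_RENDER_OFFLOAD=1
        export __GLX_VENDOR_LIBRARY_NAME=nvidia
        export __VK_LAYER_NV_optimus=NVIDIA_only
        exec "$@"

    After a chmod +x, ‘usenvidia steam’ (for example) does the rest.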

    All things considered, it works really well, and I can hardly wait to see what the next driver will bring.

    Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
    XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
    Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

    • #2271357

      … if only nVidia hadn’t ended support for Kepler-based laptop Quadro GPUs earlier than they did for pretty much all older versions…

      I mean, my extended family managed to accumulate several of those. And some laptop models apparently have them as the *only* GPU, too. (As in no Intel GPU to be found on the PCIe bus no matter what you do, short of possibly firmware downgrades or something)

      Oh well, almost everything works with 440.82 still (in the “unsupported but known to work” category) but… might need to switch to Nouveau with these eventually.

      • #2271367

        … if only nVidia hadn’t ended support for Kepler-based laptop Quadro GPUs earlier than they did for pretty much all older versions…

        Just in Linux or in Windows too?

        That’s one of the problems with proprietary drivers.  AMD and Intel’s company-supported drivers are open source, but nVidia persists with keeping it all closed, earning the famous gesture from Mr. Torvalds.

        Ironically, I ended up switching my Asus laptop from an ATI (at the time)/AMD HD3650 to an nVidia GT220M because the open-source drivers were feature-poor and slow, and AMD had dropped proprietary-driver support for the HD3650 in Linux well before I became interested in using Linux on it. I think that was before AMD decided to open-source their drivers, and older cards like mine would presumably still be stuck with either the slow open drivers or the unsupported proprietary ones.

        That was several years ago, and even now the GT220M is somewhat supported for Linux.  They’re not updating the 340 driver code anymore, but it does seem to keep getting built for new kernels.

        I keep my PCs for a really long time.  The Asus in question was one I bought new in 2008, and I used it daily until less than two years ago.  If my Dell G3 is still relevant in ten years, I expect nVidia to either keep the drivers up to date or stop the embargo on Nouveau… if not, well, I guess there’s team red from then on.

        Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
        XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
        Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

        • #2271378

          Just in Linux or in Windows too?

          It was the expected end of support for Windows drivers, but for older products they offered Linux support for some years longer than on Windows… which used to be a reason to buy nVidia for use with Linux…

          https://nvidia.custhelp.com/app/answers/detail/a_id/4788/~/end-of-driver-support-for-quadro-kepler-series-notebook-products also applies for Linux drivers.

          Apparently the last long-term branch for these, too, may be the 390 then… that’s still supported until 2022, but it doesn’t do Vulkan, etc.

          Still a lot shorter than with the older generations like…

          That was several years ago, and even now the GT220M is somewhat supported for Linux. They’re not updating the 340 driver code anymore, but it does seem to keep getting built for new kernels.

          … exactly those. My previous laptop model had a Quadro FX series GPU (so, the same drivers); it was made in late 2007, so that’s 12 years of Linux driver support. (The last build of the 340 branch was just before Christmas 2019.)

          This current one was made in 2013, so ~7 years of full-feature driver support and 9 with reduced features.

          Should get around to checking what the Nouveau folks have working with Kepler chips by now…

          That’s one of the problems with proprietary drivers. AMD and Intel’s company-supported drivers are open source, but nVidia persists with keeping it all closed, earning the famous gesture from Mr. Torvalds.

          Yeah, that, and apparently Kepler is also the last series that doesn’t demand nVidia-signed drivers. So yeah, AMD looks easier to live with.
