Date:   Tue, 19 Dec 2017 19:40:12 +0100
From:   Max Staudt <mstaudt@...e.de>
To:     Daniel Vetter <daniel@...ll.ch>
Cc:     Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
        Linux Fbdev development list <linux-fbdev@...r.kernel.org>,
        michal@...kovi.net, sndirsch@...e.com,
        Oliver Neukum <oneukum@...e.com>,
        Takashi Iwai <tiwai@...e.com>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Bero Rosenkränzer 
        <bernhard.rosenkranzer@...aro.org>, philm@...jaro.org
Subject: Re: [RFC PATCH v2 00/13] Kernel based bootsplash

On 12/19/2017 06:26 PM, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 6:04 PM, Max Staudt <mstaudt@...e.de> wrote:
>> Well, those could enable fbcon if they want the bootsplash. Shouldn't make a difference anyway if they're powerful enough to run Linux. As long as the bootsplash is shown, no fbcon drawing operations are executed, so there is no expensive scrolling or such to hog the system.
> 
> It's too big, and those folks tend to be super picky about space.

I know, they really are.

However, given just how big and clunky modern systems have become, I doubt that a few extra KB of fbcon code are relevant.

My feeling is that the kernel splash probably saves even more space on the userspace side than it adds on the kernel side, thus netting a reduction in overall code size.


> So essentially you're telling me that on a current general purpose
> distro the gfx driver loading is a dumpster fire, and we're fixing
> this by ignoring it and adding a whole new layer on top. That doesn't
> sound like any kind of good idea to me.

Yes. It is a vast improvement over the status quo, and people are asking for it. And the bootsplash layer can be moved elsewhere, just change the hooks and keep the loading/rendering.

Also, gfx driver loading isn't a dumpster fire, it mostly just works. It just mustn't be done 100% carelessly.


> So if just using drm for everything isn't possible (since drm drivers
> can at least in theory be hotunplugged), can we at least fix the
> existing fbdev kernel bugs? Not being able to unplug a drm driver when
> it's still open sounds like a rather serious issue that probably
> should be fixed anyway ... so we're better able to hotunplug an fbdev
> driver when it's in use.

I don't see it as a bug. The fbdev driver gets unloaded as much as possible, but as long as a userspace application keeps the address_space mmap()ed, there's nothing we can do, short of forcibly removing it and segfaulting the process the next time it tries to render something. Am I missing something?


> Also I'm not clear at all on the "papering over races with sleeps"
> part. DRM drivers shouldn't be racy when getting loaded ...

The DRM driver loading isn't racy, but the fbdev can't be fully unloaded while Plymouth has the address_space mmap()ed. If Plymouth sleeps until drivers that are included in initramfs are (hopefully) loaded, then it will forego using its FB backend.

A solution we've experimented with is dropping the FB backend from Plymouth. It instantly fixed the busy video RAM bug. However, it made the folks relying on efifb very, very unhappy.


> Or we get simpledrm merged (for efifb and vesafb support) and someone
> types up the xendrm driver (there is one floating around, it's just
> old) and we could forget about any real fbdev drivers except the drm
> based ones.

And drmcon, unless we come up with a better idea than hooking into the *con driver.

Sure, that'd help a lot. But what do we do until then?



Max
