Message-ID: <6711d7ba-1349-de28-6d35-9dce91be7996@redhat.com>
Date:   Fri, 2 Jun 2023 18:37:09 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Luis Chamberlain <mcgrof@...nel.org>,
        Johan Hovold <johan@...nel.org>,
        Lucas De Marchi <lucas.demarchi@...el.com>,
        Petr Pavlu <petr.pavlu@...e.com>, gregkh@...uxfoundation.org,
        rafael@...nel.org, song@...nel.org, lucas.de.marchi@...il.com,
        christophe.leroy@...roup.eu, peterz@...radead.org, rppt@...nel.org,
        dave@...olabs.net, willy@...radead.org, vbabka@...e.cz,
        mhocko@...e.com, dave.hansen@...ux.intel.com,
        colin.i.king@...il.com, jim.cromie@...il.com,
        catalin.marinas@....com, jbaron@...mai.com,
        rick.p.edgecombe@...el.com, yujie.liu@...el.com,
        tglx@...utronix.de, hch@....de, patches@...ts.linux.dev,
        linux-modules@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, pmladek@...e.com, prarit@...hat.com,
        lennart@...ttering.net
Subject: Re: [PATCH 2/2] module: add support to avoid duplicates early on load

On 02.06.23 18:06, Linus Torvalds wrote:
> On Fri, Jun 2, 2023 at 11:20 AM David Hildenbrand <david@...hat.com> wrote:
>>
>> What concerns me a bit, is that on the patched kernel we seem to hit more cases where
>> boot takes much longer (in both kernel configs).
> 
> So it potentially serializes the loads to the same file more, but in
> the process uses much less memory (since the ones waiting will not
> have done any of the "load file contents and uncompress them"). So
> it's a bit of a trade-off.

I have the feeling that -- on this system -- some of the firmware+loader 
time is being inaccurately accounted to the kernel startup time, 
combined with some other noise. Especially the firmware loading time 
seems to be fairly randomized.

I guess what we care about regarding module loading is the 
initrd+userspace loading times, and those are fairly stable. But what we 
mostly care about is udev.

So let's look only at "systemd-udev" services:

1) !debug

a) master

5.672s systemd-udev-settle.service
  505ms systemd-udev-trigger.service
  272ms systemd-udevd.service
5.418s systemd-udev-settle.service
  487ms systemd-udev-trigger.service
  258ms systemd-udevd.service
5.707s systemd-udev-settle.service
  527ms systemd-udev-trigger.service
  273ms systemd-udevd.service
6.250s systemd-udev-settle.service
  455ms systemd-udev-trigger.service
  283ms systemd-udevd.service


b) patched

4.652s systemd-udev-settle.service
  461ms systemd-udev-trigger.service
  302ms systemd-udevd.service
4.652s systemd-udev-settle.service
  461ms systemd-udev-trigger.service
  302ms systemd-udevd.service
4.634s systemd-udev-settle.service
  444ms systemd-udev-trigger.service
  296ms systemd-udevd.service
4.745s systemd-udev-settle.service
  444ms systemd-udev-trigger.service
  273ms systemd-udevd.service


2) debug

a) master

32.806s systemd-udev-settle.service
  9.584s systemd-udev-trigger.service
   471ms systemd-udevd.service
29.901s systemd-udev-settle.service
  8.914s systemd-udev-trigger.service
   400ms systemd-udevd.service
28.640s systemd-udev-settle.service
  9.260s systemd-udev-trigger.service
   477ms systemd-udevd.service
29.498s systemd-udev-settle.service
  9.073s systemd-udev-trigger.service
   444ms systemd-udevd.service


b) patched

28.765s systemd-udev-settle.service
  8.898s systemd-udev-trigger.service
   400ms systemd-udevd.service
28.292s systemd-udev-settle.service
  8.903s systemd-udev-trigger.service
   401ms systemd-udevd.service
34.588s systemd-udev-settle.service
  8.959s systemd-udev-trigger.service
   455ms systemd-udevd.service
28.641s systemd-udev-settle.service
  8.953s systemd-udev-trigger.service
   389ms systemd-udevd.service



So apart from some noise, the patched version seems to be faster in the 
general case, looking only at systemd-udev.

> 
> We could complicate things a bit, and let other callers return -EEXIST
> a bit earlier, but I'm not convinced it really matters.

Looking at the numbers, agreed.

> 
> Honestly, taking too long because user space does something stupid and
> wrong is not a kernel bug. Not booting because we use too much memory
> - that's problematic. But booting slowly because udev does several
> thousand unnecessary module loads is entirely on udev.

Yes.


I'll do some more experiments, but from what I can tell this looks good:

Tested-by: David Hildenbrand <david@...hat.com>

-- 
Cheers,

David / dhildenb
