Message-ID: <ZD7ldcZoWfeN7poU@bombadil.infradead.org>
Date:   Tue, 18 Apr 2023 11:46:13 -0700
From:   Luis Chamberlain <mcgrof@...nel.org>
To:     "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
Cc:     "keescook@...omium.org" <keescook@...omium.org>,
        "hch@...radead.org" <hch@...radead.org>,
        "prarit@...hat.com" <prarit@...hat.com>,
        "rppt@...nel.org" <rppt@...nel.org>,
        "catalin.marinas@....com" <catalin.marinas@....com>,
        "Torvalds, Linus" <torvalds@...ux-foundation.org>,
        "willy@...radead.org" <willy@...radead.org>,
        "song@...nel.org" <song@...nel.org>,
        "patches@...ts.linux.dev" <patches@...ts.linux.dev>,
        "pmladek@...e.com" <pmladek@...e.com>,
        "david@...hat.com" <david@...hat.com>,
        "colin.i.king@...il.com" <colin.i.king@...il.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "jim.cromie@...il.com" <jim.cromie@...il.com>,
        "vbabka@...e.cz" <vbabka@...e.cz>,
        "christophe.leroy@...roup.eu" <christophe.leroy@...roup.eu>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "jbaron@...mai.com" <jbaron@...mai.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "linux-modules@...r.kernel.org" <linux-modules@...r.kernel.org>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "petr.pavlu@...e.com" <petr.pavlu@...e.com>,
        "rafael@...nel.org" <rafael@...nel.org>,
        "Hocko, Michal" <mhocko@...e.com>,
        "dave@...olabs.net" <dave@...olabs.net>
Subject: Re: [RFC 2/2] kread: avoid duplicates

On Mon, Apr 17, 2023 at 03:08:34PM -0700, Luis Chamberlain wrote:
> On Mon, Apr 17, 2023 at 05:33:49PM +0000, Edgecombe, Rick P wrote:
> > On Sat, 2023-04-15 at 23:41 -0700, Luis Chamberlain wrote:
> > > On Sat, Apr 15, 2023 at 11:04:12PM -0700, Christoph Hellwig wrote:
> > > > On Thu, Apr 13, 2023 at 10:28:40PM -0700, Luis Chamberlain wrote:
> > > > > With this we run into 0 wasted virtual memory bytes.
> > > > 
> > > > Avoid what duplicates?
> > > 
> > > David Hildenbrand had reported that with over 400 CPUs vmap space
> > > runs out, and it seemed related to module loading. I took a look
> > > and confirmed it. Module loading ends up requiring in the worst
> > > case 3 vmalloc allocations, so typically at least twice the module
> > > size, and in the worst case the decompressed module size on top of
> > > that:
> > > 
> > > a) initial kernel_read*() call
> > > b) optional module decompression
> > > c) the actual module data copy we will keep
> > > 
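As a side note for reviewers, the worst-case footprint of those three
allocations can be modeled with a trivial sketch (illustrative arithmetic
only, not kernel code; the function name and sizes are hypothetical):

```python
def worst_case_vmalloc_bytes(file_size, module_size, compressed=True):
    """Model the three transient allocations during module load."""
    # a) initial kernel_read*() buffer for the on-disk image
    total = file_size
    # b) optional decompression buffer (compressed modules only)
    if compressed:
        total += module_size
    # c) the actual module copy we keep
    total += module_size
    return total

# For an uncompressed module (file_size == module_size) this is 2x the
# module size; a compressed module adds the decompressed size on top.
```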
> > > Duplicate module requests that come from userspace end up being
> > > thrown in the trash bin, as only one module will be allocated.
> > > Although there are checks for a module prior to requesting a
> > > module, udev still doesn't do the best job of avoiding that, and
> > > so we end up with tons of duplicate module requests. We're talking
> > > about gigabytes of vmalloc bytes just lost because of this for
> > > large systems and megabytes for average systems. So for example
> > > with just 255 CPUs we can lose about 13.58 GiB, and for 8 CPUs
> > > about 226.53 MiB.
> > > 
> > > I have patches to curtail half of that space by doing a check in
> > > the kernel, before we do the allocation in c), for whether the
> > > module is already present. For a) it is harder because userspace
> > > just passes a file descriptor. But since we can get the file path
> > > without the vmalloc, this RFC suggests maybe we can add a new
> > > kernel_read*() for module loading where it makes sense to have
> > > only one read happen at a time.
> > 
> > I'm wondering how difficult it would be to just try to remove the
> > vmallocs in (a) and (b) and operate on a list of pages.
> 
> Yes, I think it's worth doing that long term, if possible with seq reads.

OK here's what I suggest we do then:

I'll resubmit the first patch, which allows us to prove or disprove
whether module auto-loading is the culprit. With that in place folks can
debug their setup and verify whether udev is to blame.

I'll drop the second kernel_read*() patch / effort and punt this as a
userspace problem, as it is also not extremely pressing.

Long term we should evaluate how we can avoid vmalloc for the kread and
module decompression.

If this really becomes a pressing issue we can revisit whether we want
an in-kernel solution, but at this point that would likely only affect
systems with over 400-500 CPUs with KASAN enabled. Without KASAN the
issue should eventually trigger if you're enabling modules, but it's
hard to say at what point you'd hit it.

  Luis
