Message-ID: <ZD3CxlsVXniQvxe9@bombadil.infradead.org>
Date:   Mon, 17 Apr 2023 15:05:58 -0700
From:   Luis Chamberlain <mcgrof@...nel.org>
To:     Greg KH <gregkh@...uxfoundation.org>
Cc:     Christoph Hellwig <hch@...radead.org>,
        Kees Cook <keescook@...omium.org>, david@...hat.com,
        patches@...ts.linux.dev, linux-modules@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org, pmladek@...e.com,
        petr.pavlu@...e.com, prarit@...hat.com,
        torvalds@...ux-foundation.org, rafael@...nel.org,
        christophe.leroy@...roup.eu, tglx@...utronix.de,
        peterz@...radead.org, song@...nel.org, rppt@...nel.org,
        dave@...olabs.net, willy@...radead.org, vbabka@...e.cz,
        mhocko@...e.com, dave.hansen@...ux.intel.com,
        colin.i.king@...il.com, jim.cromie@...il.com,
        catalin.marinas@....com, jbaron@...mai.com,
        rick.p.edgecombe@...el.com
Subject: Re: [RFC 2/2] kread: avoid duplicates

On Mon, Apr 17, 2023 at 08:05:31AM +0200, Greg KH wrote:
> On Sun, Apr 16, 2023 at 11:46:44AM -0700, Luis Chamberlain wrote:
> > On Sun, Apr 16, 2023 at 02:50:01PM +0200, Greg KH wrote:
> > > On Sat, Apr 15, 2023 at 11:41:28PM -0700, Luis Chamberlain wrote:
> > > > On Sat, Apr 15, 2023 at 11:04:12PM -0700, Christoph Hellwig wrote:
> > > > > On Thu, Apr 13, 2023 at 10:28:40PM -0700, Luis Chamberlain wrote:
> > > > > > With this we run into 0 wasted virtual memory bytes.
> > > > > 
> > > > > Avoid what duplicates?
> > > > 
> > > > David Hildenbrand had reported that with over 400 CPUs vmap space
> > > > runs out and it seems it was related to module loading. I took a
> > > > look and confirmed it. Module loading ends up requiring up to 3
> > > > vmalloc allocations, so typically at least twice the module size,
> > > > and in the worst case the decompressed module size on top of
> > > > that:
> > > > 
> > > > a) initial kernel_read*() call
> > > > b) optional module decompression
> > > > c) the actual module data copy we will keep
> > > > 
> > > > Duplicate module requests that come from userspace end up being thrown
> > > > in the trash bin, as only one module will be allocated.  Although there
> > > > are checks for a module prior to requesting it, udev still doesn't do
> > > > the best job of avoiding that, and so we end up with tons of duplicate
> > > > module requests. We're talking about gigabytes of vmalloc bytes just
> > > > lost because of this on large systems, and megabytes on average
> > > > systems. So for example with just 255 CPUs we can lose about
> > > > 13.58 GiB, and for 8 CPUs about 226.53 MiB.
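
A rough aside on the arithmetic behind such numbers: it can be sketched in a
few lines of plain userspace C. The module sizes below are made-up
placeholders rather than the figures from the report; the point is only that
every losing duplicate request transiently pays for its own read buffer and,
for compressed modules, its own decompression buffer before the duplicate is
detected and the memory is freed again:

/*
 * Illustrative userspace sketch, not kernel code. Build: cc waste.c
 */
#include <stdio.h>

int main(void)
{
	const unsigned long compressed_kb   = 256;   /* assumed .ko.xz size */
	const unsigned long decompressed_kb = 1024;  /* assumed .ko size    */
	const unsigned long per_request_kb  = compressed_kb + decompressed_kb;

	for (unsigned long cpus = 8; cpus <= 256; cpus *= 2) {
		/* one request wins, the rest are duplicates */
		unsigned long wasted_kb = (cpus - 1) * per_request_kb;

		printf("%4lu CPUs -> up to %7.1f MiB of transient vmalloc waste\n",
		       cpus, wasted_kb / 1024.0);
	}
	return 0;
}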
> > > 
> > > How does the memory get "lost"?  Shouldn't it be properly freed when the
> > > duplicate module load fails?
> > 
> > Yes, the memory gets freed, but since virtual memory space can be limited
> > it also means that, as you add more CPUs, you can eventually reach the
> > point where -ENOMEMs happen, virtual memory cannot be used for other
> > things during kernel bootup, and bootup fails. This is apparently
> > exacerbated with KASAN enabled.
> 
> Then why not just rate-limit the module loader in userspace on such
> large systems if that's an issue?  No kernel changes needed to do that.

We can certainly just take a stance and punt this as a userspace problem. I
thought it would be good to see what a kernel-side workaround would look like
for us to evaluate.
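
For the sake of discussion, the general shape of such a workaround is
"coalesce duplicate in-flight requests": the first caller for a given name
does the expensive work, and concurrent callers for the same name wait for
that result instead of allocating their own buffers. Below is only a rough
userspace analogue using pthreads, with a hypothetical request_module_sim();
it is not the actual RFC patch or any kernel API:

/*
 * Userspace sketch of coalescing duplicate concurrent requests.
 * Build: cc -pthread coalesce.c
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* single-slot sketch; real code would track a list of in-flight names */
struct inflight {
	char name[64];
	int busy;			/* a request for 'name' is in progress */
	pthread_mutex_t lock;
	pthread_cond_t done;
};

static struct inflight req = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.done = PTHREAD_COND_INITIALIZER,
};

/* stand-in for the expensive read + decompress + copy steps */
static void expensive_load(const char *name)
{
	printf("loading %s (allocating buffers)\n", name);
	usleep(100 * 1000);
}

static void *request_module_sim(void *arg)
{
	const char *name = arg;

	pthread_mutex_lock(&req.lock);
	if (req.busy && strcmp(req.name, name) == 0) {
		/* duplicate: wait for the winner instead of loading again */
		while (req.busy)
			pthread_cond_wait(&req.done, &req.lock);
		pthread_mutex_unlock(&req.lock);
		printf("reused result for %s\n", name);
		return NULL;
	}
	snprintf(req.name, sizeof(req.name), "%s", name);
	req.busy = 1;
	pthread_mutex_unlock(&req.lock);

	expensive_load(name);

	pthread_mutex_lock(&req.lock);
	req.busy = 0;
	pthread_cond_broadcast(&req.done);
	pthread_mutex_unlock(&req.lock);
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, request_module_sim, (void *)"xfs");
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	return 0;
}

With four threads racing on the same name, typically only one "loading" line
is printed and the other callers report reusing the result.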

  Luis
