Message-ID: <202103161205.B2181BDE38@keescook>
Date:   Tue, 16 Mar 2021 12:08:02 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Alexey Dobriyan <adobriyan@...il.com>,
        Lee Duncan <lduncan@...e.com>, Chris Leech <cleech@...hat.com>,
        Adam Nichols <adam@...mm-co.com>, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH v2] seq_file: Unconditionally use vmalloc for buffer

On Tue, Mar 16, 2021 at 09:31:23AM +0100, Michal Hocko wrote:
> On Mon 15-03-21 10:48:51, Kees Cook wrote:
> > The sysfs interface to seq_file continues to be rather fragile, as seen
> > with some recent exploits[1]. Move the seq_file buffer to the vmap area
> > (while retaining the accounting flag), since it has guard pages that
> > will catch and stop linear overflows. This seems justified given that
> > seq_file already uses kvmalloc(), almost always makes a PAGE_SIZE or
> > larger allocation, its allocations are normally short lived, and it is
> > not normally on a performance-critical path.
> 
> I have already objected without having my concerns really addressed.

Sorry, I didn't mean to ignore your comments!

> Your observation that most of buffers are PAGE_SIZE in the vast majority
> cases matches my experience and kmalloc should perform better than
> vmalloc. You should check the most common /proc readers at least.

Yeah, I'm going to build a quick test rig to see some before/after
timings, etc.

> Also this cannot really be done for configurations with a very limited
> vmalloc space (32b for example). Those systems are more and more rare
> but you shouldn't really allow userspace to deplete the vmalloc space.

This sounds like two objections:
- 32b has a small vmalloc space
- userspace shouldn't be able to deplete the vmalloc space

I'd be happy to make this 64b only. For the latter, I would imagine
there are other vmalloc-exposed-to-userspace cases, but yes, this would
be much more direct. Is that a problem in practice?

> I would be also curious to see how vmalloc scales with huge number of
> single page allocations which would be easy to trigger with this patch.

Right -- what's the best way to measure this (and what would count as
"too much")?

-- 
Kees Cook
