Message-ID: <3rpmzsxiwo5t2uq7xy5inizbtaasotjtzocxbayw5ntgk5a2rx@jkccjg5mbqqh>
Date: Wed, 15 May 2024 18:19:14 -0400
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Jeff Xu <jeffxu@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, keescook@...omium.org,
        jannh@...gle.com, sroettger@...gle.com, willy@...radead.org,
        gregkh@...uxfoundation.org, torvalds@...ux-foundation.org,
        usama.anjum@...labora.com, corbet@....net, surenb@...gle.com,
        merimus@...gle.com, rdunlap@...radead.org, jeffxu@...gle.com,
        jorgelo@...omium.org, groeck@...omium.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        linux-mm@...ck.org, pedro.falcato@...il.com, dave.hansen@...el.com,
        linux-hardening@...r.kernel.org, deraadt@...nbsd.org
Subject: Re: [PATCH v10 0/5] Introduce mseal

* Jeff Xu <jeffxu@...omium.org> [240515 13:18]:
..

> The current mseal patch does up-front checking in two different situations:
> 1. When applying mseal():
>    checking for unallocated memory in the given memory range.
> 
> 2. When checking the mseal flag during mprotect/munmap/mremap/mmap:
>    the check of the mseal flag is placed ahead of the main business
>    logic and treated the same as an input-argument check.
> 
> > Either we are planning to clean this up and do what we can up-front, or
> > just move the mseal check with the rest.  Otherwise we are making a
> > larger mess with more technical debt for a single user, and I think this
> > is not an acceptable trade-off.
> >
> The sealing use case is different from the regular mm APIs, and this
> didn't create additional technical debt.  Please allow me to explain
> those separately.
> 
> The main use case and threat model is that an attacker exploits a
> vulnerability, gains arbitrary write access to the process, and can
> manipulate the arguments of syscalls made from some threads. Placing
> the check of the mseal flag ahead of mprotect's main business logic is
> stricter than doing it in-place: it is meant to be harder on the
> attacker, e.g. by blocking an opportunistic attempt to munmap by
> modifying the size argument.

If you can manipulate some arguments to syscalls, couldn't the attacker
avoid having the VMA mseal'ed in the first place?

Again, I don't care where the check goes - but having it happen alone is
pointless.

> 
> Legitimate app code won't call mprotect/munmap on sealed memory, so
> from its point of view the choice between the pre-check and in-place
> check approaches is irrelevant.

So let's do them together.

..

> About tech debt: code-wise, placing the pre-check ahead of the main
> business logic of the mprotect/munmap APIs reduces the size of the
> code change, and is easy to carry from release to release, or to
> backport.

It sounds like the other changes to the looping code in recent kernels
are going to mess up the backporting if we do this with the rest of the
checks.

> 
> But let's compare the alternatives - doing it in-place without a pre-check.
> - munmap
> munmap calls arch_unmap(mm, start, end) ahead of the main business
> logic, so the check of sealing flags would need to be
> architecture-specific. In addition, if arch_unmap failed due to
> sealing, the code would still proceed until the main business logic
> fails again.

You are going to mseal the vdso?

> 
> - mremap/mmap
> The checks for sealing would be scattered: e.g. checking the src
> address range in-place, the dest range in-place, the unmap in-place,
> etc. The code is complex and prone to error.
> 
> - mprotect/madvise
> Easy to change to in-place.
> 
> - mseal
> mseal() checks for unallocated memory in the given memory range in
> the pre-check. Easy to change to in-place (same as mprotect).
> 
> The situation in munmap and mremap/mmap makes in-place checks less desirable imo.
> 
> > Considering the benchmarks that were provided, performance arguments
> > seem like they are not a concern.
> >
> Yes. Performance is not a factor in making a design choice on this.
> 
> > I want to know if we are planning to sort and move existing checks if we
> > proceed with this change?
> >
> I would argue that we should not change the existing mm code. mseal
> is new and has no backward-compatibility problem. That is not the
> case for mprotect and the other mm APIs. E.g. if we were to change
> mprotect to add a pre-check for memory gaps, some badly written
> application might break.

This is a weak argument.  Your new function may break these badly
written applications *if* gcc adds support.  If you're not checking the
return type then it doesn't really matter - the application will run
into issues rather quickly anyway.  The only thing that you could argue
is the speed - but you've proven that false.

> 
> The 'atomic' approach is also really difficult to enforce across the
> whole mm area, and mseal() doesn't claim to be atomic. Most regular
> mm APIs may go deep into mm data structures to update page tables, HW
> state, etc.; rolling back those error cases has complexity and a
> performance cost, and I'm not sure the benefit is worth it. However,
> atomicity is a separate topic, unrelated to mm sealing. The current
> design of mm sealing is due to its use case and practical coding
> reasons.

"best effort" is what I'm saying.  It's actually not really difficult to
do atomic, but no one cares besides Theo.

How hard is it to put userfaultfd into your loop and avoid having that
horrible userfaultfd check in munmap?  For years people have seen
horrible failure paths and just dumped in a huge comment saying "but
it's okay because it's probably not going to happen".  But now we're
putting this test up front, and doing it alone - why?

Thanks,
Liam
