Message-ID: <CABi2SkWzjTVjEwED_QjNz385m4aGo8OfAS2RkRjuZdpSviNkJQ@mail.gmail.com>
Date: Fri, 4 Oct 2024 11:17:13 -0700
From: Jeff Xu <jeffxu@...omium.org>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Kees Cook <keescook@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Suren Baghdasaryan <surenb@...gle.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>, Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>, "Paul E . McKenney" <paulmck@...nel.org>, Jann Horn <jannh@...gle.com>,
David Hildenbrand <david@...hat.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Muchun Song <muchun.song@...ux.dev>, Richard Henderson <richard.henderson@...aro.org>,
Ivan Kokshaysky <ink@...assic.park.msu.ru>, Matt Turner <mattst88@...il.com>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
"James E . J . Bottomley" <James.Bottomley@...senpartnership.com>, Helge Deller <deller@....de>,
Chris Zankel <chris@...kel.net>, Max Filippov <jcmvbkbc@...il.com>, Arnd Bergmann <arnd@...db.de>,
linux-alpha@...r.kernel.org, linux-mips@...r.kernel.org,
linux-parisc@...r.kernel.org, linux-arch@...r.kernel.org,
Shuah Khan <shuah@...nel.org>, Christian Brauner <brauner@...nel.org>, linux-kselftest@...r.kernel.org,
Sidhartha Kumar <sidhartha.kumar@...cle.com>, Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [RFC PATCH 3/4] mm: madvise: implement lightweight guard page mechanism
Hi Lorenzo,

Please add me to this series; I'm interested in everything related to
mseal :-), thanks.

I also added Kees to the cc, since mseal is a security feature.
On Fri, Sep 27, 2024 at 5:52 AM Lorenzo Stoakes
<lorenzo.stoakes@...cle.com> wrote:
>
> Implement a new lightweight guard page feature, that is regions of userland
> virtual memory that, when accessed, cause a fatal signal to arise.
>
> Currently users must establish PROT_NONE ranges to achieve this.
>
> However, this is very costly memory-wise - we need a VMA for each and
> every one of these regions AND they become unmergeable with surrounding
> VMAs.
>
> In addition, repeated mmap() calls require repeated kernel context
> switches and contention on the mmap lock to install these ranges,
> potentially also having to unmap memory if installed over existing
> ranges.
>
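> For instance, the PROT_NONE approach typically looks something like the
> below (a sketch of the existing technique being replaced, not code from
> this series; desired_addr and page_size are placeholders, includes and
> further error handling elided):
>
>	/* One extra VMA per guard region, unmergeable with its neighbours. */
>	void *guard = mmap(desired_addr, page_size, PROT_NONE,
>			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
>	if (guard == MAP_FAILED)
>		err(1, "mmap");
>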
> The lightweight guard approach eliminates the VMA cost altogether - rather
> than establishing a PROT_NONE VMA, it operates at the level of page table
> entries - poisoning PTEs such that accesses to them cause a fault followed
> by a SIGSEGV signal being raised.
>
> This is achieved through the PTE marker mechanism, which a previous
> commit in this series extended to permit this use. The markers are
> installed via the generic page walking logic, which a prior commit also
> extended for this purpose.
>
> These poison ranges are established with MADV_GUARD_POISON. If the range
> in which they are installed contains any existing mappings, those
> mappings will be zapped, i.e. the range freed and the memory unmapped
> (thus mimicking the behaviour of MADV_DONTNEED in this respect).
>
> Any existing poison entries will be left untouched. There is no nesting of
> poisoned pages.
>
> Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather
> unexpected behaviour, but are cleared on process teardown or unmapping of
> memory ranges.
>
> Ranges can have the poison property removed with MADV_GUARD_UNPOISON -
> 'remedying' the poisoning. Any non-poison entries within the ranges over
> which this is applied are left untouched; only poison entries are
> cleared.
>
> We permit this operation on anonymous memory only, and only on VMAs
> which are non-special, non-huge and not mlock()'d (were we to permit
> guarding of mlock()'d memory, we would have to drop locked pages, which
> would be rather counterintuitive).
>
> The poisoning of the range must be performed under the mmap write lock,
> as we have to install an anon_vma to ensure correct behaviour on fork.
>
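> A minimal, hypothetical usage sketch, assuming the new MADV_GUARD_*
> values are visible to userspace via updated uapi headers (page_size is a
> placeholder, includes elided):
>
>	char *ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE,
>			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>	if (ptr == MAP_FAILED)
>		err(1, "mmap");
>
>	/* Install a guard page in the middle of the range... */
>	if (madvise(ptr + 5 * page_size, page_size, MADV_GUARD_POISON))
>		err(1, "MADV_GUARD_POISON");
>
>	/* ...any access to it would now raise SIGSEGV: */
>	/* ptr[5 * page_size] = 'x'; */
>
>	/* Remove the guard; the page can then be faulted in as normal. */
>	if (madvise(ptr + 5 * page_size, page_size, MADV_GUARD_UNPOISON))
>		err(1, "MADV_GUARD_UNPOISON");
>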
> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
> Suggested-by: Jann Horn <jannh@...gle.com>
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> ---
> arch/alpha/include/uapi/asm/mman.h | 3 +
> arch/mips/include/uapi/asm/mman.h | 3 +
> arch/parisc/include/uapi/asm/mman.h | 3 +
> arch/xtensa/include/uapi/asm/mman.h | 3 +
> include/uapi/asm-generic/mman-common.h | 3 +
> mm/madvise.c | 158 +++++++++++++++++++++++++
> mm/mprotect.c | 3 +-
> mm/mseal.c | 1 +
> 8 files changed, 176 insertions(+), 1 deletion(-)
>
> diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h
> index 763929e814e9..71e13f27742d 100644
> --- a/arch/alpha/include/uapi/asm/mman.h
> +++ b/arch/alpha/include/uapi/asm/mman.h
> @@ -78,6 +78,9 @@
>
> #define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_POISON 102 /* fatal signal on access to range */
> +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
> +
> /* compatibility flags */
> #define MAP_FILE 0
>
> diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h
> index 9c48d9a21aa0..1a2222322f77 100644
> --- a/arch/mips/include/uapi/asm/mman.h
> +++ b/arch/mips/include/uapi/asm/mman.h
> @@ -105,6 +105,9 @@
>
> #define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_POISON 102 /* fatal signal on access to range */
> +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
> +
> /* compatibility flags */
> #define MAP_FILE 0
>
> diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
> index 68c44f99bc93..380905522397 100644
> --- a/arch/parisc/include/uapi/asm/mman.h
> +++ b/arch/parisc/include/uapi/asm/mman.h
> @@ -75,6 +75,9 @@
> #define MADV_HWPOISON 100 /* poison a page for testing */
> #define MADV_SOFT_OFFLINE 101 /* soft offline page for testing */
>
> +#define MADV_GUARD_POISON 102 /* fatal signal on access to range */
> +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
> +
> /* compatibility flags */
> #define MAP_FILE 0
>
> diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h
> index 1ff0c858544f..e8d5affceb28 100644
> --- a/arch/xtensa/include/uapi/asm/mman.h
> +++ b/arch/xtensa/include/uapi/asm/mman.h
> @@ -113,6 +113,9 @@
>
> #define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_POISON 102 /* fatal signal on access to range */
> +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
> +
> /* compatibility flags */
> #define MAP_FILE 0
>
> diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
> index 6ce1f1ceb432..5dfd3d442de4 100644
> --- a/include/uapi/asm-generic/mman-common.h
> +++ b/include/uapi/asm-generic/mman-common.h
> @@ -79,6 +79,9 @@
>
> #define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_POISON 102 /* fatal signal on access to range */
> +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
> +
> /* compatibility flags */
> #define MAP_FILE 0
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index e871a72a6c32..7216e10723ae 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior)
> case MADV_POPULATE_READ:
> case MADV_POPULATE_WRITE:
> case MADV_COLLAPSE:
> + case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */
> return 0;
> default:
> /* be safe, default to 1. list exceptions explicitly */
> @@ -1017,6 +1018,157 @@ static long madvise_remove(struct vm_area_struct *vma,
> return error;
> }
>
> +static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked)
> +{
> + vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB;
> +
> + /*
> + * A user could lock after poisoning but that's fine, as they'd not be
> + * able to fault in. The issue arises when we try to zap existing locked
> + * VMAs. We don't want to do that.
> + */
> + if (!allow_locked)
> + disallowed |= VM_LOCKED;
> +
> + if (!vma_is_anonymous(vma))
> + return false;
> +
> + if ((vma->vm_flags & (VM_MAYWRITE | disallowed)) != VM_MAYWRITE)
> + return false;
> +
> + return true;
> +}
> +
> +static int guard_poison_install_pte(unsigned long addr, unsigned long next,
> + pte_t *ptep, struct mm_walk *walk)
> +{
> + unsigned long *num_installed = (unsigned long *)walk->private;
> +
> + (*num_installed)++;
> + /* Simply install a PTE marker, this causes segfault on access. */
> + *ptep = make_pte_marker(PTE_MARKER_GUARD);
> +
> + return 0;
> +}
> +
> +static bool is_guard_pte_marker(pte_t ptent)
> +{
> + return is_pte_marker(ptent) &&
> + is_guard_swp_entry(pte_to_swp_entry(ptent));
> +}
> +
> +static int guard_poison_pte_entry(pte_t *pte, unsigned long addr,
> + unsigned long next, struct mm_walk *walk)
> +{
> + pte_t ptent = ptep_get(pte);
> +
> + /*
> + * If not a guard marker, simply abort the operation. We return a value
> + * > 0 indicating a non-error abort.
> + */
> + return !is_guard_pte_marker(ptent);
> +}
> +
> +static const struct mm_walk_ops guard_poison_walk_ops = {
> + .install_pte = guard_poison_install_pte,
> + .pte_entry = guard_poison_pte_entry,
> + /* We might need to install an anon_vma. */
> + .walk_lock = PGWALK_WRLOCK,
> +};
> +
> +static long madvise_guard_poison(struct vm_area_struct *vma,
> + struct vm_area_struct **prev,
> + unsigned long start, unsigned long end)
> +{
> + long err;
> + bool retried = false;
> +
> + *prev = vma;
> + if (!is_valid_guard_vma(vma, /* allow_locked = */false))
> + return -EINVAL;
> +
> + /*
> + * Optimistically try to install the guard poison pages first. If any
> + * non-guard pages are encountered, give up and zap the range before
> + * trying again.
> + */
> + while (true) {
> + unsigned long num_installed = 0;
> +
> + /* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
> + err = walk_page_range_mm(vma->vm_mm, start, end,
> + &guard_poison_walk_ops,
> + &num_installed);
> + /*
> + * If we install poison markers, then the range is no longer
> + * empty from a page table perspective and therefore it's
> + * appropriate to have an anon_vma.
> + *
> + * This ensures that on fork, we copy page tables correctly.
> + */
> + if (err >= 0 && num_installed > 0) {
> + int err_anon = anon_vma_prepare(vma);
> +
> + if (err_anon)
> + err = err_anon;
> + }
> +
> + if (err <= 0)
> + return err;
> +
> + if (!retried)
> + /*
> +                        * OK, some of the range has non-guard pages mapped; zap
> +                        * them. This leaves existing guard pages in place.
> + */
> + zap_page_range_single(vma, start, end - start, NULL);
> + else
> + /*
> + * If we reach here, then there is a racing fault that
> + * has populated the PTE after we zapped. Give up and
> + * let the user know to try again.
> + */
> + return -EAGAIN;
> +
> + retried = true;
> + }
> +}
> +
> +static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
> + unsigned long next, struct mm_walk *walk)
> +{
> + pte_t ptent = ptep_get(pte);
> +
> + if (is_guard_pte_marker(ptent)) {
> + /* Simply clear the PTE marker. */
> + pte_clear_not_present_full(walk->mm, addr, pte, true);
> + update_mmu_cache(walk->vma, addr, pte);
> + }
> +
> + return 0;
> +}
> +
> +static const struct mm_walk_ops guard_unpoison_walk_ops = {
> + .pte_entry = guard_unpoison_pte_entry,
> + .walk_lock = PGWALK_RDLOCK,
> +};
> +
> +static long madvise_guard_unpoison(struct vm_area_struct *vma,
> + struct vm_area_struct **prev,
> + unsigned long start, unsigned long end)
> +{
> + *prev = vma;
> + /*
> + * We're ok with unpoisoning mlock()'d ranges, as this is a
> + * non-destructive action.
> + */
> + if (!is_valid_guard_vma(vma, /* allow_locked = */true))
> + return -EINVAL;
> +
> + return walk_page_range(vma->vm_mm, start, end,
> + &guard_unpoison_walk_ops, NULL);
> +}
> +
> /*
> * Apply an madvise behavior to a region of a vma. madvise_update_vma
> * will handle splitting a vm area into separate areas, each area with its own
> @@ -1098,6 +1250,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
> break;
> case MADV_COLLAPSE:
> return madvise_collapse(vma, prev, start, end);
> + case MADV_GUARD_POISON:
> + return madvise_guard_poison(vma, prev, start, end);
> + case MADV_GUARD_UNPOISON:
> + return madvise_guard_unpoison(vma, prev, start, end);
> }
>
> anon_name = anon_vma_name(vma);
> @@ -1197,6 +1353,8 @@ madvise_behavior_valid(int behavior)
> case MADV_DODUMP:
> case MADV_WIPEONFORK:
> case MADV_KEEPONFORK:
> + case MADV_GUARD_POISON:
> + case MADV_GUARD_UNPOISON:
> #ifdef CONFIG_MEMORY_FAILURE
> case MADV_SOFT_OFFLINE:
> case MADV_HWPOISON:
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 0c5d6d06107d..d0e3ebfadef8 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb,
> } else if (is_pte_marker_entry(entry)) {
> /*
> * Ignore error swap entries unconditionally,
> - * because any access should sigbus anyway.
> + * because any access should sigbus/sigsegv
> + * anyway.
> */
> if (is_poisoned_swp_entry(entry))
> continue;
> diff --git a/mm/mseal.c b/mm/mseal.c
> index ece977bd21e1..21bf5534bcf5 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior)
> case MADV_REMOVE:
> case MADV_DONTFORK:
> case MADV_WIPEONFORK:
> + case MADV_GUARD_POISON:
Can you please describe the rationale for adding this to the existing
mseal semantics?

I didn't find any description in the cover letter or in this patch's
description, hence asking.
Thanks
-Jeff
> return true;
> }
>
> --
> 2.46.2
>
>