Message-ID: <2d6eefb8-c7c5-7d32-9a75-ae716f828cd9@redhat.com>
Date: Mon, 13 Feb 2023 13:05:16 +0100
From: David Hildenbrand <david@...hat.com>
To: Deepak Gupta <debug@...osinc.com>, linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org
Subject: Re: [PATCH v1 RFC Zisslpcfi 11/20] mmu: maybe_mkwrite updated to
manufacture shadow stack PTEs
On 13.02.23 05:53, Deepak Gupta wrote:
> maybe_mkwrite creates PTEs with WRITE encodings for the underlying arch
> if VM_WRITE is set in vma->vm_flags. Shadow stack memory is writable,
> except it can only be written by certain specific instructions. This
> patch allows maybe_mkwrite to create shadow stack PTEs if the vma is a
> shadow stack VMA. Each arch can define which combination of VMA flags
> denotes a shadow stack.
>
> Additionally, pte_mkshdwstk must be provided by arch-specific PTE
> construction headers (arch-specific pgtable.h) to create shadow stack
> PTEs.
>
> This patch provides dummy/stub pte_mkshdwstk if CONFIG_USER_SHADOW_STACK
> is not selected.
>
> Signed-off-by: Deepak Gupta <debug@...osinc.com>
> ---
> include/linux/mm.h | 23 +++++++++++++++++++++--
> include/linux/pgtable.h | 4 ++++
> 2 files changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 8f857163ac89..a7705bc49bfe 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1093,6 +1093,21 @@ static inline unsigned long thp_size(struct page *page)
> void free_compound_page(struct page *page);
>
> #ifdef CONFIG_MMU
> +
> +#ifdef CONFIG_USER_SHADOW_STACK
> +bool arch_is_shadow_stack_vma(struct vm_area_struct *vma);
> +#endif
> +
> +static inline bool
> +is_shadow_stack_vma(struct vm_area_struct *vma)
> +{
> +#ifdef CONFIG_USER_SHADOW_STACK
> + return arch_is_shadow_stack_vma(vma);
> +#else
> + return false;
> +#endif
> +}
> +
> /*
> * Do pte_mkwrite, but only if the vma says VM_WRITE. We do this when
> * servicing faults for write access. In the normal case, do always want
> @@ -1101,8 +1116,12 @@ void free_compound_page(struct page *page);
> */
> static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> {
> - if (likely(vma->vm_flags & VM_WRITE))
> - pte = pte_mkwrite(pte);
> + if (likely(vma->vm_flags & VM_WRITE)) {
> + if (unlikely(is_shadow_stack_vma(vma)))
> + pte = pte_mkshdwstk(pte);
> + else
> + pte = pte_mkwrite(pte);
> + }
> return pte;

Exactly what we are trying to avoid in the x86 approach right now.
Please see the x86 series for details; we shouldn't try reinventing the
wheel, but rather find a core-mm approach that fits multiple
architectures:
https://lkml.kernel.org/r/20230119212317.8324-1-rick.p.edgecombe@intel.com
--
Thanks,
David / dhildenb