Date:   Fri, 18 Dec 2020 21:00:30 +0800
From:   Lai Jiangshan <jiangshanlai@...il.com>
To:     LKML <linux-kernel@...r.kernel.org>
Cc:     Lai Jiangshan <laijs@...ux.alibaba.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        X86 ML <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH V2 1/3] x86/mm/pti: handle unaligned address for pmd clone
 in pti_clone_pagetable()

Hello Dave Hansen,

Could you please help review these patches?

I believe they follow your suggestion, except that alignment is not
forced in the caller; the reason is explained in the comment in the code.

Thanks
Lai

On Thu, Dec 10, 2020 at 9:34 PM Lai Jiangshan <jiangshanlai@...il.com> wrote:
>
> From: Lai Jiangshan <laijs@...ux.alibaba.com>
>
> Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
> in pti_clone_pagetable()") handles unaligned addresses well for unmapped
> PUDs/PMDs etc. But an unaligned address for a mapped pmd also needs to
> be handled.
>
> For a mapped pmd, if @addr is not aligned to PMD_SIZE, then with the
> current logic the next pmd (when level == PTI_CLONE_PMD or the next pmd
> is large) or the last ptes in the next pmd (when level == PTI_CLONE_PTE)
> will not be cloned when @end < @addr + PMD_SIZE.
>
> Forcing alignment in the caller is not a good idea because of one of
> the cases (see the comment in the code), so instead handle the alignment
> in pti_clone_pgtable().
>
> Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
> ---
>  arch/x86/mm/pti.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
> index 1aab92930569..7ee99ef13a99 100644
> --- a/arch/x86/mm/pti.c
> +++ b/arch/x86/mm/pti.c
> @@ -342,6 +342,21 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
>                 }
>
>                 if (pmd_large(*pmd) || level == PTI_CLONE_PMD) {
> +                       /*
> +                        * pti_clone_kernel_text() might be called with
> +                        * @start not aligned to PMD_SIZE. We need to align
> +                        * it here, otherwise the next pmd or the last ptes
> +                        * are not cloned when @end < @addr + PMD_SIZE.
> +                        *
> +                        * We can't force pti_clone_kernel_text() to align
> +                        * @addr to PMD_SIZE when level == PTI_CLONE_PTE,
> +                        * but the problem can still exist when the first
> +                        * pmd is large. It is not a good idea to check
> +                        * whether the first pmd is large in the caller,
> +                        * so simply align the address here.
> +                        */
> +                       addr = round_down(addr, PMD_SIZE);
> +
>                         target_pmd = pti_user_pagetable_walk_pmd(addr);
>                         if (WARN_ON(!target_pmd))
>                                 return;
> --
> 2.19.1.6.gb485710b
>
