Date:   Wed, 1 Mar 2023 19:17:53 +0800
From:   Dylan Jhong <dylan@...estech.com>
To:     Alexandre Ghiti <alex@...ti.fr>
CC:     <linux-riscv@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
        <liushixin2@...wei.com>, <x5710999x@...il.com>,
        <bjorn@...osinc.com>, <abrestic@...osinc.com>, <peterx@...hat.com>,
        <hanchuanhua@...o.com>, <apopple@...dia.com>, <hca@...ux.ibm.com>,
        <aou@...s.berkeley.edu>, <palmer@...belt.com>,
        <paul.walmsley@...ive.com>, <tim609@...estech.com>,
        <peterlin@...estech.com>, <ycliang@...estech.com>
Subject: Re: [PATCH] RISC-V: mm: Support huge page in vmalloc_fault()

On Fri, Feb 24, 2023 at 01:47:20PM +0100, Alexandre Ghiti wrote:
> Hi Dylan,
> 
> On 2/24/23 11:40, Dylan Jhong wrote:
> > RISC-V supports ioremap() with huge page (pud/pmd) mapping, but
> > vmalloc_fault() assumes that the vmalloc range is limited to pte
> > mappings. Add huge page support to complete the vmalloc_fault()
> > function.
> > 
> > Fixes: 310f541a027b ("riscv: Enable HAVE_ARCH_HUGE_VMAP for 64BIT")
> > 
> > Signed-off-by: Dylan Jhong <dylan@...estech.com>
> > ---
> >   arch/riscv/mm/fault.c | 5 +++++
> >   1 file changed, 5 insertions(+)
> > 
> > diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> > index eb0774d9c03b..4b9953b47d81 100644
> > --- a/arch/riscv/mm/fault.c
> > +++ b/arch/riscv/mm/fault.c
> > @@ -143,6 +143,8 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
> >   		no_context(regs, addr);
> >   		return;
> >   	}
> > +	if (pud_leaf(*pud_k))
> > +		goto flush_tlb;
> >   	/*
> >   	 * Since the vmalloc area is global, it is unnecessary
> > @@ -153,6 +155,8 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
> >   		no_context(regs, addr);
> >   		return;
> >   	}
> > +	if (pmd_leaf(*pmd_k))
> > +		goto flush_tlb;
> >   	/*
> >   	 * Make sure the actual PTE exists as well to
> > @@ -172,6 +176,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
> >   	 * ordering constraint, not a cache flush; it is
> >   	 * necessary even after writing invalid entries.
> >   	 */
> > +flush_tlb:
> >   	local_flush_tlb_page(addr);
> >   }
> 
> 
> This looks good to me, you can add:
> 
> Reviewed-by: Alexandre Ghiti <alexghiti@...osinc.com>
> 
> One question: how did you encounter this bug?
> 
> Thanks,
> 
> Alex
>
Hi Alex,

>>> One question: how did you encounter this bug?
This bug is caused by the combination of out-of-order execution and ioremap().
Out-of-order execution may speculatively access the VA returned by ioremap() and
cache an invalid entry in the TLB before the mapping is actually created by
ioremap(). When the CPU later really accesses the VA, it triggers a page fault
because the TLB already holds a stale entry for that VA.

We expect vmalloc_fault() in the page fault handler to issue sfence.vma to
invalidate the stale TLB entry [1]. But since vmalloc_fault() did not support
huge pages, we hit nested page faults in vmalloc_fault() while it tried to walk
a pmd/pud huge-page mapping down to a pte entry. This is the reason I sent this patch.

ref:
    [1]: https://patchwork.kernel.org/project/linux-riscv/patch/20210412000531.12249-1-liu@jiuyang.me/
