Message-ID: <Z/0V9r6rjEjSH3fh@li-008a6a4c-3549-11b2-a85c-c5cc2836eea2.ibm.com>
Date: Mon, 14 Apr 2025 16:04:38 +0200
From: Alexander Gordeev <agordeev@...ux.ibm.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
        "David S. Miller" <davem@...emloft.net>,
        Andreas Larsson <andreas@...sler.com>, Juergen Gross <jgross@...e.com>,
        Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
        Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Catalin Marinas <catalin.marinas@....com>, linux-mm@...ck.org,
        sparclinux@...r.kernel.org, xen-devel@...ts.xenproject.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/5] Fix lazy mmu mode

On Mon, Apr 14, 2025 at 02:22:53PM +0100, Ryan Roberts wrote:
> On 10/04/2025 17:07, Alexander Gordeev wrote:
> >> I'm planning to implement lazy mmu mode for arm64 to optimize vmalloc. As part
> >> of that, I will extend lazy mmu mode to cover kernel mappings in vmalloc table
> >> walkers. While lazy mmu mode is already used for kernel mappings in a few
> >> places, this will extend its use significantly.
> >>
> >> Having reviewed the existing lazy mmu implementations in powerpc, sparc and x86,
> >> it looks like there are a bunch of bugs, some of which may be more likely to
> >> trigger once I extend the use of lazy mmu.
> > 
> > Do you have any idea about generic code issues as a result of not adhering to
> > the originally stated requirement:
> > 
> >   /*
> >    ...
> >    * the PTE updates which happen during this window.  Note that using this
> >    * interface requires that read hazards be removed from the code.  A read
> >    * hazard could result in the direct mode hypervisor case, since the actual
> >    * write to the page tables may not yet have taken place, so reads though
> >    * a raw PTE pointer after it has been modified are not guaranteed to be
> >    * up to date.
> >    ...
> >    */
> > 
> > I tried to follow a few code paths and at least this one does not look so good:
> > 
> > copy_pte_range(..., src_pte, ...)
> > 	ret = copy_nonpresent_pte(..., src_pte, ...)
> > 		try_restore_exclusive_pte(..., src_pte, ...)	// is_device_exclusive_entry(entry)
> > 			restore_exclusive_pte(..., ptep, ...)
> > 				set_pte_at(..., ptep, ...)
> > 					set_pte(ptep, pte);	// saved in lazy mmu mode
> > 
> > 	// ret == -ENOENT
> > 
> > 	ptent = ptep_get(src_pte);				// lazy mmu save is not observed
> > 	ret = copy_present_ptes(..., ptent, ...);		// wrong ptent used
> > 
> > I am not aware whether any effort to have "read hazards be removed from the code"
> > has ever been made, or whether the generic code is safe in this regard.
> > 
> > What is your take on this?
> 
> Hmm, that looks like a bug to me, at least based on the stated requirements.
> Although this is not a "read through a raw PTE *pointer*", it is a ptep_get().
> The arch code can override that so I guess it has an opportunity to flush. But I
> don't think any arches are currently doing that.
> 
> Probably the simplest fix is to add arch_flush_lazy_mmu_mode() before the
> ptep_get()?

Which would completely defeat the very idea of lazy mmu mode?
(As one would then flush on every PTE iteration of the page table walk.)
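
For illustration, a rough sketch of where such a flush would have to go in
the copy_pte_range() loop (just the shape of it, details elided, not the
actual mm/memory.c code):

	arch_enter_lazy_mmu_mode();
	do {
		ptent = ptep_get(src_pte);
		if (!pte_present(ptent)) {
			ret = copy_nonpresent_pte(...);	/* may set_pte_at() lazily */
			...				/* other return values handled here */
			if (ret == -ENOENT) {
				/* proposed: force the deferred write out first */
				arch_flush_lazy_mmu_mode();
				ptent = ptep_get(src_pte);
			}
		}
		ret = copy_present_ptes(..., ptent, ...);
		...
	} while (...);
	arch_leave_lazy_mmu_mode();

Once every raw PTE read potentially has to be preceded by such a flush,
there is not much batching left to speak of.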

> It won't be a problem in practice for arm64, since the pgtables are always
> updated immediately. I just want to use these hooks to defer/batch barriers in
> certain cases.
> 
> And this is a pre-existing issue for the arches that use lazy mmu with
> device-exclusive mappings, which my extending lazy mmu into vmalloc won't
> exacerbate.
> 
> Would you be willing/able to submit a fix?

Well, we have a dozen lazy mmu cases and I would guess this is not the
only piece of code that is affected. I was thinking about a debug feature
that could help spot all the troubled locations.

Then we could assess and decide whether it is feasible to fix. Just turning
the code above into the PTE read-modify-update pattern is quite an exercise...
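
As for the debug feature, something along these lines could perhaps work,
i.e. teach the generic ptep_get() to complain (a rough sketch only;
in_lazy_mmu_mode() is a made-up helper that each arch implementing lazy mmu
would need to provide, e.g. a per-CPU flag set in arch_enter_lazy_mmu_mode()
and cleared on leave/flush):

	static inline pte_t ptep_get(pte_t *ptep)
	{
		/*
		 * Reading a raw PTE while updates may still be queued is
		 * exactly the read hazard the comment warns about; warn
		 * once so such call sites can be spotted and audited.
		 */
		VM_WARN_ON_ONCE(in_lazy_mmu_mode());
		return READ_ONCE(*ptep);
	}

It would over-report reads that happen before any update has been queued in
the current batch, but for a debug-only option that might be acceptable.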

> Thanks,
> Ryan
