lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20240530093306.GA35610@system.software.com>
Date: Thu, 30 May 2024 18:33:07 +0900
From: Byungchul Park <byungchul@...com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Dave Hansen <dave.hansen@...el.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, kernel_team@...ynix.com,
	akpm@...ux-foundation.org, vernhao@...cent.com,
	mgorman@...hsingularity.net, hughd@...gle.com, willy@...radead.org,
	david@...hat.com, peterz@...radead.org, luto@...nel.org,
	tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
	dave.hansen@...ux.intel.com, rjgolo@...il.com
Subject: Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers
 over 90%

On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@...com> writes:
> 
> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> >> Byungchul Park <byungchul@...com> writes:
> >> 
> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> >> > All the code updating ptes already performs TLB flush needed in a safe
> >> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> >> >> > a higer level than arch code, just leaves stale ro tlb entries that are
> >> >> > currently supposed to be in use.  Could you give a scenario that you are
> >> >> > concering?
> >> >> 
> >> >> Let's go back this scenario:
> >> >> 
> >> >>  	fd = open("/some/file", O_RDONLY);
> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> >>  	foo1 = *ptr1;
> >> >> 
> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> >> cleared.
> >> >> 
> >> >> Then, the user might do:
> >> >> 
> >> >> 	munmap(ptr1, PAGE_SIZE);
> >> >> 
> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> >> mmu_gather'.
> >> >> 
> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> >> 'struct mmu_gather':
> >> >> 
> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> >> 	      tlb->cleared_p4ds))
> >> >>                 return;
> >> >> 
> >> >> But since there were no cleared PTEs (or anything else) during the
> >> >> unmap, this just returns and doesn't flush the TLB.
> >> >> 
> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> >> still hasn't been flushed.
> >> >
> >> > Thank you for the explanation.  I got you.  I think I could handle the
> >> > case through a new flag in vma or something indicating LUF has deferred
> >> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
> >> > can be aware of it.  Of course, the performance change should be checked
> >> > again.  Thoughts?
> >> 
> >> I suggest you to start with the simple case.  That is, only support page
> >> reclaiming and migration.  A TLB flushing can be enforced during unmap
> >> with something similar as flush_tlb_batched_pending().
> >
> > While reading flush_tlb_batched_pending(mm), I found it already performs
> > TLB flush for the target mm, if set_tlb_ubc_flush_pending(mm) has been
> > hit at least once since the last flush_tlb_batched_pending(mm).
> >
> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> > perform TLB flush required, in flush_tlb_batched_pending(mm) during
> > munmap().  So it looks safe to me with regard to munmap() already.
> >
> > Is there something that I'm missing?
> >
> > JFYI, regarding to mmap(), I have reworked on fault handler to give up
> > luf when needed in a better way.
> 
> If TLB flush is always enforced during munmap(), then your solution can
> only avoid TLB flushing for page reclaiming and migration, not unmap.

I'm not sure if I understand what you meant.  Could you explain it in
more detail?

LUF works for only *unmapping* that happens during page reclaiming and
migration.  Other unmappings than page reclaiming and migration are not
what LUF works for.  That's why I thought flush_tlb_batched_pending()
could handle the pending tlb flushes in the case.

It'd be appreciated if you explain what you meant more.

	Byungchul

> Or do I miss something?
> 
> --
> Best Regards,
> Huang, Ying

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ