Message-ID: <20201123183554.GC11688@willie-the-truck>
Date: Mon, 23 Nov 2020 18:35:55 +0000
From: Will Deacon <will@...nel.org>
To: Yu Zhao <yuzhao@...gle.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...roid.com,
Catalin Marinas <catalin.marinas@....com>,
Minchan Kim <minchan@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Anshuman Khandual <anshuman.khandual@....com>,
linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit toggling
On Fri, Nov 20, 2020 at 01:40:05PM -0700, Yu Zhao wrote:
> On Fri, Nov 20, 2020 at 02:35:57PM +0000, Will Deacon wrote:
> > clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
> > updating the page-tables for the current mm. However, since the mm is not
> > being freed, this can result in stale TLB entries on architectures which
> > elide 'fullmm' invalidation.
> >
> > Ensure that TLB invalidation is performed after updating soft-dirty
> > entries via clear_refs_write() by using the non-fullmm MMU gather API.
> >
> > Signed-off-by: Will Deacon <will@...nel.org>
> > ---
> > fs/proc/task_mmu.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index a76d339b5754..316af047f1aa 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> > count = -EINTR;
> > goto out_mm;
> > }
> > - tlb_gather_mmu_fullmm(&tlb, mm);
> > + tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);
>
> Let's assume my reply to patch 4 is wrong, and therefore we still need
> tlb_gather/finish_mmu() here. But then wouldn't this change deprive
> architectures other than ARM of the opportunity to optimize based on
> the fact that it's a full-mm flush?
Only for the soft-dirty case, but I think TLB invalidation is required
there because we are write-protecting the entries and I don't see any
mechanism to handle lazy invalidation for that (compared with the aging
case, which is handled via pte_accessible()).
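
For context, here's a rough sketch of why the aging case can tolerate a
deferred flush; it's loosely modelled on the x86 pte_accessible() (not
the exact kernel code), which keeps treating a PTE as live while a TLB
flush is still pending, so a concurrent hardware access isn't missed:

	/*
	 * Sketch modelled on x86's pte_accessible(): a non-present (or
	 * PROT_NONE) PTE must still be considered accessible while a TLB
	 * flush is pending, because stale translations may still be
	 * cached and used by another CPU until the flush completes.
	 */
	static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
	{
		if (pte_flags(a) & _PAGE_PRESENT)
			return true;

		if ((pte_flags(a) & _PAGE_PROTNONE) &&
		    mm_tlb_flush_pending(mm))
			return true;

		return false;
	}

There's no equivalent backstop for write-protected entries: once the
PTE is read-only, only a TLB invalidation stops a CPU from writing
through a stale writable entry.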
Furthermore, if we decide that we can relax the TLB invalidation
requirements here, then I'd much rather that was done deliberately than
as an accidental side-effect of another commit (since I think the
current behaviour was a consequence of 7a30df49f63a).
> It seems to me ARM's interpretation of tlb->fullmm is a special case,
> not the other way around.
Although I agree that this is subtle and error-prone (which is why I'm
trying to make the API more explicit here), it _is_ documented clearly
in asm-generic/tlb.h:
* - mmu_gather::fullmm
*
* A flag set by tlb_gather_mmu() to indicate we're going to free
* the entire mm; this allows a number of optimizations.
*
* - We can ignore tlb_{start,end}_vma(); because we don't
* care about ranges. Everything will be shot down.
*
* - (RISC) architectures that use ASIDs can cycle to a new ASID
* and delay the invalidation until ASID space runs out.
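
As an illustration of that last point, here's a simplified sketch along
the lines of arm64's tlb_flush() (not the exact code), showing how the
flush gets elided for fullmm:

	/*
	 * Simplified, arm64-flavoured tlb_flush() sketch: when the whole
	 * mm is being torn down, the ASID allocator guarantees a full
	 * invalidation before this ASID can be reallocated, so at most
	 * the walk-cache needs flushing (here via a full mm flush when
	 * page tables were freed).
	 */
	static inline void tlb_flush(struct mmu_gather *tlb)
	{
		if (tlb->fullmm) {
			if (tlb->freed_tables)
				flush_tlb_mm(tlb->mm);
			return;
		}

		/* Otherwise a real (ranged) invalidation is required. */
		flush_tlb_mm(tlb->mm);
	}

which is exactly why tlb_gather_mmu_fullmm() is unsafe when the mm
isn't actually being freed, as in clear_refs_write().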
Will