Message-ID: <20201124011334.GA140483@google.com>
Date:   Mon, 23 Nov 2020 18:13:34 -0700
From:   Yu Zhao <yuzhao@...gle.com>
To:     Will Deacon <will@...nel.org>
Cc:     linux-kernel@...r.kernel.org, kernel-team@...roid.com,
        Catalin Marinas <catalin.marinas@....com>,
        Minchan Kim <minchan@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit
 toggling

On Mon, Nov 23, 2020 at 09:17:51PM +0000, Will Deacon wrote:
> On Mon, Nov 23, 2020 at 01:04:03PM -0700, Yu Zhao wrote:
> > On Mon, Nov 23, 2020 at 06:35:55PM +0000, Will Deacon wrote:
> > > On Fri, Nov 20, 2020 at 01:40:05PM -0700, Yu Zhao wrote:
> > > > On Fri, Nov 20, 2020 at 02:35:57PM +0000, Will Deacon wrote:
> > > > > clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
> > > > > updating the page-tables for the current mm. However, since the mm is not
> > > > > being freed, this can result in stale TLB entries on architectures which
> > > > > elide 'fullmm' invalidation.
> > > > > 
> > > > > Ensure that TLB invalidation is performed after updating soft-dirty
> > > > > entries via clear_refs_write() by using the non-fullmm API to MMU gather.
> > > > > 
> > > > > Signed-off-by: Will Deacon <will@...nel.org>
> > > > > ---
> > > > >  fs/proc/task_mmu.c | 2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > > > > index a76d339b5754..316af047f1aa 100644
> > > > > --- a/fs/proc/task_mmu.c
> > > > > +++ b/fs/proc/task_mmu.c
> > > > > @@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> > > > >  			count = -EINTR;
> > > > >  			goto out_mm;
> > > > >  		}
> > > > > -		tlb_gather_mmu_fullmm(&tlb, mm);
> > > > > +		tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);
> > > > 
> > > > Let's assume my reply to patch 4 is wrong, and therefore we still need
> > > > tlb_gather/finish_mmu() here. But then wouldn't this change deprive
> > > > architectures other than ARM of the opportunity to optimize based on the
> > > > fact it's a full-mm flush?
> > 
> > I double checked my conclusion on patch 4, and aside from a couple
> > of typos, it still seems correct after the weekend.
> 
> I still need to digest that, but I would prefer that we restore the
> invalidation first, and then have a subsequent commit to relax it. I find
> it hard to believe that the behaviour in mainline at the moment is deliberate.
> 
> That is, I'm not against optimising this, but I'd rather get it "obviously
> correct" first and the current code is definitely not that.

I wouldn't mind having this patch and patch 4 if the invalidation they
restore were actually correct -- but b3a81d0841a9 ("mm: fix KSM data
corruption") isn't correct to start with.

It is complicated, so please bear with me. Let's study this by looking
at examples this time.

> > > Only for the soft-dirty case, but I think TLB invalidation is required
> > > there because we are write-protecting the entries and I don't see any
> > > mechanism to handle lazy invalidation for that (compared with the aging
> > > case, which is handled via pte_accessible()).
> > 
> > The lazy invalidation for that is done when we write-protect a page,
> > not an individual PTE. When we do so, our decision is based on both
> > the dirty bit and the writable bit on each PTE mapping this page. So
> > we only need to make sure we don't lose both on a PTE. And we don't
> > here.
> 
> Sorry, I don't follow what you're getting at here (page vs pte). Please can
> you point me to the code you're referring to? The case I'm worried about is
> code that holds sufficient locks (e.g. mmap_sem + ptl) finding an entry
> where !pte_write() and assuming (despite pte_dirty()) that there can't be
> any concurrent modifications to the mapped page. Granted, I haven't found
> anything doing that, but I could not convince myself that it would be a bug
> to write such code, either.
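
To pin down the shorthand used in the diagrams below: the problematic
pattern is a path that skips the TLB flush whenever it finds a clean,
write-protected PTE. A rough sketch (illustrative only; the helper name
is made up and this is not a quote of page_mkclean_one()):

static bool writeback_may_skip_flush(pte_t pte)
{
	/*
	 * Clean and write-protected: in theory no CPU can dirty the
	 * page behind our back, unless a stale writable entry is
	 * still sitting in some TLB.
	 */
	return !pte_dirty(pte) && !pte_write(pte);
}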

Example 1: memory corruption is still possible with patches 4 & 6

  CPU0        CPU1        CPU2        CPU3
  ----        ----        ----        ----
  userspace                           page writeback

  [cache writable
   PTE in TLB]

              inc_tlb_flush_pending()
              clean_record_pte()
              pte_mkclean()

                          tlb_gather_mmu()
                          [inc_tlb_flush_pending()]
                          clear_refs_write()
                          pte_wrprotect()

                                      page_mkclean_one()
                                      !pte_dirty() && !pte_write()
                                      [true, no flush]

                                      write page to disk

  Write to page
  [using stale PTE]

                                      drop clean page
                                      [data integrity compromised]

              flush_tlb_range()

                          tlb_finish_mmu()
                          [flush (with patch 4)]

Example 2: why skipping the flush when write-protecting is not a problem
(after we fix the problem properly by adding the mm_tlb_flush_pending()
check).
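
The fix the cases below assume amounts to extending that shortcut with
the pending-flush counter, roughly (again a sketch with a made-up helper
name, not the actual writeback code):

static bool writeback_may_skip_flush(struct mm_struct *mm, pte_t pte)
{
	/*
	 * Refuse the shortcut while someone else has advertised a
	 * deferred flush via inc_tlb_flush_pending().
	 */
	return !pte_dirty(pte) && !pte_write(pte) &&
	       !mm_tlb_flush_pending(mm);
}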

Case a:

  CPU0        CPU1        CPU2        CPU3
  ----        ----        ----        ----
  userspace                           page writeback

  [cache writable
   PTE in TLB]

              inc_tlb_flush_pending()
              clean_record_pte()
              pte_mkclean()

                          clear_refs_write()
                          pte_wrprotect()

                                      page_mkclean_one()
                                      !pte_dirty() && !pte_write() &&
                                      !mm_tlb_flush_pending()
                                      [false: flush]

                                      write page to disk

  Write to page
  [page fault]

                                      drop clean page
                                      [data integrity guaranteed]

              flush_tlb_range()

Case b:

  CPU0        CPU1        CPU2
  ----        ----        ----
  userspace               page writeback

  [cache writable
   PTE in TLB]

              clear_refs_write()
              pte_wrprotect()
              [pte_dirty() is false]

                          page_mkclean_one()
                          !pte_dirty() && !pte_write() &&
                          !mm_tlb_flush_pending()
                          [true: no flush]

                          write page to disk

  Write to page
  [h/w tries to set
   the dirty bit
   but sees write-
   protected PTE,
   page fault]

                          drop clean page
                          [data integrity guaranteed]

Case c:

  CPU0        CPU1        CPU2
  ----        ----        ----
  userspace               page writeback

  [cache writable
   PTE in TLB]

              clear_refs_write()
              pte_wrprotect()
              [pte_dirty() is true]

                          page_mkclean_one()
                          !pte_dirty() && !pte_write() &&
                          !mm_tlb_flush_pending()
                          [false: flush]

                          write page to disk

  Write to page
  [page fault]

                          drop clean page
                          [data integrity guaranteed]
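
For completeness, the deferring side in these diagrams (CPU1's
clean_record_pte() path in example 1 and case a) is assumed to bracket
its PTE updates with the pending counter, roughly like this (condensed
sketch, not actual in-tree code):

	inc_tlb_flush_pending(mm);
	/* ... pte_mkclean() etc. on the range, under the ptl ... */
	flush_tlb_range(vma, start, end);
	dec_tlb_flush_pending(mm);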

> > > Furthermore, if we decide that we can relax the TLB invalidation
> > > requirements here, then I'd much rather that was done deliberately, rather
> > > than as an accidental side-effect of another commit (since I think the
> > > current behaviour was a consequence of 7a30df49f63a).
> > 
> > Nope. tlb_gather/finish_mmu() should be added by b3a81d0841a9
                                  ^^^^^^ shouldn't

Another typo, I apologize.

> > ("mm: fix KSM data corruption") in the first place.
> 
> Sure, but if you check out b3a81d0841a9 then you have a fullmm TLB
> invalidation in tlb_finish_mmu(). 7a30df49f63a is what removed that, no?
> 
> Will
