Date:   Tue, 25 Jul 2023 23:40:56 -0600
From:   Yu Zhao <yuzhao@...gle.com>
To:     Yin Fengwei <fengwei.yin@...el.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, minchan@...nel.org, willy@...radead.org,
        david@...hat.com, ryan.roberts@....com, shy828301@...il.com
Subject: Re: [RFC PATCH v2 2/4] madvise: Use notify-able API to clear and
 flush page table entries

On Tue, Jul 25, 2023 at 10:44 PM Yin Fengwei <fengwei.yin@...el.com> wrote:
>
>
>
> On 7/26/23 11:26, Yu Zhao wrote:
> > On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@...el.com> wrote:
> >>
> >>
> >> On 7/25/23 13:55, Yu Zhao wrote:
> >>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@...el.com> wrote:
> >>>>
> >>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
> >>>> young bit of pte/pmd is cleared without notifying the subscriber.
> >>>>
> >>>> Use the notify-able API to make sure the subscriber is signaled about
> >>>> the young bit clearing.
> >>>>
> >>>> Signed-off-by: Yin Fengwei <fengwei.yin@...el.com>
> >>>> ---
> >>>>  mm/madvise.c | 18 ++----------------
> >>>>  1 file changed, 2 insertions(+), 16 deletions(-)
> >>>>
> >>>> diff --git a/mm/madvise.c b/mm/madvise.c
> >>>> index f12933ebcc24..b236e201a738 100644
> >>>> --- a/mm/madvise.c
> >>>> +++ b/mm/madvise.c
> >>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>                         return 0;
> >>>>                 }
> >>>>
> >>>> -               if (pmd_young(orig_pmd)) {
> >>>> -                       pmdp_invalidate(vma, addr, pmd);
> >>>> -                       orig_pmd = pmd_mkold(orig_pmd);
> >>>> -
> >>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
> >>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> >>>> -               }
> >>>> -
> >>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
> >>>>                 folio_clear_referenced(folio);
> >>>>                 folio_test_clear_young(folio);
> >>>>                 if (folio_test_active(folio))
> >>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>
> >>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>>>
> >>>> -               if (pte_young(ptent)) {
> >>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
> >>>> -                                                       tlb->fullmm);
> >>>> -                       ptent = pte_mkold(ptent);
> >>>> -                       set_pte_at(mm, addr, pte, ptent);
> >>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
> >>>> -               }
> >>>> -
> >>>> +               ptep_clear_flush_young_notify(vma, addr, pte);
> >>>
> >>> These two places are tricky.
> >>>
> >>> I agree there is a problem here, i.e., we are not consulting the mmu
> >>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
> >>> known problem to me for a while (not a high priority one).
> >>>
> >>> tlb_remove_tlb_entry() is a batched flush; ptep_clear_flush_young()
> >>> is not. But on x86, we might see a performance improvement since
> >>> ptep_clear_flush_young() doesn't flush the TLB at all. On ARM, there
> >>> might be regressions though.
> >>>
> >>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
> >>> prefers flush. So I'll let him chime in.
> >> I am OK with either way, even though the no-flush way here is more
> >> efficient for arm64. Let's wait for Minchan's comment.
> >
> > Yes, and I don't think there would be any "negative" consequences
> > without tlb flushes when clearing the A-bit.
> >
> >>> If we do end up with ptep_clear_young_notify(), please remove
> >>> mmu_gather -- it should have been done in this patch.
> >>
> >> I suppose "remove mmu_gather" means to trigger the TLB flush in a
> >> batched way, to make sure no stale data stays in the TLB for a long
> >> time on the arm64 platform.
> >
> > In madvise_cold_or_pageout_pte_range(), we only need struct
> > mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
> > the TLB after clearing the A-bit. There is no correctness issue,
> > e.g., potential data corruption, involved there.
>
> From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
> the reason arm64 didn't drop the TLB flush entirely in
> ptep_clear_flush_young() is to prevent stale data in the TLB. I suppose
> there is no correctness issue there either.
>
> So why is keeping stale data in the TLB fine in
> madvise_cold_or_pageout_pte_range()?

Sorry, I'm not sure I understand your question here.

In this patch, you removed tlb_remove_tlb_entry(), so we don't need
struct mmu_gather *tlb any more.
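
Just as an untested sketch of what I mean (helper names as in
include/linux/mmu_notifier.h), the two hunks would then reduce to
something like:

		/* PMD case */
		pmdp_clear_young_notify(vma, addr, pmd);

		/* PTE case */
		ptep_clear_young_notify(vma, addr, pte);

and with tlb_remove_pmd_tlb_entry()/tlb_remove_tlb_entry() gone, the
struct mmu_gather *tlb plumbing in madvise_cold_or_pageout_pte_range()
can be dropped along with it.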

If you are asking why I prefer ptep_clear_young_notify() (no flush),
which also doesn't need tlb_remove_tlb_entry(), then the answer is
that TLB size doesn't scale the way DRAM does: the gap has been
growing exponentially. So there is no way the TLB can hold stale
entries long enough to cause a measurable effect on the A-bit. This
isn't conjecture -- it's been proven from the other direction: we have
encountered bugs (almost every year) caused by missing TLB flushes
that resulted in data corruption. They were never easy to reproduce,
meaning stale entries never stayed in the TLB for long.
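
For reference, the difference between the two flavors is roughly this
(paraphrasing include/linux/mmu_notifier.h, not verbatim):

	/* flush variant: the architecture decides whether to flush the
	 * TLB for this address (x86 doesn't, per the discussion above) */
	young = ptep_clear_flush_young(vma, addr, pte);
	young |= mmu_notifier_clear_flush_young(mm, addr, addr + PAGE_SIZE);

	/* no-flush variant: only clears the A-bit in the PTE */
	young = ptep_test_and_clear_young(vma, addr, pte);
	young |= mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE);

Either way the secondary MMU is notified, which is the part this patch
is fixing.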
