Message-ID: <CAGsJ_4wYcaYJS3f2FXi1L6wg4zznvgicGK5Gw+ZpcW4pwCQx5g@mail.gmail.com>
Date: Fri, 4 Oct 2024 23:35:06 +0800
From: Barry Song <21cnbao@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: ying.huang@...el.com, akpm@...ux-foundation.org, david@...hat.com, 
	hannes@...xchg.org, hughd@...gle.com, kaleshsingh@...gle.com, 
	kasong@...cent.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org, 
	liyangouwen1@...o.com, mhocko@...e.com, minchan@...nel.org, sj@...nel.org, 
	stable@...r.kernel.org, surenb@...gle.com, v-songbaohua@...o.com, 
	willy@...radead.org, yosryahmed@...gle.com, yuzhao@...gle.com
Subject: Re: [PATCH] mm: avoid unconditional one-tick sleep when
 swapcache_prepare fails

On Fri, Oct 4, 2024 at 6:53 AM Chris Li <chrisl@...nel.org> wrote:
>
> On Tue, Oct 1, 2024 at 6:58 PM Barry Song <21cnbao@...il.com> wrote:
> >
> > On Wed, Oct 2, 2024 at 8:43 AM Huang, Ying <ying.huang@...el.com> wrote:
> > >
> > > Barry Song <21cnbao@...il.com> writes:
> > >
> > > > On Tue, Oct 1, 2024 at 7:43 AM Huang, Ying <ying.huang@...el.com> wrote:
> > > >>
> > > >> Barry Song <21cnbao@...il.com> writes:
> > > >>
> > > >> > On Sun, Sep 29, 2024 at 3:43 PM Huang, Ying <ying.huang@...el.com> wrote:
> > > >> >>
> > > >> >> Hi, Barry,
> > > >> >>
> > > >> >> Barry Song <21cnbao@...il.com> writes:
> > > >> >>
> > > >> >> > From: Barry Song <v-songbaohua@...o.com>
> > > >> >> >
> > > >> >> > Commit 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > >> >> > introduced an unconditional one-tick sleep when `swapcache_prepare()`
> > > >> >> > fails, which has led to reports of UI stuttering on latency-sensitive
> > > >> >> > Android devices. To address this, we can use a waitqueue to wake up
> > > >> >> > tasks that fail `swapcache_prepare()` sooner, instead of always
> > > >> >> > sleeping for a full tick. While tasks may occasionally be woken by an
> > > >> >> > unrelated `do_swap_page()`, this method is preferable to the two
> > > >> >> > alternatives: rapid re-entry into page faults, which can cause
> > > >> >> > livelocks, and multi-millisecond sleeps, which visibly degrade the
> > > >> >> > user experience.
> > > >> >>
> > > >> >> In general, I think that this works.  Why not extend the solution to
> > > >> >> cover schedule_timeout_uninterruptible() in __read_swap_cache_async()
> > > >> >> too?  We can call wake_up() when we clear SWAP_HAS_CACHE.  To avoid
> > > >> >
> > > >> > Hi Ying,
> > > >> > Thanks for your comments.
> > > >> > I feel that extending the solution to __read_swap_cache_async() should
> > > >> > be done in a separate patch. On phones, I've never encountered any issues
> > > >> > reported on that path, so it seems better suited to an optimization than
> > > >> > to a hotfix?
> > > >>
> > > >> Yes.  It's fine to do that in another patch as optimization.
> > > >
> > > > Ok. I'll prepare a separate patch for optimizing that path.
> > >
> > > Thanks!
> > >
> > > >>
> > > >> >> the overhead of calling wake_up() when there's no task waiting, we
> > > >> >> can use an atomic to count waiting tasks.
> > > >> >
> > > >> > I'm not sure it's worth adding the complexity, as wake_up() on an empty
> > > >> > waitqueue should have a very low cost on its own?
> > > >>
> > > >> wake_up() needs to call spin_lock_irqsave() unconditionally on a global
> > > >> shared lock.  On systems with many CPUs (such as servers), this may
> > > >> cause severe lock contention.  Even the cache ping-pong alone may hurt
> > > >> performance significantly.
> > > >
> > > > I understand that cache synchronization was a significant issue before
> > > > qspinlock, but it seems to be less of a concern after its implementation.
> > >
> > > Unfortunately, qspinlock cannot eliminate the cache ping-pong issue, as
> > > discussed in the following thread.
> > >
> > > https://lore.kernel.org/lkml/20220510192708.GQ76023@worktop.programming.kicks-ass.net/
> > >
> > > > However, using a global atomic variable would still trigger cache broadcasts,
> > > > correct?
> > >
> > > We can change the atomic variable to non-zero only when
> > > swapcache_prepare() returns non-zero, and call wake_up() only when the
> > > atomic variable is non-zero.  Because swapcache_prepare() returns 0 most
> > > of the time, the atomic variable stays 0 most of the time.  If we don't
> > > change the value of the atomic variable, cache ping-pong will not be
> > > triggered.
> >
> > Yes, this can be implemented by adding another atomic variable.
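> >
> > Something like this rough sketch (untested; the swapcache_waiters name
> > is made up, and real code may need a memory barrier to pair the read
> > with the increment):
> >
> > 	static atomic_t swapcache_waiters = ATOMIC_INIT(0);
> >
> > 	/* Failure path: swapcache_prepare() returned non-zero. */
> > 	atomic_inc(&swapcache_waiters);
> > 	add_wait_queue(swapcache_wq, &wait);
> > 	schedule_timeout_uninterruptible(1);
> > 	remove_wait_queue(swapcache_wq, &wait);
> > 	atomic_dec(&swapcache_waiters);
> >
> > 	/* Common path: skip the waitqueue lock when nobody is waiting. */
> > 	swapcache_clear(si, entry, nr_pages);
> > 	if (atomic_read(&swapcache_waiters))
> > 		wake_up(swapcache_wq);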
> >
> > >
> > > Hi, Kairui,
> > >
> > > Do you have any test cases for parallel zram swap-in?  If so, they
> > > can be used to verify whether cache ping-pong is an issue and whether it
> > > can be fixed via a global atomic variable.
> > >
> >
> > Yes. Kairui, please run a test on your machine with lots of cores, before
> > and after adding a global atomic variable as suggested by Ying. I'm
> > sorry, I don't have a server machine.
> >
> > If it turns out that cache ping-pong is an issue, another approach
> > would be a waitqueue hash:
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 2366578015ad..aae0e532d8b6 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4192,6 +4192,23 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> >  }
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >
> > +/*
> > + * Alleviating the 'thundering herd' phenomenon using a waitqueue hash
> > + * when multiple do_swap_page() operations occur simultaneously.
> > + */
> > +#define SWAPCACHE_WAIT_TABLE_BITS 5
> > +#define SWAPCACHE_WAIT_TABLE_SIZE (1 << SWAPCACHE_WAIT_TABLE_BITS)
> > +static wait_queue_head_t swapcache_wqs[SWAPCACHE_WAIT_TABLE_SIZE];
> > +
> > +static int __init swapcache_wqs_init(void)
> > +{
> > +       for (int i = 0; i < SWAPCACHE_WAIT_TABLE_SIZE; i++)
> > +               init_waitqueue_head(&swapcache_wqs[i]);
> > +
> > +       return 0;
> > +}
> > +late_initcall(swapcache_wqs_init);
> > +
> >  /*
> >   * We enter with non-exclusive mmap_lock (to exclude vma changes,
> >   * but allow concurrent faults), and pte mapped but not yet locked.
> > @@ -4204,6 +4221,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  {
> >         struct vm_area_struct *vma = vmf->vma;
> >         struct folio *swapcache, *folio = NULL;
> > +       DECLARE_WAITQUEUE(wait, current);
> > +       wait_queue_head_t *swapcache_wq;
> >         struct page *page;
> >         struct swap_info_struct *si = NULL;
> >         rmap_t rmap_flags = RMAP_NONE;
> > @@ -4297,12 +4316,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >                                  * undetectable as pte_same() returns true due
> >                                  * to entry reuse.
> >                                  */
> > +                               swapcache_wq = &swapcache_wqs[hash_long(vmf->address & PMD_MASK,
> > +                                                       SWAPCACHE_WAIT_TABLE_BITS)];
>
> It is better to hash against the swap entry value rather than the
> fault address. The same swap entry can map to different parts of the
> page table. I am not sure this is triggerable in the SYNC_IO page
> fault path; hashing against the swap entries is more obviously correct.
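> For example (sketch, untested):
>
> 	swapcache_wq = &swapcache_wqs[hash_long(entry.val,
> 				SWAPCACHE_WAIT_TABLE_BITS)];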
>

I am not convinced that the swap entry offset is the correct key here.

1. do_swap_page() is always for anon pages, and there is no possibility
for an anon page to be mapped at different virtual addresses; shmem
never takes this code path.

2. Considering the mTHP swap-in case, the aligned virtual address is
the only reliable value to hash. If we only had to consider small-folio
swap-in, hashing the swap entry value would be fine.
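
To make point 2 concrete, here is an illustrative sketch (not part of
the patch; the helper name is made up). Two faults racing on the same
large folio arrive at different addresses, and thus at different swap
slots, but both reduce to the same PMD-aligned base, so they always
pick the same waitqueue bucket:

	/*
	 * Illustrative only: deriving the bucket from the PMD-aligned
	 * address makes all faults within one PMD region, including
	 * every page of an mTHP, share a single waitqueue.
	 */
	static wait_queue_head_t *swapcache_wq_for(unsigned long address)
	{
		return &swapcache_wqs[hash_long(address & PMD_MASK,
						SWAPCACHE_WAIT_TABLE_BITS)];
	}

By contrast, hashing each task's own swap entry value could pick
different buckets for two faults on the same mTHP, and a waiter could
then miss its wakeup.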

> Chris
>
> >                                 if (swapcache_prepare(entry, nr_pages)) {
> >                                         /*
> >                                          * Relax a bit to prevent rapid
> >                                          * repeated page faults.
> >                                          */
> > +                                       add_wait_queue(swapcache_wq, &wait);
> >                                         schedule_timeout_uninterruptible(1);
> > +                                       remove_wait_queue(swapcache_wq, &wait);
> >                                         goto out_page;
> >                                 }
> >                                 need_clear_cache = true;
> > @@ -4609,8 +4632,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >                 pte_unmap_unlock(vmf->pte, vmf->ptl);
> >  out:
> >         /* Clear the swap cache pin for direct swapin after PTL unlock */
> > -       if (need_clear_cache)
> > +       if (need_clear_cache) {
> >                 swapcache_clear(si, entry, nr_pages);
> > +               wake_up(swapcache_wq);
> > +       }
> >         if (si)
> >                 put_swap_device(si);
> >         return ret;
> > @@ -4625,8 +4650,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >                 folio_unlock(swapcache);
> >                 folio_put(swapcache);
> >         }
> > -       if (need_clear_cache)
> > +       if (need_clear_cache) {
> >                 swapcache_clear(si, entry, nr_pages);
> > +               wake_up(swapcache_wq);
> > +       }
> >         if (si)
> >                 put_swap_device(si);
> >         return ret;
> > --
> > 2.34.1
> >
> > > --
> > > Best Regards,
> > > Huang, Ying
> >

Thanks
Barry
