Message-ID: <CAF8kJuMXjp1A1kdS_x-S_dyst8MLHwjuAEt-SfGERKVYZNmRww@mail.gmail.com>
Date: Wed, 14 Feb 2024 10:56:59 -0800
From: Chris Li <chriscli@...gle.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Wei Xu <weixugc@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
Greg Thelen <gthelen@...gle.com>, Chun-Tse Shao <ctshao@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>, Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...hsingularity.net>, Huang Ying <ying.huang@...el.com>,
Nhat Pham <nphamcs@...il.com>, Kairui Song <kasong@...cent.com>,
Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH v3] mm: swap: async free swap slot cache entries
On Tue, Feb 13, 2024 at 4:08 PM Tim Chen <tim.c.chen@...ux.intel.com> wrote:
>
> On Tue, 2024-02-13 at 15:20 -0800, Chris Li wrote:
> > We discovered that 1% of swap page faults take 100us+, while 50% of
> > swap faults complete in under 20us.
> >
> > Further investigation shows that in the long tail case, a large
> > portion of the time is spent in the free_swap_slots() function.
> >
> > The percpu cache of swap slots is freed in a batch of 64 entries
> > inside free_swap_slots(). These cache entries are accumulated
> > from previous page faults, which may not be related to the current
> > process.
> >
> > Doing the batch free in the page fault handler causes longer
> > tail latencies and penalizes the current process.
> >
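For context, the synchronous path being described looks roughly like the
sketch below. It is simplified from free_swap_slot() in mm/swap_slots.c
(the use_swap_slot_cache checks and the direct-free fallback are
omitted), so treat it as illustration rather than the exact source:

	void free_swap_slot(swp_entry_t entry)
	{
		struct swap_slots_cache *cache = raw_cpu_ptr(&swp_slots);

		spin_lock_irq(&cache->free_lock);
		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
			/*
			 * The expensive part: the 64-entry batch is
			 * returned to the global pool here, inline in
			 * whatever task happens to fault, even though the
			 * accumulated entries mostly belong to earlier,
			 * unrelated faults.
			 */
			swapcache_free_entries(cache->slots_ret, cache->n_ret);
			cache->n_ret = 0;
		}
		cache->slots_ret[cache->n_ret++] = entry;
		spin_unlock_irq(&cache->free_lock);
	}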
> > Add /sys/kernel/mm/swap/swap_slot_async_free to control the
> > async free behavior. When enabled, a work queue is used to
> > asynchronously free the swap slots once the swap slot cache is full.
> >
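One way the scheduling could be structured when the knob is on (a sketch
only; the corresponding mm/swap_slots.c hunk is not quoted in this
reply, so the exact control flow here is an assumption):

	spin_lock_irq(&cache->free_lock);
	if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
		if (READ_ONCE(slot_cache_async_free)) {
			/*
			 * Hand the full batch to a kworker and free the
			 * current entry directly, so the cache cannot
			 * overflow while the work item is still pending.
			 */
			schedule_work(&cache->async_free);
			spin_unlock_irq(&cache->free_lock);
			swapcache_free_entries(&entry, 1);
			return;
		}
		/* Knob off: keep today's inline batch free. */
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
		cache->n_ret = 0;
	}
	cache->slots_ret[cache->n_ret++] = entry;
	spin_unlock_irq(&cache->free_lock);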
> > Testing:
> >
> > Chun-Tse ran some benchmarks on a Chromebook, showing that
> > zram_wait_metrics improves by about 15% at 80% and 95% confidence.
> >
> > I recently ran some experiments on about 1000 Google production
> > machines. They show that swapin latency in the long tail
> > 100us - 500us bucket drops dramatically.
> >
> > platform   (100-500us)        (0-100us)
> > A          1.12% -> 0.36%     98.47% -> 99.22%
> > B          0.65% -> 0.15%     98.96% -> 99.46%
> > C          0.61% -> 0.23%     98.96% -> 99.38%
> >
> > Signed-off-by: Chris Li <chrisl@...nel.org>
> > ---
> > Changes in v3:
> > - Address feedback from Tim Chen; the direct free path will free all swap slots.
> > - Add /sys/kernel/mm/swap/swap_slot_async_free to enable async free. Default is off.
> > - Link to v2: https://lore.kernel.org/r/20240131-async-free-v2-1-525f03e07184@kernel.org
> >
> > Changes in v2:
> > - Add a description of the impact of changing the timing of the free, as suggested by Ying.
> > - Remove create_workqueue() and use schedule_work()
> > - Link to v1: https://lore.kernel.org/r/20231221-async-free-v1-1-94b277992cb0@kernel.org
> > ---
> > include/linux/swap_slots.h | 2 ++
> > mm/swap_slots.c | 20 ++++++++++++++++++++
> > mm/swap_state.c | 23 +++++++++++++++++++++++
> > 3 files changed, 45 insertions(+)
> >
> > diff --git a/include/linux/swap_slots.h b/include/linux/swap_slots.h
> > index 15adfb8c813a..bb9a401d7cae 100644
> > --- a/include/linux/swap_slots.h
> > +++ b/include/linux/swap_slots.h
> > @@ -19,6 +19,7 @@ struct swap_slots_cache {
> > spinlock_t free_lock; /* protects slots_ret, n_ret */
> > swp_entry_t *slots_ret;
> > int n_ret;
> > + struct work_struct async_free;
> > };
> >
> > void disable_swap_slots_cache_lock(void);
> > @@ -27,5 +28,6 @@ void enable_swap_slots_cache(void);
> > void free_swap_slot(swp_entry_t entry);
> >
> > extern bool swap_slot_cache_enabled;
> > +extern uint8_t slot_cache_async_free __read_mostly;
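(The mm/swap_slots.c and mm/swap_state.c hunks are trimmed from the
quote above. For readers following along, the worker side presumably
looks something like the sketch below; the function name
swap_slots_cache_async_free is hypothetical, not taken from the patch:)

	static void swap_slots_cache_async_free(struct work_struct *work)
	{
		struct swap_slots_cache *cache =
			container_of(work, struct swap_slots_cache, async_free);

		/* Same batch free as the sync path, but run from a kworker. */
		spin_lock_irq(&cache->free_lock);
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
		cache->n_ret = 0;
		spin_unlock_irq(&cache->free_lock);
	}

The work_struct would then be initialized wherever the percpu cache is
set up, e.g. INIT_WORK(&cache->async_free, swap_slots_cache_async_free)
in alloc_swap_slot_cache().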
>
> Why wouldn't you enable the async_free always?
> Otherwise the patch looks fine to me.
Thanks for the feedback.

Just in case someone doesn't care about this optimization and wants to
opt out of this behavior?

Anyway, I am happy to update the patch without the sysfs control file as well.
Chris