Message-ID: <CAF8kJuOZO5C6J44U0CkU-Y9nGYnYiX4EQddjJGo+fQxh3BDVQg@mail.gmail.com>
Date: Wed, 31 Jan 2024 16:57:44 -0800
From: Chris Li <chrisl@...nel.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Wei Xu <weixugc@...gle.com>,
Yu Zhao <yuzhao@...gle.com>,
Greg Thelen <gthelen@...gle.com>, Chun-Tse Shao <ctshao@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Brian Geffon <bgeffon@...gle.com>, Minchan Kim <minchan@...nel.org>, Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...hsingularity.net>, Huang Ying <ying.huang@...el.com>,
Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>, Kairui Song <kasong@...cent.com>,
Zhongkun He <hezhongkun.hzk@...edance.com>, Kemeng Shi <shikemeng@...weicloud.com>,
Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH] mm: swap: async free swap slot cache entries
Hi Yosry,
On Thu, Dec 28, 2023 at 7:34 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Thu, Dec 21, 2023 at 10:25 PM Chris Li <chrisl@...nel.org> wrote:
> >
> > We discovered that the slowest 1% of swap page faults take 100us or
> > more, while 50% of swap faults complete in under 20us.
> >
> > Further investigation shows that in the long tail case a large portion
> > of the time is spent in the free_swap_slots() function.
> >
> > The percpu cache of swap slots is freed in a batch of 64 entries
> > inside free_swap_slots(). These cache entries are accumulated
> > from previous page faults, which may not be related to the current
> > process.
> >
> > Doing the batch free in the page fault handler causes longer
> > tail latencies and penalizes the current process.
> >
> > Move free_swap_slots() outside of the swapin page fault handler into an
> > async work queue to avoid such long tail latencies.
> >
> > Testing:
> >
> > Chun-Tse ran some benchmarks on a Chromebook, showing that
> > zram_wait_metrics improves by about 15% with 80% and 95% confidence.
> >
> > I recently ran some experiments on about 1000 Google production
> > machines. They show that swapin latency in the long-tail
> > 100us - 500us bucket drops dramatically.
> >
> > platform (100-500us) (0-100us)
> > A 1.12% -> 0.36% 98.47% -> 99.22%
> > B 0.65% -> 0.15% 98.96% -> 99.46%
> > C 0.61% -> 0.23% 98.96% -> 99.38%
>
> I recall you mentioning that mem_cgroup_uncharge_swap() is the most
> expensive part of the batched freeing. If that's the case, I am
> curious what happens if we move that call outside of the batching
> (i.e. once the swap entry is no longer used and will be returned to
> the cache). This should amortize the cost of memcg uncharging and
> reduce the tail fault latency without extra work. Arguably, it could
> increase the average fault latency, but not necessarily in a
> significant way.
>
> Ying pointed out something similar if I understand correctly (and
> other operations that can be moved).
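If I follow the suggestion, it would look roughly like this (illustrative
sketch only, with the direct-free fallback and the cache-deactivation
checks dropped; the batched path would then also have to skip the
uncharge, which is not shown):

	/*
	 * Sketch: uncharge each entry as it is handed back to the percpu
	 * cache, so the later batched swapcache_free_entries() has less
	 * work to do per entry.
	 */
	int free_swap_slot(swp_entry_t entry)
	{
		struct swap_slots_cache *cache;

		/* pay the memcg cost up front, one entry at a time */
		mem_cgroup_uncharge_swap(entry, 1);

		cache = raw_cpu_ptr(&swp_slots);
		spin_lock_irq(&cache->free_lock);
		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
			swapcache_free_entries(cache->slots_ret, cache->n_ret);
			cache->n_ret = 0;
		}
		cache->slots_ret[cache->n_ret++] = entry;
		spin_unlock_irq(&cache->free_lock);

		return 0;
	}
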
If the goal is to let the swap fault return as soon as possible, then
the current approach is better.
mem_cgroup_uncharge_swap() is only part of the cost, not close to all of it.
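For comparison, the deferral in the patch is roughly the following shape
(simplified sketch; the work member and worker function names here are
made up rather than copied from the patch):

	/* assumed extra member in struct swap_slots_cache */
	struct work_struct	free_work;

	static void swap_slots_free_work_fn(struct work_struct *work)
	{
		struct swap_slots_cache *cache =
			container_of(work, struct swap_slots_cache, free_work);

		spin_lock_irq(&cache->free_lock);
		if (cache->n_ret)
			swapcache_free_entries(cache->slots_ret, cache->n_ret);
		cache->n_ret = 0;
		spin_unlock_irq(&cache->free_lock);
	}

	/* set up once, when the percpu cache is initialized */
	INIT_WORK(&cache->free_work, swap_slots_free_work_fn);

	/*
	 * In free_swap_slot(), once the batch is full, queue the work
	 * and return instead of freeing 64 entries inline, so the
	 * faulting task does not pay for entries accumulated by other
	 * processes.
	 */
	schedule_work(&cache->free_work);

The swap fault then only pays for queueing the work; the memcg uncharge
and the rest of the batched free happen later in the worker.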
>
> Also, if we choose to follow this route, I think we should flush
> the async worker in drain_slots_cache_cpu(), right?
Not sure I understand this part. drain_slots_cache_cpu() will already
free the entries. The existing cache->free_lock should protect the
entries against concurrent access from the async worker. What do you
mean by flushing?
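To spell out the interaction I mean (simplified; the real drain path
also handles the allocation side and frees the arrays):

	/* async worker */
	spin_lock_irq(&cache->free_lock);
	if (cache->n_ret)
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
	cache->n_ret = 0;
	spin_unlock_irq(&cache->free_lock);

	/* drain_slots_cache_cpu(), SLOTS_CACHE_RET part */
	spin_lock_irq(&cache->free_lock);
	if (cache->n_ret)
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
	cache->n_ret = 0;
	spin_unlock_irq(&cache->free_lock);

Whichever side takes cache->free_lock first frees the batch; the other
side then sees n_ret == 0 under the same lock and has nothing left to
do.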
Chris