Message-ID: <ZYYY1VBKdLHH-Kl3@google.com>
Date: Fri, 22 Dec 2023 15:16:37 -0800
From: Chris Li <chrisl@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Wei Xu <weixugc@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
Greg Thelen <gthelen@...gle.com>, Chun-Tse Shao <ctshao@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>,
Brian Geffon <bgeffon@...gle.com>, Minchan Kim <minchan@...nel.org>,
Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Huang Ying <ying.huang@...el.com>, Nhat Pham <nphamcs@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Kairui Song <kasong@...cent.com>,
Zhongkun He <hezhongkun.hzk@...edance.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Barry Song <v-songbaohua@...o.com>, Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH] mm: swap: async free swap slot cache entries
On Fri, Dec 22, 2023 at 11:52:08AM -0800, Andrew Morton wrote:
> On Thu, 21 Dec 2023 22:25:39 -0800 Chris Li <chrisl@...nel.org> wrote:
>
> > We discovered that 1% of swap page faults take 100us+ while 50% of
> > swap faults complete in under 20us.
> >
> > Further investigation shows that a large portion of the time is
> > spent in the free_swap_slots() function in the long-tail case.
> >
> > The percpu cache of swap slots is freed in a batch of 64 entries
> > inside free_swap_slots(). These cache entries are accumulated
> > from previous page faults, which may not be related to the current
> > process.
> >
> > Doing the batch free in the page fault handler causes longer
> > tail latencies and penalizes the current process.
> >
> > Move free_swap_slots() outside of the swapin page fault handler into an
> > async work queue to avoid such long tail latencies.
>
> This will require a larger amount of total work than the current
Yes, there is a small amount of extra overhead to schedule the job onto
the work queue.
> scheme. So we're trading that off against better latency.
>
> Why is this a good tradeoff?
That is a very good question. Both Hugh and Wei had asked me similar questions
before. +Hugh.
The TL;DR is that it makes swap more parallelizable.
Modern computers typically have more than one CPU, and CPU utilization
rarely reaches 100%. We are not really trading latency for making someone
else run slower. Most of the time the real impact is that the current
swapin page fault can return sooner, so more work can be submitted to the
kernel earlier, while an otherwise idle CPU picks up the non-latency-critical
work of freeing the swap slot cache entries. The net effect is that we speed
things up and increase overall system utilization rather than slowing things
down.
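
To make the deferral pattern concrete, here is a minimal sketch, assuming a
hypothetical slot_free_work structure with defer_slot_free() and
do_deferred_free() helpers. These names, and the use of free_swap_and_cache()
as a stand-in for the real batched free, are illustrative only and are not
the actual mm/swap_slots.c code in the patch:

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/swap.h>

#define SLOT_BATCH 64   /* same batch size discussed above */

struct slot_free_work {
        struct work_struct work;
        spinlock_t lock;
        swp_entry_t entries[SLOT_BATCH];
        int nr;
};

static void do_deferred_free(struct work_struct *work)
{
        struct slot_free_work *sfw =
                container_of(work, struct slot_free_work, work);
        swp_entry_t batch[SLOT_BATCH];
        int nr;

        /* Snapshot the pending entries under the lock, free them outside it. */
        spin_lock(&sfw->lock);
        nr = sfw->nr;
        memcpy(batch, sfw->entries, nr * sizeof(batch[0]));
        sfw->nr = 0;
        spin_unlock(&sfw->lock);

        while (nr--)
                free_swap_and_cache(batch[nr]);  /* stand-in for the batched free */
}

static void slot_free_work_init(struct slot_free_work *sfw)
{
        spin_lock_init(&sfw->lock);
        sfw->nr = 0;
        INIT_WORK(&sfw->work, do_deferred_free);
}

/* Fault path: O(1) bookkeeping only; the batch free happens in the worker. */
static void defer_slot_free(struct slot_free_work *sfw, swp_entry_t entry)
{
        bool full, overflow = false;

        spin_lock(&sfw->lock);
        if (sfw->nr < SLOT_BATCH)
                sfw->entries[sfw->nr++] = entry;
        else
                overflow = true;        /* worker has not drained yet */
        full = sfw->nr >= SLOT_BATCH;
        spin_unlock(&sfw->lock);

        if (overflow)
                free_swap_and_cache(entry);     /* fall back to a synchronous free */
        if (full)
                schedule_work(&sfw->work);
}

The fault path only appends an entry and, at most, kicks the work item; the
O(batch) freeing runs later in the workqueue, ideally on a CPU that would
otherwise sit idle.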
The test results from Chromebooks and Google production servers should show
that this is beneficial to both laptop and server workloads, making them more
responsive under swap-heavy load.
Chris