Message-ID: <4f1d0c0369e3b08cb0c8d2271396277df6e1d37e.camel@linux.intel.com>
Date: Fri, 09 Feb 2024 09:52:44 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Chris Li <chrisl@...nel.org>
Cc: "Huang, Ying" <ying.huang@...el.com>, Andrew Morton
<akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Wei Xu <weixugc@...gle.com>, Yu
Zhao <yuzhao@...gle.com>, Greg Thelen
<gthelen@...gle.com>, Chun-Tse Shao <ctshao@...gle.com>, Suren
Baghdasaryan <surenb@...gle.com>, Yosry
Ahmed <yosryahmed@...gle.com>, Brian Geffon
<bgeffon@...gle.com>, Minchan Kim <minchan@...nel.org>, Michal Hocko
<mhocko@...e.com>, Mel Gorman <mgorman@...hsingularity.net>, Nhat Pham
<nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>, Kairui Song
<kasong@...cent.com>, Zhongkun He <hezhongkun.hzk@...edance.com>, Kemeng
Shi <shikemeng@...weicloud.com>, Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH v2] mm: swap: async free swap slot cache entries
On Tue, 2024-02-06 at 17:51 -0800, Chris Li wrote:
> On Tue, Feb 6, 2024 at 5:08 PM Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> >
> > On Mon, 2024-02-05 at 11:10 -0800, Chris Li wrote:
> > >
> > >
> > > In our system, a really heavy swap load is rare, and it means
> > > something is already wrong. At that point the app's SLO is likely
> > > at risk, regardless of long-tail swap latency. It is already too
> > > late to address it at the swap fault end. We need to address the
> > > source of the problem, which is swapping out too much.
> > >
> > >
> >
> > Could some usage scenarios put more pressure on swap than yours?
> > Say, a system with limited RAM that relies on zswap?
> >
> Of course. In that case, what I proposed already does what I think is
> best in this situation: after grabbing the cache lock and finding that
> the async free hasn't started freeing yet, just free all 64 entries in
> the swap slot cache directly. That is similar to the old code's
> behavior.
> Yes, it will still have the long-tail latency from batch freeing 64
> entries.
> My point is not that I don't care about heavy swap behavior.
> My point is that the app will suffer from the swap storm anyway; it is
> unavoidable. That will be the dominant factor, overshadowing the
> effect of the batch-free optimization.
The original optimization that introduced swap_slots targets exactly
such heavy swap use cases, where a fast swap backend allows higher
sustainable swap throughput. We should not ignore them. And I am afraid
your current patch, as is, will hurt that performance. If you change
the direct free path to free all entries, that could maintain the
throughput and I'll be okay with that.
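
Roughly along the lines of the sketch below. This is only my reading of
what the direct path should do, modeled on the current mm/swap_slots.c
(struct swap_slots_cache, free_lock, slots_ret, n_ret,
SWAP_SLOTS_CACHE_SIZE and swapcache_free_entries() all come from that
file); how your v2 patch actually tracks the pending async work I am
guessing at, so take the full-cache check as illustrative only:

static void free_swap_slot_direct(struct swap_slots_cache *cache,
				  swp_entry_t entry)
{
	spin_lock_irq(&cache->free_lock);
	if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
		/*
		 * The cache is full and the async worker has not
		 * drained it yet: free all 64 entries synchronously,
		 * as the old code did.  This keeps the sustained free
		 * throughput at the cost of the long-tail batch-free
		 * latency discussed above.
		 */
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
		cache->n_ret = 0;
	}
	cache->slots_ret[cache->n_ret++] = entry;
	spin_unlock_irq(&cache->free_lock);
}
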
>
> Or am I missing your point: do you want to propose double-buffering
> the swap slot cache so it can perform better under swap storm
> situations?
>
I am not actually proposing double buffering, as that approach has its
own downsides.
Tim