[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20201001123032.GC22560@dhcp22.suse.cz>
Date: Thu, 1 Oct 2020 14:30:32 +0200
From: Michal Hocko <mhocko@...e.com>
To: Sebastiaan Meijer <meijersebastiaan@...il.com>
Cc: akpm@...ux-foundation.org, buddy.lumpkin@...cle.com,
hannes@...xchg.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, mgorman@...e.de, riel@...riel.com,
willy@...radead.org
Subject: Re: [RFC PATCH 1/1] vmscan: Support multiple kswapd threads per node
On Wed 30-09-20 21:27:12, Sebastiaan Meijer wrote:
> > yes it shows the bottleneck but it is quite artificial. Read data is
> > usually processed and/or written back and that changes the picture a
> > lot.
> Apologies for reviving an ancient thread (and apologies in advance for my lack
> of knowledge on how mailing lists work), but I'd like to offer up another
> reason why merging this might be a good idea.
>
> From what I understand, zswap runs its compression on the same kswapd thread,
> limiting it to a single thread for compression. Given enough processing power,
> zswap can get great throughput using heavier compression algorithms like zstd,
> but this is currently greatly limited by the lack of threading.
Isn't this a problem of the zswap implementation rather than general
kswapd reclaim? Why zswap doesn't do the same as normal swap out in a
context outside of the reclaim?
My recollection of the particular patch is dimm but I do remember it
tried to add more kswapd threads which would just paper over the problem
you are seein rather than solve it.
--
Michal Hocko
SUSE Labs
Powered by blists - more mailing lists