Message-ID: <CAF8kJuOXj+Q5O=eLaRJOZrHDs7wygdTEfY3MfqGWQCmjEWQ_XQ@mail.gmail.com>
Date: Tue, 27 Feb 2024 15:01:32 -0800
From: Chris Li <chrisl@...nel.org>
To: Kairui Song <ryncsn@...il.com>
Cc: Barry Song <21cnbao@...il.com>, Andrew Morton <akpm@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>, "Huang, Ying" <ying.huang@...el.com>,
Minchan Kim <minchan@...nel.org>, Barry Song <v-songbaohua@...o.com>, Yu Zhao <yuzhao@...gle.com>,
SeongJae Park <sj@...nel.org>, David Hildenbrand <david@...hat.com>, Hugh Dickins <hughd@...gle.com>,
Johannes Weiner <hannes@...xchg.org>, Matthew Wilcox <willy@...radead.org>, Michal Hocko <mhocko@...e.com>,
Yosry Ahmed <yosryahmed@...gle.com>, Aaron Lu <aaron.lu@...el.com>, stable@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4] mm/swap: fix race when skipping swapcache
On Tue, Feb 27, 2024 at 10:14 AM Kairui Song <ryncsn@...il.com> wrote:
>
> On Wed, Feb 21, 2024 at 12:32 AM Chris Li <chrisl@...nel.org> wrote:
> >
> > On Mon, Feb 19, 2024 at 8:56 PM Kairui Song <ryncsn@...il.com> wrote:
> >
> > >
> > > Hi Barry,
> > >
> > > > it might not be a problem for throughput, but for real-time and tail latency
> > > > this hurts. For example, it might increase dropped frames in the UI, which
> > > > is an important metric for evaluating performance :-)
> > > >
> > >
> > > That's a real issue. As Chris mentioned before, I think we need to
> > > come up with some clever data structure to solve this more naturally
> > > in the future; a similar issue exists for cached swapin as well and
> > > has been there for a while. On the other hand, I think applications
> > > that are extremely latency sensitive should maybe try to avoid
> > > swapping on fault? A swapin can cause other issues like reclaim,
> > > throttling, or contention with many other things, and those seem more
> > > likely than this race.
> >
> >
> > Yes, I do think the best long term solution is to have some clever
> > data structure to solve the synchronization issue and allow racing
> > threads to make forward progress at the same time.
> >
> > I have also explored some (failed) synchronization ideas, for example
> > keeping a run time swap entry refcount separate from the swap_map
> > count. BTW, the zswap entry->refcount behaves like that: it is
> > separate from the swap entry and manages the temporary run time usage
> > count held by the function. However, that idea has its own problem: it
> > needs an xarray to track the run time refcount of the swap entry (only
> > stored in the xarray when a CPU fails to get the SWAP_HAS_CACHE bit).
> > When we are done with the page fault, we still need to look up the
> > xarray to make sure there is no racing CPU and put the refcount held
> > in the xarray. That kind of defeats the purpose of avoiding the swap
> > cache in the first place: we still need to do the xarray lookup in the
> > normal path.
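
(To make the failed idea above a bit more concrete, it was roughly the
following. This is only an illustration from memory; "runtime_refs" and
the surrounding details are made-up names, it was never a real patch:)

        /* Hypothetical per-device xarray: swap offset -> run time refcount. */
        if (swapcache_prepare(entry)) {
                /*
                 * Lost the SWAP_HAS_CACHE race: park a run time refcount
                 * in the xarray so the winning CPU knows we exist.
                 */
                xa_store(&si->runtime_refs, swp_offset(entry),
                         xa_mk_value(1), GFP_KERNEL);
        }

        /* ... the winning CPU handles the fault without the swap cache ... */

        /*
         * Before the winner finishes it still has to check whether a
         * racing CPU parked a refcount -- the very xarray lookup that
         * skipping the swap cache was supposed to avoid.
         */
        if (xa_load(&si->runtime_refs, swp_offset(entry)))
                ; /* synchronize with the racing CPU */
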
> >
> > I came to realize that, while this current fix is not perfect (I
> > still wish we had a better solution that does not pause the racing
> > CPU), it is still better than not fixing the data corruption issue,
> > and the patch remains relatively simple. Yes, it has latency issues,
> > but that is still better than data corruption. It also doesn't stop
> > us from coming up with better solutions later on. Addressing the
> > synchronization in a way that does not block other CPUs will likely
> > require a much bigger change.
> >
> > Unless we have a better suggestion, this seems the best one among the
> > alternatives so far.
> >
>
> Hi,
>
> Thanks for the comments. I've been trying some ideas locally, and I think a simple and straightforward solution exists: we just don't skip the swap cache xarray.
Yes, I have been pondering that as well.
Notice that __read_swap_cache_async() has a similar
"schedule_timeout_uninterruptible(1)" when swapcache_prepare(entry)
fails to grab the SWAP_HAS_CACHE bit. So falling back to using the swap
cache does not automatically solve the latency issue; a similar delay
exists in the swap cache path as well.
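
For reference, the relevant part of __read_swap_cache_async() looks
roughly like this (a paraphrased sketch from memory, not the exact
upstream code; folio allocation and most error handling omitted):

        for (;;) {
                /* Try to claim SWAP_HAS_CACHE for this swap entry. */
                err = swapcache_prepare(entry);
                if (!err)
                        break;
                if (err != -EEXIST)
                        return NULL;    /* the entry went away, give up */
                /*
                 * Another task holds SWAP_HAS_CACHE but its folio is not
                 * (or no longer) in the swap cache; back off for a tick
                 * instead of spinning on the bit.
                 */
                schedule_timeout_uninterruptible(1);
        }
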
> The current reason we skip it is performance, but with some optimization the performance should be as good as the current skipping behavior. Notice that even in the swap cache bypass path we still need one lookup and one modification (deleting the shadow entry); that can't be skipped. So the usage of the swap cache can be better organized and optimized.
> Once all swapins make use of the swap cache, a swapin can insert the folio into the swap cache xarray first, then set the cache bit in the swap map. I'm thinking about reusing the folio lock, or having an intermediate value in the xarray, so raced swapins can wait properly. There are some tricky parts in syncing with the swap map though.
Inserting into the swap cache xarray first and setting the
SWAP_HAS_CACHE bit later will need more auditing of the races. I assume
you take the swap device/cluster lock before the folio is inserted into
the swap cache xarray?
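
Just to check that I read the proposal correctly, here is a purely
illustrative sketch of the ordering I think you are describing (the
helper names are from the current tree, the locking is hand-waved and
is exactly what I am asking about above):

        __folio_set_locked(folio);
        /* Publish the folio in the swap cache xarray first ... */
        err = add_to_swap_cache(folio, entry, gfp, &shadow);
        if (!err)
                /* ... and only then set SWAP_HAS_CACHE in the swap map. */
                err = swapcache_prepare(entry);
        /*
         * A racing swapin then finds the locked folio in the xarray and
         * sleeps on the folio lock instead of spinning on
         * swapcache_prepare().
         */
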
Chris
>
> I'm currently working on a series and will send it in a few weeks if it works.