Message-ID: <CAMgjq7A+B4s52XYOFSan0fzUV-7o-GeAD3pfKkQtHo6uPHbrxQ@mail.gmail.com>
Date: Fri, 9 Feb 2024 13:30:25 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: "Huang, Ying" <ying.huang@...el.com>, Minchan Kim <minchan@...nel.org>, 
	Barry Song <21cnbao@...il.com>, linux-mm@...ck.org, 
	Andrew Morton <akpm@...ux-foundation.org>, Yu Zhao <yuzhao@...gle.com>, 
	Barry Song <v-songbaohua@...o.com>, SeongJae Park <sj@...nel.org>, Hugh Dickins <hughd@...gle.com>, 
	Johannes Weiner <hannes@...xchg.org>, Matthew Wilcox <willy@...radead.org>, Michal Hocko <mhocko@...e.com>, 
	Yosry Ahmed <yosryahmed@...gle.com>, David Hildenbrand <david@...hat.com>, stable@...r.kernel.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm/swap: fix race when skipping swapcache

On Fri, Feb 9, 2024 at 3:42 AM Chris Li <chrisl@...nel.org> wrote:
>
> On Thu, Feb 8, 2024 at 11:01 AM Kairui Song <ryncsn@...il.com> wrote:
> >
> > On Thu, Feb 8, 2024 at 2:36 PM Huang, Ying <ying.huang@...el.com> wrote:
> > >
> > > Kairui Song <ryncsn@...il.com> writes:
> > >
> > > > On Thu, Feb 8, 2024 at 2:31 AM Minchan Kim <minchan@...nel.org> wrote:
> > > >>
> > > >> On Wed, Feb 07, 2024 at 12:06:15PM +0800, Kairui Song wrote:
> > >
> > > [snip]
> > >
> > > >> >
> > > >> > So I think the thing is, it's getting complex because this patch
> > > >> > wanted to make it simple and just reuse the swap cache flags.
> > > >>
> > > >> I agree that a simple fix would be the important thing at this point.
> > > >>
> > > >> Considering your description, here's my understanding of the other
> > > >> idea: other methods, such as increasing the swap count, haven't proven
> > > >> effective in your tests, and that approach risks forcing racers to rely
> > > >> on the swap cache again, with a potential performance loss in the race
> > > >> scenario.
> > > >>
> > > >> While I understand that simplicity is important, and the performance
> > > >> loss in this case may be infrequent, I believe the swap_count approach
> > > >> could be a suitable solution. What do you think?
> > > >
> > > > Hi Minchan
> > > >
> > > > Yes, my main concern was about simplicity and performance.
> > > >
> > > > Increasing swap_count here will also race with another process
> > > > releasing the swap_count to 0 (the swapcache was able to synchronize
> > > > callers in other call paths, but we skipped the swapcache here).
> > >
> > > What is the consequence of the race condition?
> >
> > Hi Ying,
> >
> > It will increase the swap count of an already freed entry. This races
> > with the swap free/alloc logic that checks whether count ==
> > SWAP_HAS_CACHE or sets the count to zero, or with a repeated free of an
> > entry, all of which result in random corruption of the swap map. This
> > happens a lot during stress testing.
>
> In theory, the original code before your patch can get into a
> situation similar to the one you are trying to avoid.
> CPU1 enters do_swap_page() with an entry swap count == 1 and
> continues handling the swap fault without the swap cache.  Then some
> operation bumps up the swap entry count, and CPU2 enters
> do_swap_page() racing with CPU1 with swap count == 2. CPU2 will need
> to go through the swap cache case.  We still need to handle this
> situation correctly.

Hi Chris,

This won't happen; nothing can bump the swap entry count unless the
entry is swapped in and freed first. There are only two places that
call swap_duplicate(): unmap and fork. Unmap needs the page to be
mapped and the entry allocated, so it can't happen unless we hit the
entry reuse issue. Fork needs the VMA lock, which we hold during the
page fault.
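
For reference, the check that gates the swapcache bypass in
do_swap_page() is roughly the following (paraphrased from mm/memory.c,
not the exact upstream code):

	/* Skip the swap cache only for fast devices and exclusive entries. */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
	    __swap_count(entry) == 1) {
		/* read the folio directly, bypassing the swap cache */
		...
	}

So once a fault has passed this check, only a later swap_duplicate()
could raise the count, and as explained above neither of its callers
can run here.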

> So the complexity is already there.
>
> If we can make sure the above case works correctly, then one way to
> avoid the problem is just to send CPU2 through the swap cache (without
> the swap cache bypassing).

Yes, more auditing of the existing code, and more explanation, is
needed to ensure things won't go wrong; that's why I tried to keep
things from getting too complex...

> > > > So the right steps are: 1. lock the cluster/swap lock; 2. check
> > > > whether we still have swap_count == 1, and bail out if not; 3. set it
> > > > to 2. __swap_duplicate() can be modified to support this; it's similar
> > > > to the existing logic for SWAP_HAS_CACHE.
> > > >
> > > > And the swap freeing path will have to do more: swapcache cleanup
> > > > needs to be handled even in the bypassing path, since the racer may
> > > > add the entry to the swapcache.
> > > >
> > > > Reusing SWAP_HAS_CACHE seems to make this much simpler and avoids a
> > > > lot of that overhead, so that's the way I used in this patch; the only
> > > > issue now is potentially repeated page faults.
> > > >
> > > > I'm currently trying to add a SWAP_MAP_LOCK (or SWAP_MAP_SYNC, I'm bad
> > > > at naming it) special value, so any racer can just spin on it and
> > > > avoid all these problems. What do you think about this?
> > >
> > > Let's try some simpler method first.
> >
> > Another, simpler idea: add a schedule() or
> > schedule_timeout_uninterruptible(1) in the swapcache_prepare() failure
> > path before the goto out (just like __read_swap_cache_async()). I think
> > this should ensure that in almost all cases the PTE is ready after it
> > returns, and it also yields the CPU.
>
> I mentioned that in my earlier email, and Ying pointed it out as well.
> Waiting in a loop inside do_swap_page() is bad because it holds other
> locks.

It's not looping here though, just a tiny delay; since
SWP_SYNCHRONOUS_IO devices are supposed to be very fast, a tiny delay
should be enough.
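
Something like this, as a rough sketch of the idea (not the actual
patch, error handling omitted):

	/*
	 * In the SWP_SYNCHRONOUS_IO path of do_swap_page(): if another
	 * fault already holds SWAP_HAS_CACHE on this entry, yield
	 * briefly before retrying the fault, similar to what
	 * __read_swap_cache_async() does on swapcache_prepare() failure.
	 */
	if (swapcache_prepare(entry)) {
		/* The racer is about to install the PTE; give it a chance. */
		schedule_timeout_uninterruptible(1);
		goto out;
	}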

> Sorry I started this idea but it seems no good.

Not at all, more reviewing helps to find a better solution :)

> If we can have CPU2 make forward progress without retrying the page
> fault, that would be the best, if possible.

Yes, making CPU2 fall back to the cached swapin path is doable after
careful auditing. But CPU2 is usually slower than CPU1 due to cache and
timing, so whatever it does will most likely be in vain and need to be
reverted, causing more work for both the code logic and the CPU. The
case where the race goes that way (CPU2 being faster) is very rare.

I'm not against the idea of bumping the count; it would be better if it
can be done without introducing too much noise. I will come back after
more tests and work on this.
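
As a rough illustration, the bump-count variant following the steps
quoted above could look something like the sketch below (untested,
names follow swapfile.c internals, and the matching "unpin" on the
freeing side is omitted):

/* Pin an exclusive entry by raising its count from 1 to 2. */
static bool swap_pin_exclusive_entry(struct swap_info_struct *si,
				     swp_entry_t entry)
{
	unsigned long offset = swp_offset(entry);
	struct swap_cluster_info *ci;
	bool pinned = false;

	ci = lock_cluster_or_swap_info(si, offset);
	/* Bail out unless we are still the sole owner with no cache bit set. */
	if (si->swap_map[offset] == 1) {
		si->swap_map[offset] = 2;
		pinned = true;
	}
	unlock_cluster_or_swap_info(si, ci);

	return pinned;
}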
