Date:   Thu, 4 May 2023 00:22:53 -0700
From:   Chris Li <chrisl@...nel.org>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Domenico Cerasuolo <cerasuolodomenico@...il.com>,
        sjenning@...hat.com, ddstreet@...e.org, vitaly.wool@...sulko.com,
        minchan@...nel.org, ngupta@...are.org, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: fix zswap writeback race condition

On Wed, May 03, 2023 at 10:29:04PM -0400, Johannes Weiner wrote:
> > >  
> > >  	case ZSWAP_SWAPCACHE_NEW: /* page is locked */
> 
>                                   ^^^^^^^^^^^^^^^^^^^^
> 
> > > +		/*
> > > +		 * Having a local reference to the zswap entry doesn't exclude
> > > +		 * swapping from invalidating and recycling the swap slot. Once
> > > +		 * the swapcache is secured against concurrent swapping to and
> > > +		 * from the slot, recheck that the entry is still current before
> > > +		 * writing.
> > > +		 */
> > > +		spin_lock(&tree->lock);
> > > +		if (zswap_rb_search(&tree->rbroot, entry->offset) != entry) {
> > > +			spin_unlock(&tree->lock);
> > > +			delete_from_swap_cache(page_folio(page));
> > > +			ret = -ENOMEM;
> > > +			goto fail;
> > > +		}
> > > +		spin_unlock(&tree->lock);
> > > +
> > 
> > The race condition is still there; this just makes it much harder to
> > hit. What happens if, after you perform the rb tree search and
> > release the tree lock, the entry gets invalidated and recycled right
> > here, before the decompress step?
> 
> Recycling can only happen up until we see ZSWAP_SWAPCACHE_NEW.
> 
> Once we see it, we're holding the page lock* on a new swapcache page
> for a valid, in-use** swp_entry_t.
> 
> The lock of the swapcache page prevents swapin, which would be
> required for the count to drop and the entry to be recycled.

Thanks for the explanation. I missed that the locked page prevents the
swapin part.

> __read_swap_cache_async() checked that the entry is valid, so the slot
> cannot be allocated to someone else.
> 
> Now we just have to check if that entry is the right one, iow the slot
> wasn't recycled.
> 
> If the slot wasn't recycled, we know we have the right data and we can
> start the IO and unlock the page. (After that swapins can continue and
> the data can change, but regular writeback vs redirtying rules apply.)
> 
> If the slot was indeed recycled before we get ZSWAP_SWAPCACHE_NEW, we
> see the mismatch, delete the page from the swapcache and unlock it. A
> racing do_swap_page() may have found and reffed the page in swapcache,
> and acquire the page lock after us; but it'll see it's no longer in
> the swapcache, drop the reference (free the page) and retry the fault.

LGTM then. Please feel free to add:

Reviewed-by: Chris Li (Google) <chrisl@...nel.org>

Chris
