Message-Id: <20231214142320.f5cf319e619dbb2127c423e9@linux-foundation.org>
Date: Thu, 14 Dec 2023 14:23:20 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Chengming Zhou <zhouchengming@...edance.com>, Nhat Pham
 <nphamcs@...il.com>, Chris Li <chriscli@...gle.com>, Johannes Weiner
 <hannes@...xchg.org>, Seth Jennings <sjenning@...hat.com>, Dan Streetman
 <ddstreet@...e.org>, Vitaly Wool <vitaly.wool@...sulko.com>,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 5/5] mm/zswap: cleanup zswap_reclaim_entry()

On Wed, 13 Dec 2023 17:02:25 -0800 Yosry Ahmed <yosryahmed@...gle.com> wrote:

> On Tue, Dec 12, 2023 at 8:18 PM Chengming Zhou
> <zhouchengming@...edance.com> wrote:
> >
> > Also, after the common decompress part goes to __zswap_load(), we can
> > clean up zswap_reclaim_entry() a little.
> 
> I think you mean zswap_writeback_entry(), same for the commit title.

I updated my copy of the changelog, thanks.

> > -       /*
> > -        * If we get here because the page is already in swapcache, a
> > -        * load may be happening concurrently. It is safe and okay to
> > -        * not free the entry. It is also okay to return !0.
> > -        */
> 
> This comment should be moved above the failure check of
> __read_swap_cache_async(), not completely removed.

This?

--- a/mm/zswap.c~mm-zswap-cleanup-zswap_reclaim_entry-fix
+++ a/mm/zswap.c
@@ -1457,8 +1457,14 @@ static int zswap_writeback_entry(struct
 	mpol = get_task_policy(current);
 	page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
 				NO_INTERLEAVE_INDEX, &page_was_allocated, true);
-	if (!page)
+	if (!page) {
+		/*
+		 * If we get here because the page is already in swapcache, a
+		 * load may be happening concurrently. It is safe and okay to
+		 * not free the entry. It is also okay to return !0.
+		 */
 		return -ENOMEM;
+	}
 
 	/* Found an existing page, we raced with load/swapin */
 	if (!page_was_allocated) {

