Message-ID: <CALvZod5rnV5ZjKYxFwPDX8NcRQKJfwN-iWyVD-Mm4+fKten1+A@mail.gmail.com>
Date: Fri, 31 Mar 2017 10:00:30 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH] mm/zswap: fix potential deadlock in zswap_frontswap_store()
On Fri, Mar 31, 2017 at 8:30 AM, Andrey Ryabinin
<aryabinin@...tuozzo.com> wrote:
> zswap_frontswap_store() is called during memory reclaim from
> __frontswap_store() from swap_writepage() from shrink_page_list().
> This may happen in NOFS context, thus zswap shouldn't use __GFP_FS,
> otherwise we may re-enter fs code and deadlock.
> zswap_frontswap_store() also shouldn't use __GFP_IO to avoid recursion
> into itself.
>
Is it possible to enter fs (or IO) code from zswap_frontswap_store()
other than through recursive memory reclaim? Recursive memory reclaim,
however, is already guarded by the PF_MEMALLOC task flag. The change
seems fine, but IMHO the reasoning needs an update. Adding Michal for
an expert opinion.
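
For reference, a minimal sketch (not the upstream code; the helper name
is made up) of the two guards in play, assuming the usual structure of
the page allocator slow path: allocations whose mask lacks
__GFP_DIRECT_RECLAIM can at most wake kswapd, and tasks already in
reclaim run with PF_MEMALLOC set, which is what prevents recursive
direct reclaim:

#include <linux/gfp.h>
#include <linux/sched.h>

/* Hypothetical helper, for illustration only. */
static bool can_enter_direct_reclaim(gfp_t gfp_mask)
{
	/* No __GFP_DIRECT_RECLAIM in the mask: at most kswapd is woken. */
	if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
		return false;

	/* Avoid recursing into direct reclaim from within reclaim itself. */
	if (current->flags & PF_MEMALLOC)
		return false;

	return true;
}
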
> zswap_frontswap_store() calls zpool_malloc() with __GFP_NORETRY |
> __GFP_NOWARN | __GFP_KSWAPD_RECLAIM, so let's use the same flags for
> zswap_entry_cache_alloc() as well, instead of GFP_KERNEL.
>
> Signed-off-by: Andrey Ryabinin <aryabinin@...tuozzo.com>
> ---
> mm/zswap.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index eedc278..12ad7e9 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -966,6 +966,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> struct zswap_tree *tree = zswap_trees[type];
> struct zswap_entry *entry, *dupentry;
> struct crypto_comp *tfm;
> + gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
> int ret;
> unsigned int dlen = PAGE_SIZE, len;
> unsigned long handle;
> @@ -989,7 +990,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> }
>
> /* allocate entry */
> - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> + entry = zswap_entry_cache_alloc(gfp);
> if (!entry) {
> zswap_reject_kmemcache_fail++;
> ret = -ENOMEM;
> @@ -1017,9 +1018,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>
> /* store */
> len = dlen + sizeof(struct zswap_header);
> - ret = zpool_malloc(entry->pool->zpool, len,
> - __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM,
> - &handle);
> + ret = zpool_malloc(entry->pool->zpool, len, gfp, &handle);
> if (ret == -ENOSPC) {
> zswap_reject_compress_poor++;
> goto put_dstmem;
> --
> 2.10.2
>
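
As a side note on the flag choice itself, here is a sketch of how the
mask used above compares with GFP_KERNEL (the variable name below is
hypothetical; the patch simply uses a local "gfp"):

#include <linux/gfp.h>

/*
 * GFP_KERNEL is __GFP_RECLAIM | __GFP_IO | __GFP_FS, where __GFP_RECLAIM
 * is __GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM.  The mask below keeps
 * only the kswapd wakeup and drops __GFP_DIRECT_RECLAIM, __GFP_IO and
 * __GFP_FS, so the store path can neither recurse into direct reclaim
 * nor re-enter fs/io code; __GFP_NORETRY | __GFP_NOWARN make a failed
 * allocation return quickly and quietly.
 */
static const gfp_t zswap_store_gfp =
	__GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;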