Message-ID: <20160425221816.GA1254@google.com>
Date: Mon, 25 Apr 2016 15:18:16 -0700
From: Yu Zhao <yuzhao@...gle.com>
To: Dan Streetman <ddstreet@...e.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Seth Jennings <sjenning@...hat.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Minchan Kim <minchan@...nel.org>,
Nitin Gupta <ngupta@...are.org>, Linux-MM <linux-mm@...ck.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Dan Streetman <dan.streetman@...onical.com>
Subject: Re: [PATCH] mm/zpool: use workqueue for zpool_destroy
On Mon, Apr 25, 2016 at 05:20:10PM -0400, Dan Streetman wrote:
> Add a work_struct to struct zpool, and change zpool_destroy_pool to
> defer calling the pool implementation destroy.
>
> The zsmalloc pool destroy function, which is one of the zpool
> implementations, may sleep during destruction of the pool. However,
> zswap, which uses zpool, may call zpool_destroy_pool from atomic
> context, so we need to defer the call into the zpool implementation's
> destroy.
>
> This is essentially the same as Yu Zhao's proposed patch to zsmalloc,
> but moved to zpool.
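
If I read the patch right, zpool_destroy_pool() now only queues a work
item, and the driver's destroy runs later in process context, roughly
like this (my paraphrase of the changelog; the struct layout and helper
names here are guesses, not the actual diff):

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#include <linux/zpool.h>

struct zpool {
	struct zpool_driver *driver;
	void *pool;
	struct work_struct work;	/* defers driver->destroy() */
};

static void zpool_destroy_work(struct work_struct *work)
{
	struct zpool *zpool = container_of(work, struct zpool, work);

	/* Process context now, so the driver is free to sleep. */
	zpool->driver->destroy(zpool->pool);
	kfree(zpool);
}

void zpool_destroy_pool(struct zpool *zpool)
{
	/*
	 * Callers (e.g. zswap) may be in atomic context, so don't
	 * call the driver's destroy directly; punt to a workqueue.
	 */
	INIT_WORK(&zpool->work, zpool_destroy_work);
	schedule_work(&zpool->work);
}
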
Thanks, Dan. Sergey also mentioned another call path that triggers the
same problem (BUG: scheduling while atomic):
  rcu_process_callbacks()
    __zswap_pool_release()
      zswap_pool_destroy()
        zswap_cpu_comp_destroy()
          cpu_notifier_register_begin()
            mutex_lock(&cpu_add_remove_lock);
So I was thinking zswap_pool_destroy() might be deferred to a workqueue
in zswap.c. That way we'd fix both call paths; something like the sketch
below.
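
Roughly this (the work member and function names are made up to
illustrate the idea, not actual zswap code):

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/workqueue.h>

struct zswap_pool {
	struct zpool *zpool;
	struct rcu_head rcu_head;
	struct work_struct release_work;	/* hypothetical new member */
	/* ... other fields ... */
};

static void zswap_pool_destroy(struct zswap_pool *pool);

static void zswap_pool_release_workfn(struct work_struct *work)
{
	struct zswap_pool *pool =
		container_of(work, struct zswap_pool, release_work);

	/* Process context: zswap_cpu_comp_destroy() may take mutexes. */
	zswap_pool_destroy(pool);
}

/* RCU callback, runs from softirq context -- must not sleep. */
static void __zswap_pool_release(struct rcu_head *head)
{
	struct zswap_pool *pool =
		container_of(head, struct zswap_pool, rcu_head);

	/* Defer the sleeping teardown instead of doing it here. */
	INIT_WORK(&pool->release_work, zswap_pool_release_workfn);
	schedule_work(&pool->release_work);
}
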
Or do you have another patch to fix the second call path?