Message-ID: <20190912100429.fk5er66aostbtvyi@box>
Date: Thu, 12 Sep 2019 13:04:29 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Yu Zhao <yuzhao@...gle.com>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/4] mm: avoid slub allocation while holding list_lock
On Wed, Sep 11, 2019 at 08:31:10PM -0600, Yu Zhao wrote:
> If we are already under list_lock, don't call kmalloc(). Otherwise we
> will run into a deadlock, because kmalloc() also tries to grab the
> same lock.
>
> Fix the problem by using a static bitmap instead.
>
> WARNING: possible recursive locking detected
> --------------------------------------------
> mount-encrypted/4921 is trying to acquire lock:
> (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437
>
> but task is already holding lock:
> (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
> CPU0
> ----
> lock(&(&n->list_lock)->rlock);
> lock(&(&n->list_lock)->rlock);
>
> *** DEADLOCK ***
>
> Signed-off-by: Yu Zhao <yuzhao@...gle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
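
For anyone skimming the thread, a minimal sketch of the pattern the
patch removes (identifiers are illustrative, not necessarily the exact
code in this series): allocating the bitmap with kmalloc() while
n->list_lock is held can re-enter the slab allocator, which then tries
to take the same list_lock; a statically allocated bitmap needs no
allocation under the lock.

	/* Sketch only -- not the patch itself. */

	/* Before: re-enters the allocator under n->list_lock. */
	spin_lock_irq(&n->list_lock);
	map = bitmap_zalloc(page->objects, GFP_ATOMIC);	/* may recurse into SLUB */
	...

	/* After: bitmap is allocated at build time, so nothing is
	 * allocated while the lock is held.
	 */
	static DECLARE_BITMAP(object_map, MAX_OBJS_PER_PAGE);

	spin_lock_irq(&n->list_lock);
	bitmap_zero(object_map, page->objects);

Note that a single static bitmap is shared by all users, so access to
it has to be serialized separately (e.g. by its own lock); that detail
is omitted from the sketch above.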
--
Kirill A. Shutemov