Message-Id: <5E8B439C-5971-49DF-BDC4-3B53268F8FF4@lightnvm.io>
Date: Mon, 2 Oct 2017 14:09:35 +0200
From: Javier González <jg@...htnvm.io>
To: Rakesh Pandit <rakesh@...era.com>
Cc: Matias Bjørling <mb@...htnvm.io>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/6] lightnvm: pblk: free up mempool allocation for erases correctly
> On 1 Oct 2017, at 15.25, Rakesh Pandit <rakesh@...era.com> wrote:
>
> While separating read and erase mempools in 22da65a1b, pblk_g_rq_cache
> was used twice to set aside memory for both erase and read requests.
> Because the same kmem cache is used for both, a single call to
> kmem_cache_destroy wouldn't deallocate everything. Repeatedly loading
> and unloading the pblk module would eventually leak memory.
>
> The fix is to use a genuinely separate kmem cache and track it
> appropriately.
>
> Fixes: 22da65a1b ("lightnvm: pblk: decouple read/erase mempools")
> Signed-off-by: Rakesh Pandit <rakesh@...era.com>
>
I'm not sure I follow this logic. I assume you're referring to the
refcount on the kmem_cache. During cache creation, all is good; if a
later cache creation fails, destruction is guaranteed, since the
refcount is 0. On tear-down (pblk_core_free), we destroy the mempools
associated with the caches. In this case the refcount also drops to 0,
as we destroy both mempools. So I don't see where the leak can happen.
Am I missing something?
In any case, Jens reported some bugs in the mempools, where we did not
guarantee forward progress. You can find the original discussion and
the mempool audit at [1]. It would be good if you reviewed these.
[1] https://www.spinics.net/lists/kernel/msg2602274.html
Thanks,
Javier