Message-ID: <00698BD3-4382-4033-908F-BEB63E7FAD57@cnexlabs.com>
Date: Thu, 16 Aug 2018 15:53:34 +0000
From: Javier Gonzalez <javier@...xlabs.com>
To: Matias Bjørling <mb@...htnvm.io>
CC: "Konopko, Igor J" <igor.j.konopko@...el.com>,
"marcin.dziegielewski@...el.com" <marcin.dziegielewski@...el.com>,
Hans Holmberg <hans.holmberg@...xlabs.com>,
Heiner Litz <hlitz@...c.edu>,
Young Tack Tack Jin <youngtack.jin@...cuitblvd.com>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2 0/1] lightnvm: move bad block and chunk state logic
to core
> On 16 Aug 2018, at 13.34, Matias Bjørling <mb@...htnvm.io> wrote:
>
> This patch moves the 1.2 and 2.0 block/chunk metadata retrieval to
> core.
>
> Hi Javier, I did not end up using your patch. I had misunderstood what
> was implemented. Instead, I implemented the detection of each chunk by
> first sensing the first page, then the last page, and if the chunk
> is sensed as open, a per-page scan is executed to update the write
> pointer appropriately.
>
I see why you want to do it this way to maintain the chunk abstraction,
but this is potentially very inefficient, as blocks not used by any
target will be recovered unnecessarily. Note that in 1.2, targets are
expected to recover the write pointer themselves. What is more, in the
normal path this information is part of the metadata being stored, so no
wp recovery is needed. Still, this approach forces recovery on each 1.2
instance creation (also on factory reset). In this context, you are
right that the patch I proposed only addresses the double-erase issue,
which was the original motivation, and leaves the actual write pointer
recovery to the normal pblk recovery process.
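
To make the cost concrete, this is roughly how I read the per-chunk
detection you describe. All names, types and helpers in the sketch below
are made up for the example; they are not the ones in your patch:

/* Illustrative sketch only: none of these names come from the patch. */
enum chunk_state { CHUNK_FREE, CHUNK_OPEN, CHUNK_CLOSED };

struct example_chunk {
	enum chunk_state state;
	int wp;			/* write pointer, in pages */
	int num_pages;		/* pages per chunk */
};

/* Assume this returns 1 if the page at offset 'page' has been written. */
int example_page_is_written(struct example_chunk *chk, int page);

static void example_chunk_detect_state(struct example_chunk *chk)
{
	int i;

	/* First page empty: nothing was written, the chunk is free. */
	if (!example_page_is_written(chk, 0)) {
		chk->state = CHUNK_FREE;
		chk->wp = 0;
		return;
	}

	/* Last page written: the chunk is fully written and closed. */
	if (example_page_is_written(chk, chk->num_pages - 1)) {
		chk->state = CHUNK_CLOSED;
		chk->wp = chk->num_pages;
		return;
	}

	/* Open chunk: scan page by page to place the write pointer. */
	chk->state = CHUNK_OPEN;
	for (i = 1; i < chk->num_pages; i++)
		if (!example_page_is_written(chk, i))
			break;
	chk->wp = i;
}

In other words, on top of the two sense reads per chunk, every open
chunk costs a number of page reads proportional to its write pointer,
regardless of whether a target will ever use that block.
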
Besides this, in order to consider this a real possibility, we need to
measure the impact on startup time. For this, could you implement
nvm_bb_scan_chunk() and nvm_bb_chunk_sense() more efficiently by
recovering (i) asynchronously and (ii) concurrently across luns, so that
we can establish the recovery cost more fairly? We can look at specific
penalty ranges afterwards.
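
Something along these lines is what I have in mind. This is only a
sketch: the per-lun scan helper, the device structure and the workqueue
parameters are all invented for the example, not taken from your patch:

/*
 * Sketch only: fan the chunk scan out so that each lun is recovered by
 * its own work item and the scans run concurrently. All names here are
 * invented for illustration.
 */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_dev;

/* Assume this scans all chunks in one lun, issuing the reads for that
 * lun asynchronously and waiting for their completion internally. */
void example_scan_lun_chunks(struct example_dev *dev, int lun);

struct example_lun_work {
	struct work_struct work;
	struct example_dev *dev;
	int lun;
};

static void example_lun_scan_fn(struct work_struct *work)
{
	struct example_lun_work *w =
		container_of(work, struct example_lun_work, work);

	example_scan_lun_chunks(w->dev, w->lun);
}

static int example_scan_all_luns(struct example_dev *dev, int nr_luns)
{
	struct workqueue_struct *wq;
	struct example_lun_work *works;
	int i;

	wq = alloc_workqueue("chunk_scan", WQ_UNBOUND, 0);
	if (!wq)
		return -ENOMEM;

	works = kcalloc(nr_luns, sizeof(*works), GFP_KERNEL);
	if (!works) {
		destroy_workqueue(wq);
		return -ENOMEM;
	}

	/* One work item per lun, so the per-lun scans run in parallel. */
	for (i = 0; i < nr_luns; i++) {
		works[i].dev = dev;
		works[i].lun = i;
		INIT_WORK(&works[i].work, example_lun_scan_fn);
		queue_work(wq, &works[i].work);
	}

	/* Wait until every per-lun scan has finished. */
	flush_workqueue(wq);
	destroy_workqueue(wq);
	kfree(works);

	return 0;
}

With something like this we could compare startup time with and without
the scan on a real drive and see how the penalty scales with the number
of luns.
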
Also, the recovery scheme in pblk will change significantly with this
approach, so I assume you will send a follow-up patchset reimplementing
recovery for the 1.2 path? I am rebasing wp recovery for 2.0 now and
expect to post it in the next couple of days. This logic can be reused,
but it requires some work and testing. A preliminary version of this
patch can be found here [1].
> Note that one needs a real drive to test the implementation. The 1.2
> qemu implementation is lacking. I did update it a bit, such that
> it defaults to all blocks being free. It can be picked up in the ocssd
> qemu repository.
I added patches to fix storing/recovering chunk metadata in qemu. This
should help you generate an arbitrary chunk state and sanity-test these
patches.
Can you share the tests that you have run to verify this patch? I can
run them on a 1.2 device next week (preferably on a V3 that addresses
the comments above).
[1] https://github.com/OpenChannelSSD/linux/commits/for-4.20/pblk: 3c9c548a83ce
Javier