Message-ID: <1493655131.30303.17.camel@hpe.com>
Date: Mon, 1 May 2017 16:12:13 +0000
From: "Kani, Toshimitsu" <toshi.kani@....com>
To: "dan.j.williams@...el.com" <dan.j.williams@...el.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
"dave.jiang@...el.com" <dave.jiang@...el.com>,
"vishal.l.verma@...el.com" <vishal.l.verma@...el.com>
Subject: Re: [PATCH] libnvdimm: rework region badblocks clearing
On Mon, 2017-05-01 at 08:52 -0700, Dan Williams wrote:
> On Mon, May 1, 2017 at 8:43 AM, Dan Williams <dan.j.williams@...el.com> wrote:
> > On Mon, May 1, 2017 at 8:34 AM, Kani, Toshimitsu <toshi.kani@....com> wrote:
> > > On Sun, 2017-04-30 at 05:39 -0700, Dan Williams wrote:
:
> > >
> > > Hi Dan,
> > >
> > > I was testing the change with CONFIG_DEBUG_ATOMIC_SLEEP set this
> > > time, and hit the following BUG with BTT. This is a separate
> > > issue (not introduced by this patch), but it shows that we have
> > > an issue with the DSM call path as well.
> >
> > Ah, great find, thanks! We don't see this in the unit tests because
> > the nfit_test infrastructure takes no sleeping actions in its
> > simulated DSM path. Outside of converting btt to use sleeping locks
> > I'm not sure I see a path forward. I wonder how bad the performance
> > impact of that would be? Perhaps with opportunistic spinning it
> > won't be so bad, but I don't see another choice.
>
> It's worse than that. Part of the performance optimization of BTT I/O
> was to avoid locking altogether when we could rely on a BTT lane
> percpu, so that would also need to be removed.
I do not have a good idea either, but I'd rather disable this clearing
in the regular BTT write path than add sleeping locks to BTT.
Clearing a bad block in the BTT write path is difficult anyway, since a
BTT write allocates a new block.
Thanks,
-Toshi