Message-ID: <20151211191400.GA24229@redhat.com>
Date: Fri, 11 Dec 2015 14:14:01 -0500
From: Mike Snitzer <snitzer@...hat.com>
To: Nikolay Borisov <n.borisov@...eground.com>
Cc: Tejun Heo <tj@...nel.org>, Nikolay Borisov <kernel@...p.com>,
"Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>,
SiteGround Operations <operations@...eground.com>,
Alasdair Kergon <agk@...hat.com>,
device-mapper development <dm-devel@...hat.com>
Subject: Re: corruption causing crash in __queue_work
On Fri, Dec 11 2015 at 1:00pm -0500,
Nikolay Borisov <n.borisov@...eground.com> wrote:
> On Fri, Dec 11, 2015 at 7:08 PM, Tejun Heo <tj@...nel.org> wrote:
> >
> > Hmmm... No idea why it didn't show up in the debug log, but the only
> > way a workqueue could be in the above state is either it got
> > explicitly destroyed or somehow pwq refcnting is messed up; in both
> > cases it should have shown up in the log.
> >
> > cc'ing dm people. Is there any chance dm-thinp could be using a
> > workqueue after destroying it?
Not that I'm aware of. But never say never?
Plus I'd think we'd see other dm-thinp-specific use-after-free issues
aside from the thin-pool's workqueue.
> In __pool_destroy in dm-thin.c I don't see a call to
> cancel_delayed_work before destroying the workqueue. Is it possible
> that this is the cause?
I cannot see how: __pool_destroy()'s destroy_workqueue() would spew a
bunch of WARN_ONs (and the wq wouldn't be destroyed) if the workqueue
still had outstanding work.
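
For reference, the usual teardown ordering looks something like the
sketch below. The names (struct example_pool etc.) are hypothetical,
not the actual dm-thin code, and it uses cancel_delayed_work_sync(),
which also waits for a running instance to finish:

#include <linux/workqueue.h>

/* Hypothetical sketch of the usual teardown ordering. */
struct example_pool {
	struct workqueue_struct *wq;
	struct delayed_work waker;
};

static void example_pool_destroy(struct example_pool *pool)
{
	/*
	 * Cancel pending delayed work and wait for any running
	 * instance to finish before the workqueue goes away.
	 */
	cancel_delayed_work_sync(&pool->waker);

	/*
	 * destroy_workqueue() drains the queue first and WARNs
	 * (leaving the wq allocated) if work is still outstanding
	 * or keeps getting requeued.
	 */
	destroy_workqueue(pool->wq);
}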
__pool_destroy() is called once the thin-pool's ref count drops to 0
(see __pool_dec which is called when the thin-pool is removed --
e.g. with 'dmsetup remove'). This code is only reachable when nothing
else is using the thin-pool.
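
The drop-to-zero path is simple enough to sketch, reusing the
hypothetical struct above. This assumes an added int ref_count field
and that callers are serialized externally (dm does this under a
mutex); it's a simplification, not the verbatim code:

/* Sketch: assumes pool->ref_count exists and callers are serialized. */
static void example_pool_dec(struct example_pool *pool)
{
	BUG_ON(!pool->ref_count);
	if (!--pool->ref_count)
		example_pool_destroy(pool);	/* last reference gone */
}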
And the thin-pool can only be removed once all thin devices that
depend on it have been removed. And each individual thin device
waits for all outstanding IO to complete before it can be removed.
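
In the same hypothetical terms: each thin device pins its pool for its
whole lifetime, and its dtr only runs after device-mapper has suspended
the device and waited for its IO, so the pool destroy cannot run out
from under an active thin (again a sketch, not the actual dm-thin code):

/* Sketch continuing the hypothetical names above. */
struct example_thin {
	struct example_pool *pool;
};

static void example_thin_dtr(struct example_thin *tc)
{
	/*
	 * By the time the dtr runs, device-mapper has already
	 * suspended the thin device and waited for outstanding IO.
	 */
	example_pool_dec(tc->pool);	/* may trigger example_pool_destroy() */
}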