Date:	Sat, 12 Dec 2015 13:49:50 +0200
From:	Nikolay Borisov <n.borisov@...eground.com>
To:	Mike Snitzer <snitzer@...hat.com>
Cc:	Tejun Heo <tj@...nel.org>, Nikolay Borisov <kernel@...p.com>,
	"Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>,
	SiteGround Operations <operations@...eground.com>,
	Alasdair Kergon <agk@...hat.com>,
	device-mapper development <dm-devel@...hat.com>
Subject: Re: corruption causing crash in __queue_work



On 12/11/2015 09:14 PM, Mike Snitzer wrote:
> On Fri, Dec 11 2015 at  1:00pm -0500,
> Nikolay Borisov <n.borisov@...eground.com> wrote:
> 
>> On Fri, Dec 11, 2015 at 7:08 PM, Tejun Heo <tj@...nel.org> wrote:
>>>
>>> Hmmm... No idea why it didn't show up in the debug log but the only
>>> way a workqueue could be in the above state is either it got
>>> explicitly destroyed or somehow pwq refcnting is messed up, in both
>>> cases it should have shown up in the log.
>>>
>>> cc'ing dm people.  Is there any chance dm-thinp could be using
>>> workqueue after destroying it?
> 
> Not that I'm aware of.  But never say never?
> 
> Plus I'd think we'd see other dm-thinp specific use-after-free issues
> aside from the thin-pool's workqueue.
> 
>> In __pool_destroy in dm-thin.c I don't see a call to
>> cancel_delayed_work before destroying the workqueue. Is it possible
>> that this is the cause?
> 
> Cannot see how: __pool_destroy()'s destroy_workqueue() would spew a
> bunch of WARN_ONs (and the wq wouldn't be destroyed) if the workqueue
> had outstanding work.
> 
> __pool_destroy() is called once the thin-pool's ref count drops to 0
> (see __pool_dec which is called when the thin-pool is removed --
> e.g. with 'dmsetup remove').  This code is only reachable when nothing
> else is using the thin-pool.
> 
> And the thin-pool can only be removed once all thin devices that
> depend on it have first been removed.  And each individual thin device
> waits for all outstanding IO before it can be removed.
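
For reference, the teardown ordering Mike describes looks roughly like
this (a minimal sketch of the generic workqueue pattern, not actual
dm-thin.c code; the struct and field names here are made up):

#include <linux/slab.h>
#include <linux/workqueue.h>

struct my_dev {				/* hypothetical, not dm-thin's struct pool */
	struct workqueue_struct *wq;
	struct delayed_work dwork;
};

static void my_dev_destroy(struct my_dev *d)
{
	/*
	 * Cancel delayed work first; cancel_delayed_work_sync() also
	 * waits for an already-running instance to finish.
	 */
	cancel_delayed_work_sync(&d->dwork);

	/*
	 * destroy_workqueue() drains the queue and, as noted above,
	 * spews WARN_ONs rather than failing silently if work is still
	 * outstanding or being requeued.
	 */
	destroy_workqueue(d->wq);
	kfree(d);
}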

Ok, I have now taken a closer look at the code, and it does seem that
when the pool is suspended, its postsuspend callback cancels the delayed
work and flushes the workqueue. But given that I see these failures on
at least 2-3 servers per day, I doubt it is a hardware/machine-specific
issue. Furthermore, the fact that it is always a dm-thin queue being
referenced points in the direction of dm-thin, even though the code
looks solid in that regard.
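
For reference, the suspend-path ordering I am describing looks roughly
like this (a sketch only, with approximate field names, not a verbatim
copy of dm-thin.c):

#include <linux/workqueue.h>

struct pool_like {			/* hypothetical; field names approximate */
	struct workqueue_struct *wq;
	struct delayed_work waker;
};

static void pool_like_postsuspend(struct pool_like *pool)
{
	/* Stop the source of delayed re-queuing first... */
	cancel_delayed_work_sync(&pool->waker);

	/* ...then drain whatever is already on the queue. */
	flush_workqueue(pool->wq);
}

With that ordering nothing should remain queued by the time
__pool_destroy() runs, which matches my reading that the code is solid
in this regard.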

Regards,
Nikolay
