Message-ID: <alpine.LFD.2.00.0912221004540.6879@localhost.localdomain>
Date:	Tue, 22 Dec 2009 10:28:57 -0800 (PST)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Peter Zijlstra <peterz@...radead.org>
cc:	Tejun Heo <tj@...nel.org>,
	Arjan van de Ven <arjan@...ux.intel.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Andi Kleen <andi@...stfloor.org>, awalls@...ix.net,
	linux-kernel@...r.kernel.org, jeff@...zik.org, mingo@...e.hu,
	akpm@...ux-foundation.org, rusty@...tcorp.com.au,
	cl@...ux-foundation.org, dhowells@...hat.com, avi@...hat.com,
	johannes@...solutions.net
Subject: Re: workqueue thing



On Tue, 22 Dec 2009, Peter Zijlstra wrote:
> 
> Which in turn would imply we cannot carry fwd the current lockdep
> annotations, right?
> 
> Which means we'll be stuck in a situation where A flushes B and B
> flushes A will go undetected until we actually hit it.

No, lockdep should still work. It just means that waiting for an 
individual work should be seen as waiting only for the locks that that 
work itself has taken - rather than waiting for all the locks that any 
worker has taken.

And the way the workqueue lockdep stuff is done, I'd assume this just 
automatically fixes itself when rewritten.
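
Schematically (a sketch of the annotation pattern, not compilable code - 
the names mirror the kernel's lockdep hooks but this is not the real 
implementation), the point is where the lockdep_map lives. The worker 
"acquires" the per-work map around each item it runs, so flushing one 
work item makes the waiter depend only on that item's own locks:

```c
/* Sketch only: per-work lockdep annotation, not the actual kernel code. */

/* Worker side: locks taken inside work->func() are recorded against
 * this specific work item's lockdep map. */
lock_map_acquire(&work->lockdep_map);
work->func(work);
lock_map_release(&work->lockdep_map);

/* flush_work() side: acquire the same per-work map, so lockdep sees a
 * dependency on THIS work's locks only... */
lock_map_acquire(&work->lockdep_map);
lock_map_release(&work->lockdep_map);

/* ...whereas flushing the whole queue would acquire a per-queue map,
 * making the waiter depend on every lock any queued work ever took. */
```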

> Where exactly does the tty thing live in the code?

I think I worked around it all, but I cursed the workqueues while I did 
it. The locking problem is tty->ldisc_mutex vs flushing the ldisc buffers. 
The flushing itself doesn't even care about the ldisc_mutex, but if it 
happens to be queued behind the hangup work - which does care about that 
mutex - you still can't wait for it.

Happily, it turns out that you can synchronously _cancel_ the damn thing 
despite this problem, because the cancel can take it off the list if it is 
still waiting for something else (e.g. another workqueue entry in front of 
it), and if it's actively running we know that it's not blocked waiting 
for that hangup work that needs the lock, so for that particular case we 
can even wait for it to finish running - even if we couldn't do that in 
general.

And I 
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
