Message-Id: <20090126154217.b312e278.akpm@linux-foundation.org>
Date:	Mon, 26 Jan 2009 15:42:17 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	oleg@...hat.com, a.p.zijlstra@...llo.nl, rusty@...tcorp.com.au,
	travis@....com, mingo@...hat.com, davej@...hat.com,
	cpufreq@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.

On Mon, 26 Jan 2009 23:59:57 +0100
Ingo Molnar <mingo@...e.hu> wrote:

> 
> * Andrew Morton <akpm@...ux-foundation.org> wrote:
> 
> > On Mon, 26 Jan 2009 23:20:02 +0100
> > Ingo Molnar <mingo@...e.hu> wrote:
> > 
> > > 
> > > * Andrew Morton <akpm@...ux-foundation.org> wrote:
> > > 
> > > > On Mon, 26 Jan 2009 23:05:37 +0100
> > > > Ingo Molnar <mingo@...e.hu> wrote:
> > > > 
> > > > > 
> > > > > * Andrew Morton <akpm@...ux-foundation.org> wrote:
> > > > > 
> > > > > > Well it turns out that I was having a less-than-usually-senile moment:
> > > > > > 
> > > > > > :     implement flush_work()
> > > > > 
> > > > > > Why isn't that working in this case??
> > > > > 
> > > > > how would that work in this case? We defer processing into the workqueue 
> > > > > exactly because we want its per-CPU properties.
> > > > 
> > > > It detaches the work item, moves it to head-of-queue, reinserts it then 
> > > > waits on it.  I think.
> > > > 
> > > > This might have a race+hole.  If a currently-running "unrelated" work 
> > > > item tries to take the lock which the flush_work() caller is holding 
> > > > then there's no way in which keventd will come back to execute the work 
> > > > item which we just put on the head of queue.
> > > 
> > > Correct - or the unrelated worklet might also be blocked on something - so 
> > > the window is rather large.
> > > 
> > 
> > hm, OK, that sucks.
> > 
> > But the deadlock still exists with Rusty's patches, doesn't it?  We 
> > still have a single kernel thread per CPU processing all the unrelated 
> > work_on_cpu() callers.  All we've done is to decouple work_on_cpu() from 
> > the keventd queue.
> 
> This particular deadlock does not exist - but you are indeed right that 
> similar types of 'unrelated' interactions might exist in the future, as 
> the usage of this facility is extended.

So... can we call the new kernel threads [kbandaidd]?
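
To spell out that flush_work() window (my_lock/my_work/unrelated_fn
are made up - the shape is what matters):

	/*
	 * caller, on CPU 1:              keventd/1:
	 *
	 * mutex_lock(&my_lock);          run_workqueue()
	 * flush_work(&my_work);            -> unrelated_fn()
	 *   requeues my_work at the            mutex_lock(&my_lock);
	 *   head of the queue, then            blocks forever: the
	 *   waits for keventd/1                flusher holds it
	 *
	 * keventd/1 never gets back to its queue, my_work never runs,
	 * flush_work() never returns.
	 */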

> > If correct, we'd need to create a gaggle of kernel threads on each call 
> > to work_on_cpu(), which doesn't sound nice.
> > 
> > A more efficient but trickier approach would be to create kernel threads 
> > within flush_work(), with which to run the CPU-specific worklet.  We 
> > only need to do that in the case where the CPU's keventd thread was off 
> > doing something and might deadlock, which will be rare. If the keventd 
> > was just parked waiting for something to do then we can safely feed it 
> > the to-be-flushed work item for immediate processing.
> 
> I think what you describe is a variant of the syslet thread pool ;-)

It's fairly simple.  The slightly tricky bit will be ensuring that
keventd will reliably swallow the work item which we just fed it,
before anything else.  But I guess the spinlock will be sufficient there.
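
Something like this, under the cwq spinlock (field names from memory -
probably not quite what's in kernel/workqueue.c):

	spin_lock_irq(&cwq->lock);
	/*
	 * Head insertion under the same lock which run_workqueue()
	 * takes to pick its next entry: once keventd comes back to
	 * the queue it must see this item before anything else.
	 */
	list_add(&work->entry, &cwq->worklist);
	wake_up(&cwq->more_work);
	spin_unlock_irq(&cwq->lock);
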
But I don't think we should do any of this.

> > It'd be saner to just say "don't call work_on_cpu() while holding locks"
> > :( I bet there's some lockdep infrastructure which we could peek into to
> > add the assertion check...
> 
> The problem isn't doing the assertions - lockdep already covers workqueue 
> dependencies very efficiently.
> 
> The problem is the intrinsic utility of work_on_cpu(): we _really_ want 
> such a generic facility to be usable from any (blockable) context, just 
> like on_each_cpu(func, info) does for atomic functions, without 
> restrictions on locking context.

Do we?  work_on_cpu() is some last-gasp oh-i-screwed-my-code-up thing. 
We _really_ want people to use on_each_cpu()!

We should bust a gut to keep the number of callers to the
resource-intensive (deadlocky!) work_on_cpu() to a minimum.
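
The distinction, roughly (signatures from memory - check the headers):

	/*
	 * Broadcast, atomic context, nothing deferred to another
	 * thread - the caller's locks don't matter:
	 */
	on_each_cpu(func, info, 1);

	/*
	 * One CPU, blockable, deferred to a workqueue thread - now
	 * we depend on that thread making progress, which is where
	 * the deadlock talk above comes from:
	 */
	ret = work_on_cpu(cpu, fn, arg);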

(And to think that adding add_timer_on() creeped me out).


hm.  None of that was very helpful.  How to move forward?

I think I disagree that work_on_cpu() should be made into some robust,
smiled-upon core kernel facility.  It _is_ slow, it _is_ deadlockable. 
It should be positioned as something which is only used as a last
resort.  And if you _have_ to use it, sort out your locking!

Plus the number of code sites which want to fiddle with other CPUs in
this manner will always be small.  cpufreq, MCE, irq-affinity, things
like that.

What is the deadlock in acpi-cpufreq?  Which lock, and who is the
"other" holder of that lock?