Message-ID: <4C192B94.9020601@kernel.org>
Date:	Wed, 16 Jun 2010 21:52:52 +0200
From:	Tejun Heo <tj@...nel.org>
To:	Daniel Walker <dwalker@...eaurora.org>
CC:	mingo@...e.hu, awalls@...ix.net, linux-kernel@...r.kernel.org,
	jeff@...zik.org, akpm@...ux-foundation.org, rusty@...tcorp.com.au,
	cl@...ux-foundation.org, dhowells@...hat.com,
	arjan@...ux.intel.com, johannes@...solutions.net, oleg@...hat.com,
	axboe@...nel.dk
Subject: Re: Overview of concurrency managed workqueue

On 06/16/2010 09:36 PM, Daniel Walker wrote:
>> Yeah, and it would wait for that by flushing the work, right?  If the
>> waiting part is using completion or some other event notification,
>> you'll just need to update the driver so that the kernel can determine
>> who's waiting for what so that it can bump the waited one's priority.
>> Otherwise, the problem can't be solved.
> 
> This has nothing to do with flushing .. You keep bringing this back into
> the kernel for some reason, we're talking about entirely userspace
> threads ..

Yeah, sure, how would those userspace threads wait for the event?  And
how would the kernel be able to honor latency requirements not knowing
the dependency?
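
The pattern I keep referring to is the usual completion handshake,
roughly like this (a sketch only; all the foo_* names are made up):

    #include <linux/completion.h>
    #include <linux/kernel.h>
    #include <linux/workqueue.h>

    /* driver state: the waiter and the work item share a completion */
    struct foo_dev {
            struct work_struct work;
            struct completion done;
    };

    static void foo_work_fn(struct work_struct *work)
    {
            struct foo_dev *dev = container_of(work, struct foo_dev, work);

            /* ... the actual processing ... */

            /* wake whoever sleeps in foo_wait() below */
            complete(&dev->done);
    }

    /* runs on behalf of the userspace waiter, e.g. from a read() */
    static int foo_wait(struct foo_dev *dev)
    {
            /*
             * All the kernel sees here is a task sleeping on a
             * completion.  It has no idea which worker it would
             * have to boost - that's the missing dependency.
             */
            return wait_for_completion_interruptible(&dev->done);
    }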

>> Well, it's somewhat related,
>>
>> * Don't depend on works or workqueues for RT stuff.  It's not designed
>>   for that.
> 
> Too bad .. We have a POSIX OS, and POSIX has RT priorities .. You can't
> control what priorities users give those threads.

So, you're not talking about real RT w/ timing guarantees?  So no
priority inheritance or whatever?  Geez, then any lock can give you
unexpected latencies.
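
If something really has RT requirements, the usual way is a dedicated
kthread with an explicitly set priority, along these lines (a sketch;
the names and the priority value are illustrative):

    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>

    static int foo_rt_thread(void *data)
    {
            while (!kthread_should_stop()) {
                    /* wait for and process latency-sensitive events */
            }
            return 0;
    }

    static struct task_struct *foo_start(void)
    {
            struct sched_param param = { .sched_priority = 50 };
            struct task_struct *task;

            task = kthread_run(foo_rt_thread, NULL, "foo_rt");
            if (!IS_ERR(task))
                    sched_setscheduler(task, SCHED_FIFO, &param);
            return task;
    }

That way the thread is owned by the code which knows its latency
requirements, instead of a priority being painted from userspace onto
an anonymous worker which may not even exist tomorrow.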

>> * If you really wanna solve the problem, please go ahead and _solve_
>>   it yourself.  (read the rest of the mail)
> 
> You're causing the problem, why should I solve it? My solution would
> just be to NAK your patches.

I don't have any problem with that.  I would be almost happy to get
your NACK.

>> Because the workqueue might just go away in the next release or other
>> unrelated work which shouldn't get high priority might be scheduled
>> there.  Maybe the name of the workqueue changes or it gets merged with
>> another workqueue.  Maybe it gets split.  Maybe the system suspends
>> and resumes and nobody knows that workers die and are created again
>> over those events.  Maybe the backend implementation changes so that
>> workers are pooled.
> 
> Changing the priorities is not fragile, you're saying the kernel can
> change underneath you, which makes it hard to know what a given
> workqueue is actually doing.. Ok, that's fair.. This doesn't make it
> less useful since people can discover thread dependencies without
> looking at the kernel source.

Sigh, so, yeah, the whole thing is fragile.  When did I say nice(1) is
fragile?

>>>> * depends heavily on unrelated implementation details
>>>
>>> I have no idea what this means.
>>
>> (continued) because all those are implementation details which are NOT
>> PART OF THE INTERFACE in any way.
> 
> yet they are part of the interface whether you like it or not. How could
> you use threads and think thread priorities are not part of the interface?
> 
> In your new system how do you currently prevent thread priorities on
> your new workqueue threads from getting modified? Surely you must be
> doing that since you don't want those priorities to change, right?

No, I don't.  If root wants to shoot itself in the foot, it can.  In
the same vein, you can lower the priority of the migration thread at
your own peril.
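
All that takes is root and a single sched_setscheduler() call; this is
basically what chrt(1) does (userspace sketch, minimal error handling):

    /* setrt.c - force an arbitrary thread to SCHED_FIFO, as root.
     * build: gcc -o setrt setrt.c, usage: ./setrt <pid>
     */
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    int main(int argc, char **argv)
    {
            struct sched_param param = { .sched_priority = 1 };

            if (argc != 2) {
                    fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                    return 1;
            }
            if (sched_setscheduler((pid_t)atoi(argv[1]),
                                   SCHED_FIFO, &param) < 0) {
                    perror("sched_setscheduler");
                    return 1;
            }
            return 0;
    }

Nothing stops you from pointing that at a worker thread or at
migration/N.  The kernel won't fight you; whether the pid you just
boosted still means what you think after the next release is the whole
question.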

>> Gee, I don't know.  These are pretty evident problems to me.  Aren't
>> they obvious?
> 
> You're just looking at the problem through your specific use case glasses
> without imagining what else people could be doing with the kernel.
> 
> How often do you think workqueues change names anyway? It's not all that
> often.

So do most symbols in the kernel.  What you're saying applies almost
word for word to grepping /proc/kallsyms.  Can't you see it?

>> And you're assuming grepping /proc/kallsyms is not useful?  It's
>> useful in its ad-hoc, unsupported, hacky way.
> 
> Well, let's say it's useful and 100k people use that method in its
> "hacky" way .. When does it become a feature then?

If 100k people actually want it, solve the damn problem instead of
holding onto the band-aid which doesn't work anyway.
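
To make the analogy concrete, the kallsyms version of the hack looks
like this (userspace sketch; the grepped symbol is just an example):

    /* scrape an unstable kernel namespace from userspace - works
     * today, breaks silently the day the symbol gets renamed, which
     * is exactly the failure mode of matching worker thread names
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[256];
            FILE *f = fopen("/proc/kallsyms", "r");

            if (!f) {
                    perror("/proc/kallsyms");
                    return 1;
            }
            while (fgets(line, sizeof(line), f))
                    if (strstr(line, " do_fork"))
                            fputs(line, stdout);
            fclose(f);
            return 0;
    }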

> So you're totally unwilling to change your patches to correct this
> problem? Is that what you're getting at? Agree or disagree isn't
> relevant; it's a real problem or I wouldn't have brought it up.
>
> btw, I already gave you a relatively easy way to correct this.

I'm sorry, but the problem you brought up seems bogus to me.  So does
the solution.  In debugfs?  Is that a debug feature or is it an API?
Do we keep the workqueues stable then?  Do we make announcements when
moving a work item from one workqueue to another?  If it's a debug
feature, why are we talking like this anyway?

Thanks.

-- 
tejun