Message-ID: <1276719599.9309.243.camel@m0nster>
Date: Wed, 16 Jun 2010 13:19:59 -0700
From: Daniel Walker <dwalker@...eaurora.org>
To: Tejun Heo <tj@...nel.org>
Cc: mingo@...e.hu, awalls@...ix.net, linux-kernel@...r.kernel.org,
jeff@...zik.org, akpm@...ux-foundation.org, rusty@...tcorp.com.au,
cl@...ux-foundation.org, dhowells@...hat.com,
arjan@...ux.intel.com, johannes@...solutions.net, oleg@...hat.com,
axboe@...nel.dk
Subject: Re: Overview of concurrency managed workqueue
On Wed, 2010-06-16 at 21:52 +0200, Tejun Heo wrote:
> On 06/16/2010 09:36 PM, Daniel Walker wrote:
> >> Yeah, and it would wait for that by flushing the work, right? If the
> >> waiting part is using completion or some other event notification,
> >> you'll just need to update the driver so that the kernel can determine
> >> who's waiting for what so that it can bump the waited one's priority.
> >> Otherwise, the problem can't be solved.
> >
> > This has nothing to do with flushing .. You keep bringing this back into
> > the kernel for some reason; we're talking about entirely userspace
> > threads ..
>
> Yeah, sure, how would those userspace threads wait for the event? And
> how would the kernel be able to honor latency requirements not knowing
> the dependency?
Let's say a userspace thread calls into the kernel via some method, a
syscall for instance, and while executing the syscall it hits a mutex,
semaphore, completion, or some other blocking mechanism. Then the
userspace thread blocks.
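To make the dependency concrete, here is a minimal sketch (a
hypothetical "foo" driver, not code from this thread or from any of the
patches) of a syscall path that queues a work item and then blocks on a
completion until a workqueue thread has run it:

#include <linux/workqueue.h>
#include <linux/completion.h>

struct foo_request {
	struct work_struct work;
	struct completion done;
};

/* Runs in a workqueue thread, not in the caller's context. */
static void foo_work_fn(struct work_struct *work)
{
	struct foo_request *req = container_of(work, struct foo_request,
					       work);

	/* ... the actual processing happens here ... */
	complete(&req->done);
}

/* Hypothetical ioctl/syscall path entered by the userspace thread. */
static long foo_ioctl_path(struct workqueue_struct *wq)
{
	struct foo_request req;	/* on-stack work, simplified */

	INIT_WORK(&req.work, foo_work_fn);
	init_completion(&req.done);
	queue_work(wq, &req.work);

	/*
	 * The calling thread -- possibly SCHED_FIFO -- now sleeps until
	 * some worker runs foo_work_fn().  If that worker runs at a low
	 * priority, the RT caller inherits its latency.
	 */
	wait_for_completion(&req.done);
	return 0;
}

At wait_for_completion() time the scheduler has no record that this
particular caller is waiting on that particular worker, which is the
dependency Tejun says the kernel would need to be told about before it
could honor any latency requirement.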
> >> Well, it's somewhat related,
> >>
> >> * Don't depend on works or workqueues for RT stuff. It's not designed
> >> for that.
> >
> > Too bad .. We have a POSIX OS, and POSIX has RT priorities .. You can't
> > control what priorities users give those threads.
>
> So, you're not talking about the real RT w/ timing guarantees? So no
> priority inheritance or whatever? Gees, then any lock can give you
> unexpected latencies.
I'm talking about normal threads with RT priorities ..
> >> * If you really wanna solve the problem, please go ahead and _solve_
> >> it yourself. (read the rest of the mail)
> >
> > You're causing the problem; why should I solve it? My solution would just
> > be to NAK your patches.
>
> I don't have any problem with that. I would be almost happy to get
> your NACK.
Oh yeah? Why is that?
> >> Because the workqueue might just go away in the next release or other
> >> unrelated work which shouldn't get high priority might be scheduled
> >> there. Maybe the name of the workqueue changes or it gets merged with
> >> another workqueue. Maybe it gets split. Maybe the system suspends
> >> and resumes and nobody knows that workers die and are created again
> over those events. Maybe the backend implementation changes so that
> >> workers are pooled.
> >
> > Changing the priorities is not fragile; you're saying that one's ability to
> > adapt to changes in the kernel makes it hard to know what the workqueue
> > is actually doing.. Ok, that's fair.. This doesn't make it less useful
> > since people can discover thread dependencies without looking at the
> > kernel source.
>
> Sigh, so, yeah, the whole thing is fragile. When did I say nice(1) is
> fragile?
Like I said, that type of "fragile" doesn't really matter all that
much, since you can just rediscover any new thread dependencies on a
new kernel. Anyone running an RT thread would likely do that anyway,
since even syscalls are "fragile" in this way.
> >> Gee, I don't know. These are pretty evident problems to me. Aren't
> >> they obvious?
> >
> > You're just looking at the problem through your specific use-case glasses
> > without imagining what else people could be doing with the kernel.
> >
> > How often do you think workqueues change names anyway? It's not all that
> > often.
>
> So do most symbols in the kernel. What you're saying applies almost
> word for word to grepping /proc/kallsyms. Can't you see it?
No .. I don't follow your grepping comparison .. Symbols in the kernel
change pretty often, and generally aren't exposed to userspace in a way
that one can readily see them, say by using "top" or "ps" for
example .. /proc/kallsyms is unknown, man; I only vaguely knew what it
was till you mentioned it, yet _everyone_ has seen "kblockd".
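For what it's worth, the technique being defended boils down to
something like the following userspace sketch. The find_kthread()
helper, the priority of 50, and the hard-coded "kblockd/0" name are
illustrative assumptions, and it relies on /proc/<pid>/comm (older
kernels would have to parse /proc/<pid>/stat instead):

#include <dirent.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/* Find a kernel thread's PID by the name shown in top/ps. */
static pid_t find_kthread(const char *name)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	char path[64], comm[64];

	if (!proc)
		return -1;
	while ((de = readdir(proc)) != NULL) {
		pid_t pid = (pid_t)atoi(de->d_name);
		FILE *f;

		if (pid <= 0)
			continue;	/* not a pid directory */
		snprintf(path, sizeof(path), "/proc/%d/comm", pid);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(comm, sizeof(comm), f)) {
			comm[strcspn(comm, "\n")] = '\0';
			if (strcmp(comm, name) == 0) {
				fclose(f);
				closedir(proc);
				return pid;
			}
		}
		fclose(f);
	}
	closedir(proc);
	return -1;
}

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };
	pid_t pid = find_kthread("kblockd/0");

	if (pid < 0)
		return 1;
	/* Equivalent to: chrt -f -p 50 <pid> */
	return sched_setscheduler(pid, SCHED_FIFO, &sp) ? 1 : 0;
}

The hard-coded thread name is exactly the fragility Tejun objects to;
the rediscovery step described above amounts to re-running this against
whatever names the new kernel's threads happen to carry.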
> >> And you're assuming grepping /proc/kallsyms is not useful? It's
> >> useful in its ad-hoc, unsupported, hacky way.
> >
> > Well, let's say it's useful and 100k people use that method in its
> > "hacky" way .. When does it become a feature then?
>
> If 100k people actually want it, solve the damn problem instead of
> holding onto the bandaid which doesn't work anyway.
Again with the mass assumptions ..
> > So you're totally unwilling to change your patches to correct this
> > problem? Is that what you're getting at? Agree or disagree isn't
> > relevant; it's a real problem or I wouldn't have brought it up.
> >
> > btw, I already gave you a relatively easy way to correct this.
>
> I'm sorry but the problem you brought up seems bogus to me. So does
> the solution. In debugfs? Is that a debug feature or is it an API? Do
> we keep the workqueues stable then? Do we make announcements when
> moving one work from one workqueue to another? If it's a debug
> feature, why are we talking like this anyway?
I was suggesting it as a debug feature, but if people screamed loudly
enough then you would have to make it an API .. You need to have feature
parity with current mainline, which you don't have ..
I don't know what you mean by "keep the workqueues stable" .. you mean
the naming? No, you don't .. You don't make announcements either ..
I suggested it as a debug feature, yes, and you're the one arguing with
_me_ ..
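For reference, a debugfs mapping of the sort being suggested could look
roughly like the sketch below. The wq_info registry and its fields are
hypothetical stand-ins, not the real cmwq internals, and locking around
the list is omitted:

#include <linux/debugfs.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/seq_file.h>

/* Hypothetical registry mapping workqueue names to worker threads. */
struct wq_info {
	struct list_head node;
	const char *name;
	struct task_struct *worker;
};

static LIST_HEAD(wq_list);

static int wq_map_show(struct seq_file *m, void *unused)
{
	struct wq_info *info;

	list_for_each_entry(info, &wq_list, node)
		seq_printf(m, "%-20s %d\n", info->name,
			   task_pid_nr(info->worker));
	return 0;
}

static int wq_map_open(struct inode *inode, struct file *file)
{
	return single_open(file, wq_map_show, NULL);
}

static const struct file_operations wq_map_fops = {
	.open		= wq_map_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

static int __init wq_map_init(void)
{
	/* Shows up as <debugfs>/workqueue_map when debugfs is mounted. */
	debugfs_create_file("workqueue_map", 0444, NULL, NULL,
			    &wq_map_fops);
	return 0;
}
module_init(wq_map_init);

Entries would have to be updated as workers are created and destroyed,
which is where the suspend/resume and worker-pooling concerns raised
above come in.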
Daniel