Date:	Tue, 11 Jan 2011 11:38:25 -0800
From:	Kent Overstreet <kent.overstreet@...il.com>
To:	Tejun Heo <tj@...nel.org>
CC:	linux-kernel@...r.kernel.org
Subject: Re: Screwing with the concurrency limit

On 01/09/2011 06:41 AM, Tejun Heo wrote:
> Hello,
>
> On Sat, Jan 08, 2011 at 10:06:04AM -0800, Kent Overstreet wrote:
>> Well, that doesn't quite do it, I'd need workqueue_inc_max_active()
>> and workqueue_dec_max_active()... set_max_active() would be racy.
>
> You'll of course need to grab an outer mutex around max_active
> updates.
>
>> But also there's no point in adjusting max_active on every cpu's
>> workqueue, adjusting just the one on the local cpu would do exactly
>> what I want and be more efficient too... Can you see any issues in
>> doing it that way?
>
> Can you please explain the use case a bit more?  Is something per-cpu?
> ie. Are your write locks per-cpu?  How frequent do you expect the
> write locking to be?  I think adjusting max_active per-cpu should be
> doable but I'd rather stay away from that.

Hm, I must not be explaining myself very well.

Forget about the write locks for the moment.

So: a high rate of work items that are latency sensitive and _usually_ 
execute without blocking.

We would like a concurrency limit of 1 when they don't block - otherwise 
we're just scheduling for no reason. But sometimes they do block, and 
it's impossible to know whether they will or won't ahead of time.

That's the catch: if we have to block and we have a concurrency limit of 
1, we've got latency-sensitive jobs queued on this CPU that are waiting 
around for no reason.

The write locks are the reason the concurrency limit pretty much has to 
be 1, because if it's not we'll sometimes just be trying to execute 
everything pointlessly.

So I'm trying to have my cake and eat it too. If a work item is 
executing, right before it blocks on IO it would like to do something to 
say "hey, start running whatever is available for this cpu". And it's 
only blocking the other work items on the cpu it's on; that's why I 
suggested adjusting only the local max_active.
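
Roughly what I'm picturing, as a sketch - workqueue_inc_max_active() and 
workqueue_dec_max_active() are the hypothetical helpers from earlier in 
this thread (they don't exist), and the my_* names are made-up 
placeholders:

#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;	/* created with max_active = 1 */

struct my_item {
	struct work_struct work;
	/* ... */
};

static void my_work_fn(struct work_struct *work)
{
	struct my_item *item = container_of(work, struct my_item, work);

	if (likely(!my_item_needs_io(item))) {
		/* Fast path: nothing blocks, so max_active == 1 is exactly right. */
		my_complete(item);
		return;
	}

	/*
	 * About to block on IO: bump the limit so the other latency-sensitive
	 * items queued on this CPU can start running instead of sitting
	 * behind us, then drop it again once we're back.
	 */
	workqueue_inc_max_active(my_wq);
	my_do_io_and_wait(item);
	workqueue_dec_max_active(my_wq);

	my_complete(item);
}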

>
>> What I was really hoping for was something like... maybe
>> move_work_to_workqueue() - if you could do that on the work item
>> you're executing, move it from the workqueue that has max_active = 1
>> to a different one - it's stateless from the caller's perspective.
>
> I don't think that's gonna be a good idea.  It's too specialized a
> solution which is likely to bite our asses down the road.

Well, I'm hoping I figure out the right way to convey what I'm trying to 
do, because I don't _think_ it's actually as specialized as it sounds. 
But as far as keeping the code sane, I agree with you there.
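
For what it's worth, the move_work_to_workqueue() idea would have looked 
something like this from the caller's side - purely hypothetical API, 
with my_unbound_wq being a second workqueue that doesn't have the 
max_active = 1 limit:

	/*
	 * Hypothetical: right before blocking, stop counting ourselves
	 * against my_wq's max_active = 1 by reparenting the currently
	 * executing item onto a workqueue with a higher limit.
	 */
	move_work_to_workqueue(work, my_unbound_wq);
	my_do_io_and_wait(item);
	my_complete(item);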

>> But I suspect that'd be more complicated than your way of doing it,
>> and inc()/dec() is probably just as good...
>
> So, I think it would be better to make max_active manipulation work
> somehow but again I want to stay away from being too specialized.

Yeah, it'll work. Does what I'm trying to do make any sense now?
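
For reference, something along these lines would be enough on my end - 
just a sketch, wrapping the existing workqueue_set_max_active() in the 
outer mutex you described (and it adjusts the whole workqueue, not just 
the local CPU's copy):

static DEFINE_MUTEX(my_max_active_mutex);
static int my_max_active = 1;

static void my_wq_inc_max_active(struct workqueue_struct *wq)
{
	/*
	 * workqueue_set_max_active() takes an absolute value and there's
	 * no way to read the current limit back, so track it here and
	 * serialize updates with a mutex.
	 */
	mutex_lock(&my_max_active_mutex);
	workqueue_set_max_active(wq, ++my_max_active);
	mutex_unlock(&my_max_active_mutex);
}

static void my_wq_dec_max_active(struct workqueue_struct *wq)
{
	mutex_lock(&my_max_active_mutex);
	workqueue_set_max_active(wq, --my_max_active);
	mutex_unlock(&my_max_active_mutex);
}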
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
