Message-ID: <487CDC27.7000304@qualcomm.com>
Date:	Tue, 15 Jul 2008 10:19:35 -0700
From:	Max Krasnyansky <maxk@...lcomm.com>
To:	Paul Menage <menage@...gle.com>
CC:	Paul Jackson <pj@....com>, mingo@...e.hu,
	linux-kernel@...r.kernel.org, a.p.zijlstra@...llo.nl
Subject: Re: [PATCH] cpuset: Make rebuild_sched_domains() usable from any
 context

Paul Menage wrote:
> On Tue, Jul 15, 2008 at 9:07 AM, Paul Jackson <pj@....com> wrote:
>> If this rebuild thread was the -only- way that sched domains were
>> allowed to be rebuilt, and if this rebuild was done -asynchronously-
>> sometime shortly after requested, without any error or status feedback,
>> then it would seem to simplify the locking issues.
> 
> I sent a similar patch a couple of weeks ago that used a workqueue to
> do the rebuild. It didn't quite work at the time, since it wasn't safe
> to call get_online_cpus() from a multi-threaded workqueue, but I
> believe a patch has since gone in that makes this safe. And if not, we
> could always use a single-threaded workqueue that isn't bound to any
> particular CPU.

Actually I don't think we have to make it a super-strict "only rebuild 
from that thread" rule. I'd off-load only cpuset_write64() and 
update_flag() to the thread. It'd be nice to keep the hotplug path 
cleanly synchronous: it's synchronous without cpusets, so there's really 
no good reason for it to become async with them. And the toughest part 
is not even hotplug, where the lock nesting is pretty clear:
get_online_cpus() ->
	rebuild_sched_domains() ->
		cgroup_lock();
		// Build cpumaps
			cpuset_callback_lock();
			...
			cpuset_callback_unlock();
		cgroup_unlock();
		
		partition_sched_domains() ->
			mutex_lock(&sched_domains_mutex);
			// Rebuild sched domains
			mutex_unlock(&sched_domains_mutex);
put_online_cpus()

It's the other paths that are the problem, where cgroup_lock() is taken 
by the cgroup core before it even calls into cpusets, like the cgroup 
destroy case. So I think we should just off-load those, along the lines 
of the sketch below.
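
A sketch of what off-loading one of those paths might look like, with the
same caveats as above (the real update_flag() has a different signature and
does more; cpuset_callback_lock()/unlock() are the placeholder names from
the nesting sketch, and async_rebuild_sched_domains() is the hypothetical
helper from the earlier snippet):

static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
		       int turning_on)
{
	/* cgroup_lock() is already held by the cgroup core here. */
	cpuset_callback_lock();
	/* ... flip the flag bit in cs->flags ... */
	cpuset_callback_unlock();

	/*
	 * Rebuilding inline would require get_online_cpus() while
	 * holding cgroup_lock(), the inverse of the hotplug nesting
	 * shown above.  Defer the rebuild to the workqueue instead.
	 */
	async_rebuild_sched_domains();
	return 0;
}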

Max
