Message-ID: <20080217202739.GA2994@ami.dom.local>
Date:	Sun, 17 Feb 2008 21:27:39 +0100
From:	Jarek Poplawski <jarkao2@...il.com>
To:	Oleg Nesterov <oleg@...sign.ru>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Dipankar Sarma <dipankar@...ibm.com>,
	Gautham R Shenoy <ego@...ibm.com>,
	Jarek Poplawski <jarkao2@...pl>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] workqueues: shrink cpu_populated_map when CPU dies

Hi Oleg,

This patch looks OK to me. But while reading it I got some doubts about
the nearby code, so, BTW, two small questions:

1) ... workqueue_cpu_callback(...)
{
	...
        list_for_each_entry(wq, &workqueues, list) {
                cwq = per_cpu_ptr(wq->cpu_wq, cpu);

                switch (action) {
                case CPU_UP_PREPARE:
		...

It looks like not all CPU_ cases are handled here: shouldn't the
list_for_each_entry() walk be skipped for the unhandled ones?
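
Just to be concrete, I mean something like this (a rough sketch only, not
a tested patch; the list of CPU_ actions below is my assumption about what
this callback actually handles):

        /* Return early for actions we do not handle, so the
         * workqueues list is not walked for nothing. */
        switch (action) {
        case CPU_UP_PREPARE:
        case CPU_UP_CANCELED:
        case CPU_ONLINE:
        case CPU_DEAD:
                break;
        default:
                return NOTIFY_OK;
        }

        list_for_each_entry(wq, &workqueues, list) {
                cwq = per_cpu_ptr(wq->cpu_wq, cpu);
                /* per-action handling as before */
		...
        }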

2) ... __create_workqueue_key(...)
{
	...
        if (singlethread) {
		...
        } else {
                get_online_cpus();
                spin_lock(&workqueue_lock);
                list_add(&wq->list, &workqueues);

Shouldn't this list_add() be done after all the initialization below
(see the sketch after this snippet)?

                spin_unlock(&workqueue_lock);

                for_each_possible_cpu(cpu) {
                        cwq = init_cpu_workqueue(wq, cpu);
			...
                }
		...
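
IOW, I would rather expect something like the ordering below (a sketch
only; the loop body is just my reading of the current code, and error
handling is simplified):

        } else {
                get_online_cpus();

                for_each_possible_cpu(cpu) {
                        cwq = init_cpu_workqueue(wq, cpu);
                        if (err || !cpu_online(cpu))
                                continue;
                        err = create_workqueue_thread(cwq, cpu);
                        start_workqueue_thread(cwq, cpu);
                }

                /* publish the wq only after its cwqs are initialized */
                spin_lock(&workqueue_lock);
                list_add(&wq->list, &workqueues);
                spin_unlock(&workqueue_lock);

                put_online_cpus();
        }
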
Thanks,
Jarek P.

On Sat, Feb 16, 2008 at 08:22:59PM +0300, Oleg Nesterov wrote:
> When cpu_populated_map was introduced, it was assumed that cwq->thread could
> survive after CPU_DEAD; that is why we never shrink cpu_populated_map.
> 
> This is not very nice: we can safely remove an already dead CPU from the map.
> The only required change is that destroy_workqueue() must hold the hotplug lock
> until it has destroyed all cwq->thread's, to protect cpu_populated_map. We could
> make a local copy of the cpu mask and drop the lock, but sizeof(cpumask_t) may be
> very large.
> 
> Also, fix the comment near queue_work(). Unless _cpu_down() happens, we do
> guarantee the CPU affinity of the work_struct, and we have users that rely on
> this.
> 
> Signed-off-by: Oleg Nesterov <oleg@...sign.ru>
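
For reference, here is my reading of the destroy_workqueue() ordering the
changelog describes (a sketch only; the helper names are taken from the
current workqueue code, and details are abbreviated):

void destroy_workqueue(struct workqueue_struct *wq)
{
        const cpumask_t *cpu_map = wq_cpu_map(wq);
        int cpu;

        get_online_cpus();              /* block CPU hotplug ... */
        spin_lock(&workqueue_lock);
        list_del(&wq->list);
        spin_unlock(&workqueue_lock);

        for_each_cpu_mask(cpu, *cpu_map)
                cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu), cpu);

        /* ... until every cwq->thread is destroyed, so that
         * cpu_populated_map cannot change under us */
        put_online_cpus();

        free_percpu(wq->cpu_wq);
        kfree(wq);
}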
