Message-ID: <Pine.LNX.4.64.1203081656590.31821@file.rdu.redhat.com>
Date: Thu, 8 Mar 2012 17:21:53 -0500 (EST)
From: Mikulas Patocka <mpatocka@...hat.com>
To: Mandeep Singh Baines <msb@...omium.org>
cc: linux-kernel@...r.kernel.org, dm-devel@...hat.com,
Alasdair G Kergon <agk@...hat.com>,
Will Drewry <wad@...omium.org>,
Elly Jones <ellyjones@...omium.org>,
Milan Broz <mbroz@...hat.com>,
Olof Johansson <olofj@...omium.org>,
Steffen Klassert <steffen.klassert@...unet.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
On Tue, 6 Mar 2012, Mandeep Singh Baines wrote:
> You are
> allocated a complete shash_desc per I/O. We only allocate one per CPU.
I looked at it --- and using per-cpu variables in workqueues isn't safe,
because a work item can be moved to a different CPU if the CPU it was
queued on is hot-unplugged.
dm-crypt has the same bug --- it also uses a workqueue with per-cpu
variables and assumes that the CPU doesn't change while a single work item
runs.
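
For illustration, the problematic pattern looks roughly like this (a minimal
sketch with made-up names --- struct percpu_state, unsafe_work_fn --- not the
actual dm-crypt code):

#include <linux/percpu.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

/* hypothetical per-cpu scratch state, standing in for the real per-cpu data */
struct percpu_state {
	unsigned int in_use;
};
static DEFINE_PER_CPU(struct percpu_state, state);

static void unsafe_work_fn(struct work_struct *w)
{
	/*
	 * The pattern assumes exclusive use of this CPU's state, because only
	 * one work item of this workqueue runs on each CPU at a time.
	 */
	struct percpu_state *s = this_cpu_ptr(&state);

	s->in_use = 1;

	/*
	 * The work item then blocks.  If the CPU it was queued on is
	 * hot-unplugged here, the item is migrated and resumes on another CPU.
	 * The pointer taken above no longer refers to the CPU the work is now
	 * running on, and if this handler (or code it calls) derives
	 * this_cpu_ptr() again, it gets a structure that the new CPU's own
	 * work item may be using concurrently --- the exclusivity is gone.
	 */
	msleep(100);

	s->in_use = 0;
}
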
The test module below demonstrates that a work item executing on a
workqueue can be switched to a different CPU.
I'm wondering how much other kernel code assumes that a work item stays on
the CPU it was queued on --- an assumption that no longer holds if we unplug
that CPU.
Mikulas
---
/*
 * A proof of concept that a work item executed on a workqueue may change CPU
 * when CPU hot-unplugging is used.
 *
 * Compile this as a module and run:
 *	insmod test.ko; sleep 1; echo 0 >/sys/devices/system/cpu/cpu1/online
 *
 * You see that the work item starts executing on CPU 1 and ends up executing
 * on a different CPU, usually 0.
 */

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>
#include <linux/delay.h>

static struct workqueue_struct *wq;
static struct work_struct work;

static void do_work(struct work_struct *w)
{
	printk("starting work on cpu %d\n", smp_processor_id());
	msleep(10000);	/* sleep long enough for the CPU to be unplugged */
	printk("finishing work on cpu %d\n", smp_processor_id());
}

static int __init test_init(void)
{
	printk("module init\n");
	wq = alloc_workqueue("testd", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 1);
	if (!wq) {
		printk("alloc_workqueue failed\n");
		return -ENOMEM;
	}
	INIT_WORK(&work, do_work);
	/* queue the work explicitly on CPU 1, which is then unplugged */
	queue_work_on(1, wq, &work);
	return 0;
}

static void __exit test_exit(void)
{
	destroy_workqueue(wq);
	printk("module exit\n");
}

module_init(test_init)
module_exit(test_exit)

MODULE_LICENSE("GPL");