Message-ID: <20100601101703.GB9178@redhat.com>
Date: Tue, 1 Jun 2010 13:17:03 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: Oleg Nesterov <oleg@...hat.com>,
Sridhar Samudrala <sri@...ibm.com>,
netdev <netdev@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dmitri Vorobiev <dmitri.vorobiev@...ial.com>,
Jiri Kosina <jkosina@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH 3/3] vhost: apply cpumask and cgroup to vhost workers
On Tue, Jun 01, 2010 at 11:35:15AM +0200, Tejun Heo wrote:
> Apply the cpumask and cgroup of the initializing task to the created
> vhost worker.
>
> Based on Sridhar Samudrala's patch. Li Zefan spotted a bug in error
> path (twice), fixed (twice).
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Cc: Michael S. Tsirkin <mst@...hat.com>
> Cc: Sridhar Samudrala <samudrala.sridhar@...il.com>
> Cc: Li Zefan <lizf@...fujitsu.com>
Something I wanted to figure out: what happens if the CPU mask
limits us to a certain CPU that subsequently goes offline?
Will e.g. a flush block forever, or only until that CPU comes back?
Also, does a single-threaded workqueue behave in the same way?
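
To make the concern concrete, here is a minimal illustrative sketch
(not part of the patch; the helper name is made up, the cpumask API is
the stock one): if the mask copied from the owner contains no online
CPU, the worker can never be scheduled, so anything waiting on it
blocks until a CPU in the mask comes back.

#include <linux/cpumask.h>

/*
 * Illustrative only, not in the patch: check whether the mask we are
 * about to pin the worker to has at least one online CPU.  Note this
 * is inherently racy against CPU hotplug - a CPU can go offline right
 * after the check - which is exactly the case asked about above.
 */
static bool mask_has_online_cpu(const struct cpumask *mask)
{
	return cpumask_intersects(mask, cpu_online_mask);
}
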
> ---
> drivers/vhost/vhost.c | 34 ++++++++++++++++++++++++++++++----
> 1 file changed, 30 insertions(+), 4 deletions(-)
>
> Index: work/drivers/vhost/vhost.c
> ===================================================================
> --- work.orig/drivers/vhost/vhost.c
> +++ work/drivers/vhost/vhost.c
> @@ -23,6 +23,7 @@
> #include <linux/highmem.h>
> #include <linux/slab.h>
> #include <linux/kthread.h>
> +#include <linux/cgroup.h>
>
> #include <linux/net.h>
> #include <linux/if_packet.h>
> @@ -187,11 +188,29 @@ long vhost_dev_init(struct vhost_dev *de
> struct vhost_virtqueue *vqs, int nvqs)
> {
> struct task_struct *worker;
> - int i;
> + cpumask_var_t mask;
> + int i, ret = -ENOMEM;
> +
> + if (!alloc_cpumask_var(&mask, GFP_KERNEL))
> + goto out_free_mask;
>
> worker = kthread_create(vhost_worker, dev, "vhost-%d", current->pid);
> - if (IS_ERR(worker))
> - return PTR_ERR(worker);
> + if (IS_ERR(worker)) {
> + ret = PTR_ERR(worker);
> + goto out_free_mask;
> + }
> +
> + ret = sched_getaffinity(current->pid, mask);
> + if (ret)
> + goto out_stop_worker;
> +
> + ret = sched_setaffinity(worker->pid, mask);
> + if (ret)
> + goto out_stop_worker;
> +
> + ret = cgroup_attach_task_current_cg(worker);
> + if (ret)
> + goto out_stop_worker;
>
> dev->vqs = vqs;
> dev->nvqs = nvqs;
> @@ -214,7 +233,14 @@ long vhost_dev_init(struct vhost_dev *de
> }
>
> wake_up_process(worker); /* avoid contributing to loadavg */
> - return 0;
> + ret = 0;
> + goto out_free_mask;
> +
> +out_stop_worker:
> + kthread_stop(worker);
> +out_free_mask:
> + free_cpumask_var(mask);
> + return ret;
> }
>
> /* Caller should have device mutex */
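
For readability, here is the flow the hunks above end up with,
restated as a stand-alone sketch (same calls as in the patch;
the function name is made up, vhost_worker() is the existing thread
function in vhost.c, and the error handling is condensed):

#include <linux/cgroup.h>
#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

/*
 * Condensed restatement of the patch (sketch only): create the worker,
 * copy the caller's CPU affinity and cgroup onto it, and undo with
 * kthread_stop() if any step fails.
 */
static struct task_struct *create_inherited_worker(struct vhost_dev *dev)
{
	struct task_struct *worker;
	cpumask_var_t mask;
	int ret;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return ERR_PTR(-ENOMEM);

	worker = kthread_create(vhost_worker, dev, "vhost-%d", current->pid);
	if (IS_ERR(worker)) {
		free_cpumask_var(mask);
		return worker;
	}

	ret = sched_getaffinity(current->pid, mask);		/* caller's mask */
	if (!ret)
		ret = sched_setaffinity(worker->pid, mask);	/* pin worker */
	if (!ret)
		ret = cgroup_attach_task_current_cg(worker);	/* same cgroup */

	free_cpumask_var(mask);
	if (ret) {
		kthread_stop(worker);	/* worker never woken, so safe here */
		return ERR_PTR(ret);
	}
	return worker;			/* caller still does wake_up_process() */
}
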