Message-ID: <20100726155014.GA26412@redhat.com>
Date: Mon, 26 Jul 2010 18:50:15 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: Oleg Nesterov <oleg@...hat.com>,
Sridhar Samudrala <sri@...ibm.com>,
netdev <netdev@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dmitri Vorobiev <dmitri.vorobiev@...ial.com>,
Jiri Kosina <jkosina@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH UPDATED 1/3] vhost: replace vhost_workqueue with
per-vhost kthread
On Mon, Jul 26, 2010 at 05:34:44PM +0200, Tejun Heo wrote:
> Hello,
>
> On 07/26/2010 05:25 PM, Michael S. Tsirkin wrote:
> > BTW, kthread_worker would benefit from the optimization I implemented
> > here as well.
>
> Hmmm... I'm not quite sure whether it's an optimization. I thought
> the patch was due to feeling uncomfortable about using barriers?
Oh yes. Getting rid of the barriers is what motivated me originally.
> Is it an optimization?
>
> Thanks.
Yes, sure. This removes an atomic read and two barrier operations on the data
path, and it does not add any new synchronization: instead, we reuse the lock
that we take anyway. The relevant part is:
+ if (work) {
+ __set_current_state(TASK_RUNNING);
+ work->fn(work);
+ } else
+ schedule();
- if (work) {
- __set_current_state(TASK_RUNNING);
- work->fn(work);
- smp_wmb(); /* wmb worker-b0 paired with flush-b1 */
- work->done_seq = work->queue_seq;
- smp_mb(); /* mb worker-b1 paired with flush-b0 */
- if (atomic_read(&work->flushing))
- wake_up_all(&work->done);
- } else
- schedule();
-
- goto repeat;
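
To spell out what "reuse the lock" means, here is a rough sketch of the
scheme (not a quote from the patch: the struct layout and names such as
work_lock, work_list and the function names are assumptions for
illustration; only queue_seq/done_seq/flushing appear in the hunk above).
The worker reports completion of the previous item under the same spinlock
it takes to pick up the next one, and the flusher re-checks done_seq under
that lock, so lock ordering replaces the smp_wmb/smp_mb pairing and the
atomic_read on ->flushing:

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct vhost_work;
typedef void (*vhost_work_fn_t)(struct vhost_work *work);

struct vhost_work {
	struct list_head node;		/* on dev->work_list while queued */
	vhost_work_fn_t fn;
	wait_queue_head_t done;		/* flushers sleep here */
	int flushing;			/* number of waiting flushers */
	unsigned queue_seq;		/* bumped by the queueing side */
	unsigned done_seq;		/* last queue_seq seen completed */
};

struct vhost_dev {
	spinlock_t work_lock;
	struct list_head work_list;
	/* ... */
};

/* Worker: completion of the previous item is published while holding
 * work_lock, so the flusher below sees it without extra barriers. */
static int vhost_worker(void *data)
{
	struct vhost_dev *dev = data;
	struct vhost_work *work = NULL;
	unsigned seq = 0;

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);

		spin_lock_irq(&dev->work_lock);
		if (work) {
			work->done_seq = seq;
			if (work->flushing)
				wake_up_all(&work->done);
		}
		if (kthread_should_stop()) {
			spin_unlock_irq(&dev->work_lock);
			__set_current_state(TASK_RUNNING);
			return 0;
		}
		if (!list_empty(&dev->work_list)) {
			work = list_first_entry(&dev->work_list,
						struct vhost_work, node);
			list_del_init(&work->node);
			seq = work->queue_seq;
		} else {
			work = NULL;
		}
		spin_unlock_irq(&dev->work_lock);

		if (work) {
			__set_current_state(TASK_RUNNING);
			work->fn(work);
		} else {
			schedule();
		}
	}
}

/* Flusher: wait until done_seq catches up with the queue_seq sampled
 * under the lock; every re-check retakes the same lock, no atomics. */
static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
{
	unsigned seq;

	spin_lock_irq(&dev->work_lock);
	seq = work->queue_seq;
	work->flushing++;
	spin_unlock_irq(&dev->work_lock);

	wait_event(work->done, ({
		int done;
		spin_lock_irq(&dev->work_lock);
		done = (int)(work->done_seq - seq) >= 0;
		spin_unlock_irq(&dev->work_lock);
		done;
	}));

	spin_lock_irq(&dev->work_lock);
	work->flushing--;
	spin_unlock_irq(&dev->work_lock);
}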
Is there a git tree with kthread_worker applied?
I might do this just for fun ...
> --
> tejun