Message-ID: <4C4BEAA2.6040301@kernel.org>
Date: Sun, 25 Jul 2010 09:41:22 +0200
From: Tejun Heo <tj@...nel.org>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: Oleg Nesterov <oleg@...hat.com>,
Sridhar Samudrala <sri@...ibm.com>,
netdev <netdev@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dmitri Vorobiev <dmitri.vorobiev@...ial.com>,
Jiri Kosina <jkosina@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH UPDATED 1/3] vhost: replace vhost_workqueue with per-vhost kthread
Hello,
On 07/24/2010 09:14 PM, Michael S. Tsirkin wrote:
>> I've created kthread_worker in wq#for-next tree and already converted
>> ivtv to use it. Once this lands in mainline, I think converting vhost
>> to use it would be a better choice. The kthread_worker code uses basically
>> the same logic used in the vhost_workqueue code but is better
>> organized and documented. So, I think it would be better to stick
>> with the original implementation, as otherwise we're likely to just
>> decrease test coverage without much gain.
>>
>> http://git.kernel.org/?p=linux/kernel/git/tj/wq.git;a=commitdiff;h=b56c0d8937e665a27d90517ee7a746d0aa05af46;hp=53c5f5ba42c194cb13dd3083ed425f2c5b1ec439
>
> Sure, if we keep using workqueue. But I'd like to investigate this
> direction a bit more because there's discussion about switching from
> kthreads to regular threads altogether.
Hmmm? It doesn't have much to do with workqueue. kthread_worker is a
simple wrapper around kthread. It now assumes kthread but changing it
to be usable with any thread shouldn't be too hard. Wouldn't that be
better?
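
For reference, the kthread_worker API in the commit linked above looks roughly like the sketch below. This is kernel-only code, not compilable in isolation, and the setup function and thread name are illustrative; the API names (init_kthread_worker(), queue_kthread_work(), flush_kthread_work(), flush_kthread_worker()) follow the 2010 naming in that commit.

```c
/* Sketch of driving a kthread_worker, based on the commit linked above.
 * Kernel-only code; function/thread names here are illustrative. */
#include <linux/kthread.h>

static struct kthread_worker worker;
static struct kthread_work work;
static struct task_struct *task;

static void my_work_fn(struct kthread_work *w)
{
	/* runs in the dedicated worker thread */
}

static int my_setup(void)
{
	init_kthread_worker(&worker);
	task = kthread_run(kthread_worker_fn, &worker, "my-worker");
	if (IS_ERR(task))
		return PTR_ERR(task);

	init_kthread_work(&work, my_work_fn);
	queue_kthread_work(&worker, &work);
	flush_kthread_work(&work);	/* wait for this one item */
	flush_kthread_worker(&worker);	/* drain all queued work */
	kthread_stop(task);
	return 0;
}
```

Because the worker is just a struct serviced by kthread_worker_fn(), the function could in principle be run from any thread, which is what makes the "usable with any thread" point above plausible.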
>> I don't think doing this before executing the function is correct,
>
> Well, before I execute the function, work is NULL, so this is skipped.
> Correct?
>
>> so
>> you'll have to release the lock, execute the function, regrab the lock
>> and then do the flush processing.
>>
>> Thanks.
>
> It's done in the loop, so I thought we could reuse the locking
> done for the sake of processing the next work item.
> Makes sense?
Yeap, right. I think it would make much more sense to use common code
when it becomes available, but if you think the posted change is
necessary till then, please feel free to go ahead.
Thanks.
--
tejun