Message-ID: <4C4DE2AE.40302@kernel.org>
Date: Mon, 26 Jul 2010 21:31:58 +0200
From: Tejun Heo <tj@...nel.org>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: Oleg Nesterov <oleg@...hat.com>,
Sridhar Samudrala <sri@...ibm.com>,
netdev <netdev@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dmitri Vorobiev <dmitri.vorobiev@...ial.com>,
Jiri Kosina <jkosina@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH UPDATED 1/3] vhost: replace vhost_workqueue with per-vhost
kthread

Hello,

On 07/26/2010 09:14 PM, Tejun Heo wrote:
> On 07/26/2010 06:51 PM, Michael S. Tsirkin wrote:
>> I noticed that with vhost, flush_work was getting the worker
>> pointer as well. Can we live with this API change?
>
> Yeah, the flushing mechanism wouldn't work reliably if the work is
> queued to a different worker without being flushed first, so passing
> in @worker might actually be better.
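
Concretely, the interface change would look something like the below.
This is only a sketch: the @worker-taking prototype and the
example_teardown() caller are assumptions for illustration, not the
patch itself.

void flush_kthread_work(struct kthread_worker *worker,
			struct kthread_work *work);

/* an illustrative caller which owns both objects */
static void example_teardown(struct kthread_worker *worker,
			     struct kthread_work *work)
{
	/* wait until @work has finished executing on @worker */
	flush_kthread_work(worker, work);
}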

Thinking a bit more about it, it kind of sucks that queueing to
another worker from work->func() breaks flush.  Maybe the right thing
to do there is to use an atomic_t for done_seq?  It costs a bit more
overhead, but maybe that's justifiable to keep the API saner?  It
would be great if this could be fixed somehow, even if it means that
the work has to be flushed separately for each worker it has been on
before being destroyed.
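
Roughly what I mean, as a sketch only (field names are assumptions
and the memory barriers are omitted): the flusher samples queue_seq
and waits for done_seq to catch up, and making done_seq atomic is
what would let different workers retire the same work without
sharing the queueing worker's lock.

struct kthread_work {
	struct list_head	node;
	kthread_work_func_t	func;
	wait_queue_head_t	done;
	int			queue_seq;	/* bumped when queued, under the worker's lock */
	atomic_t		done_seq;	/* bumped by whichever worker ran the work */
};

static void flush_kthread_work_sketch(struct kthread_work *work)
{
	int seq = work->queue_seq;

	/* seq - done <= 0 copes with the counters wrapping around */
	wait_event(work->done,
		   seq - atomic_read(&work->done_seq) <= 0);
}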

Or, if flushing has to be associated with a specific worker anyway,
maybe it would be better to move the sequence counter into
kthread_worker and handle it similarly to the original workqueue, so
that a work item can be destroyed once its execution starts?  Then it
would at least remain semantically identical to the original
workqueue.
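
Something along these lines, again only as a sketch (the worker-side
fields and the wait queue are assumptions): the counters live in
kthread_worker, the worker thread bumps done_seq and wakes flush_wait
after each work->func() returns, and the flusher waits only on the
worker, so the work struct itself can be freed as soon as its
function has started.  The price is that flushing a single work
degenerates into waiting for everything queued before it.

struct kthread_worker {
	spinlock_t		lock;
	struct list_head	work_list;
	struct task_struct	*task;
	unsigned int		queue_seq;	/* bumped when a work is queued */
	unsigned int		done_seq;	/* bumped after work->func() returns */
	wait_queue_head_t	flush_wait;
};

/* wait until everything queued before this call has finished executing */
static void flush_kthread_worker_sketch(struct kthread_worker *worker)
{
	unsigned int seq;

	spin_lock_irq(&worker->lock);
	seq = worker->queue_seq;
	spin_unlock_irq(&worker->lock);

	wait_event(worker->flush_wait,
		   (int)(seq - worker->done_seq) <= 0);
}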

Thanks.
--
tejun