Date:	Fri, 28 May 2010 18:08:30 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Oleg Nesterov <oleg@...hat.com>,
	Sridhar Samudrala <sri@...ibm.com>,
	netdev <netdev@...r.kernel.org>,
	lkml <linux-kernel@...r.kernel.org>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dmitri Vorobiev <dmitri.vorobiev@...ial.com>,
	Jiri Kosina <jkosina@...e.cz>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH 2/3] workqueue: Add an API to create a singlethread
	workqueue attached to the current task's cgroup

On Thu, May 27, 2010 at 11:20:22PM +0200, Tejun Heo wrote:
> Hello, Michael.
> 
> On 05/27/2010 07:32 PM, Michael S. Tsirkin wrote:
> > Well, this is why I proposed adding a new API for creating a
> > workqueue within workqueue.c, rather than exposing the task
> > and attaching it to cgroups in our driver: so that the workqueue
> > maintainers can fix the implementation if it ever changes.
> > 
> > And after all, it's an internal API; we can always change
> > it later if we need to.
> ...
> > Well, yes, but we are using APIs like flush_work etc.  These are very
> > handy.  It seems much easier than rolling our own queue on top of kthread.
> 
> The thing is that this kind of one-off usage becomes problematic when
> you're trying to change the implementation details.  All current
> workqueue users don't care which thread they run on, and they shouldn't,
> as each work owns the context only for the duration the work is
> executing.  If this sort of fundamental guideline is followed, the
> implementation can be improved in a pretty much transparent way, but when
> you start depending on specific implementation details, things become
> messy pretty quickly.
> 
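> To illustrate with a minimal sketch (the function and variable names
> here are made up), a typical user does no more than the following,
> and nothing in it assumes a particular thread, cgroup or priority:
> 
> 	static void my_work_fn(struct work_struct *work)
> 	{
> 		/* runs on whichever worker thread the workqueue picks */
> 	}
> 
> 	static DECLARE_WORK(my_work, my_work_fn);
> 
> 	/* somewhere in the driver: */
> 	schedule_work(&my_work);	/* queue the work item */
> 	flush_work(&my_work);		/* wait until it has finished running */
> 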
> If this type of usage were more common, adding a proper way to account
> work usage according to cgroups would make sense, but that's not the
> case here: I removed the only exception case recently while trying
> to implement cmwq, so if this were added, it would be the only
> user in the whole kernel making such extra assumptions.  One way
> or the other, workqueue needs to be improved, and I don't really think
> adding a single exception at this point is a good idea.
> 
> The thing I realized after the stop_machine conversion was that there was
> no reason to use workqueue there at all.  There already are more than
> enough not-too-difficult synchronization constructs, and if you're
> using a thread for dedicated purposes, code complexity isn't that
> different either way.  Plus, it would also be clearer why dedicated
> threads are required there.  So, I strongly suggest
> using a kthread.  If there are issues which are noticeably difficult
> to solve with a kthread, we can definitely talk about that and think
> about things again.
> 
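> A rough sketch of that direction, purely for illustration (error
> handling omitted, names made up):
> 
> 	static int my_worker(void *unused)
> 	{
> 		for (;;) {
> 			set_current_state(TASK_INTERRUPTIBLE);
> 			if (kthread_should_stop()) {
> 				__set_current_state(TASK_RUNNING);
> 				break;
> 			}
> 			/* run items off a driver-private list here,
> 			   then sleep until more work is queued */
> 			schedule();
> 		}
> 		return 0;
> 	}
> 
> 	/* in the setup path; the caller can attach the task to any
> 	   cgroup before waking it up */
> 	struct task_struct *task = kthread_create(my_worker, NULL, "my-worker");
> 	wake_up_process(task);
> 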
> Thank you.

Well, we have create_singlethread_workqueue, right?
This is not very different ... is it?
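
Something like the below already dedicates one kthread to the driver
(just to illustrate; error handling omitted):

	/* 'work' declared elsewhere with DECLARE_WORK(work, work_fn) */
	struct workqueue_struct *wq;

	wq = create_singlethread_workqueue("vhost");
	queue_work(wq, &work);	/* always runs on wq's single thread */
	flush_workqueue(wq);	/* wait for everything queued so far */
	destroy_workqueue(wq);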

Just copying the structures and code from workqueue.c and adding a
vhost_ prefix in front would definitely work: there is nothing magic
about the workqueue library.  But that is pure cut and paste, which is
best avoided.  One final idea before we go the cut-and-paste route:
how about a 'create_workqueue_from_task' that would get a thread and
have the workqueue run there?
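
Something along these lines (just a sketch of the idea; the name and
signature are of course up for discussion):

	/*
	 * Hypothetical: run a single-threaded workqueue on a task
	 * supplied by the caller, e.g. a kthread the driver has
	 * already attached to the right cgroup.
	 */
	struct workqueue_struct *create_workqueue_from_task(struct task_struct *task);

vhost would then create and cgroup-attach the thread itself, but keep
using flush_work and friends.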

> -- 
> tejun
