Message-ID: <20160331171435.GD24661@htj.duckdns.org>
Date:	Thu, 31 Mar 2016 13:14:35 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Michael Rapoport <RAPOPORT@...ibm.com>
Cc:	Bandan Das <bsd@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, mst@...hat.com, jiangshanlai@...il.com
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues

Hello, Michael.

On Thu, Mar 31, 2016 at 08:17:13AM +0200, Michael Rapoport wrote:
> > There really shouldn't be any difference when using unbound
> > workqueues.  workqueue becomes a convenience thing which manages
> > worker pools and there shouldn't be any difference between workqueue
> > workers and kthreads in terms of behavior.
> 
> I agree that there really shouldn't be any performance difference, but
> the tests I've run show otherwise.  I have no idea why, and I haven't
> had time yet to investigate it.

I'd be happy to help dig into what's going on.  If kvm wants full
control over the worker thread, kvm can use workqueue as a pure
threadpool: schedule a work item to grab a worker thread with the
matching attributes and keep using it as if it were a kthread.  While
that wouldn't be able to take advantage of work item flushing and so
on, it'd still be a simpler way to manage worker threads, and the
extra stuff like cgroup membership handling wouldn't have to be
duplicated.
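
Roughly, an untested sketch (the vhost_* names here are made up for
illustration, not the actual kvm/vhost code):

#include <linux/workqueue.h>
#include <linux/sched.h>

static struct workqueue_struct *vhost_wq;
static struct work_struct vhost_work;
static bool vhost_stop;		/* hypothetical shutdown flag */

static void vhost_worker_fn(struct work_struct *work)
{
	/*
	 * Runs in a kworker that inherited the unbound workqueue's
	 * attributes (cpumask, nice level, cgroup membership).  By
	 * looping instead of returning, the work item keeps this
	 * worker to itself, so it behaves like a dedicated kthread.
	 * The trade-off is that flush_work() and friends can't be
	 * used on an item which never returns.
	 */
	while (!READ_ONCE(vhost_stop)) {
		/* ... service virtqueue requests here ... */
		cond_resched();
	}
}

static int vhost_threadpool_init(void)
{
	vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);
	if (!vhost_wq)
		return -ENOMEM;

	INIT_WORK(&vhost_work, vhost_worker_fn);
	queue_work(vhost_wq, &vhost_work);
	return 0;
}

The worker still shows up as a regular kworker, so attribute and
cgroup handling stay in workqueue code instead of being reimplemented
in kvm.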

> > > opportunity for optimization, at least for some workloads...
> > 
> > What sort of optimizations are we talking about?
> 
> Well, if we take Elvis (1) as the theoretical base, there could be a 
> benefit to doing I/O scheduling inside vhost.

Yeah, if that actually is beneficial, take full control of the
kworker thread.

Thanks.

-- 
tejun
