Date:	Sun, 3 Apr 2016 13:43:45 +0300
From:	"Michael Rapoport" <RAPOPORT@...ibm.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Bandan Das <bsd@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, mst@...hat.com, jiangshanlai@...il.com
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues

Hi Tejun,

> Tejun Heo <htejun@...il.com> wrote on 03/31/2016 08:14:35 PM:
> 
> Hello, Michael.
> 
> On Thu, Mar 31, 2016 at 08:17:13AM +0200, Michael Rapoport wrote:
> > > There really shouldn't be any difference when using unbound
> > > workqueues.  workqueue becomes a convenience thing which manages
> > > worker pools and there shouldn't be any difference between workqueue
> > > workers and kthreads in terms of behavior.
> > 
> > I agree that there really shouldn't be any performance difference, but
> > the tests I've run show otherwise. I have no idea why and I haven't had
> > time yet to investigate it.
> 
> I'd be happy to help digging into what's going on.  If kvm wants full
> control over the worker thread, kvm can use workqueue as a pure
> threadpool.  Schedule a work item to grab a worker thread with the
> matching attributes and keep using it as if it were a kthread.  While that
> wouldn't be able to take advantage of work item flushing and so on,
> it'd still be a simpler way to manage worker threads and the extra
> stuff like cgroup membership handling doesn't have to be duplicated.
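
Just to make sure I follow, a rough sketch of the "pure threadpool" usage
you describe might look like the below (the device structure, the request
queue and all the names are made up for illustration, this is not actual
vhost code):

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/wait.h>
#include <linux/llist.h>

struct my_dev {
	struct work_struct work;	/* single long-running work item */
	struct llist_head queue;	/* the driver's own request queue */
	wait_queue_head_t wait;
	bool stop;
};

static void my_work_fn(struct work_struct *work)
{
	struct my_dev *dev = container_of(work, struct my_dev, work);

	/*
	 * Behaves like a kthread main loop, but runs on a workqueue
	 * worker whose attributes are managed by the workqueue code
	 * instead of by the driver.
	 */
	while (!dev->stop) {
		struct llist_node *reqs;

		wait_event(dev->wait,
			   !llist_empty(&dev->queue) || dev->stop);

		reqs = llist_del_all(&dev->queue);
		/* ... process the requests on 'reqs' ... */
	}
}

/* Called from e.g. the virtqueue kick path to hand work to the worker. */
static void my_dev_queue(struct my_dev *dev, struct llist_node *req)
{
	llist_add(req, &dev->queue);
	wake_up(&dev->wait);
}

static void my_dev_start(struct my_dev *dev, struct workqueue_struct *wq)
{
	init_llist_head(&dev->queue);
	init_waitqueue_head(&dev->wait);
	dev->stop = false;

	INIT_WORK(&dev->work, my_work_fn);
	queue_work(wq, &dev->work);	/* grab a worker and keep it */
}

The workqueue itself would be unbound, e.g. alloc_workqueue("my_dev_wq",
WQ_UNBOUND, 0), so pool management, NUMA placement and (with cgroup-aware
workqueues) cgroup membership would stay inside the workqueue code.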

My concern is that we'd trade off performance for simpler management of
worker threads.
Of the three models I've tested (the current vhost model, workqueue-based
(1) and shared-threads based (2)), the workqueue-based one gave the worst
performance results :(
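
For comparison, the current vhost model essentially does the following
(heavily simplified and from memory; in the real code the cgroup attach is
done from inside the worker thread, and the names below are again made up):

#include <linux/kthread.h>
#include <linux/cgroup.h>
#include <linux/sched.h>
#include <linux/err.h>

static int my_worker_fn(void *data)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (!kthread_should_stop())
			schedule();
		__set_current_state(TASK_RUNNING);
		/* ... process pending requests for 'data' ... */
	}
	return 0;
}

static struct task_struct *start_vhost_style_worker(void *data)
{
	struct task_struct *worker;
	int err;

	worker = kthread_create(my_worker_fn, data, "my-worker-%d",
				current->pid);
	if (IS_ERR(worker))
		return worker;

	wake_up_process(worker);

	/* Manually move the new kthread into the owner's cgroups. */
	err = cgroup_attach_task_all(current, worker);
	if (err) {
		kthread_stop(worker);
		return ERR_PTR(err);
	}
	return worker;
}

This per-driver thread creation and cgroup handling is exactly the "extra
stuff" that wouldn't have to be duplicated with the workqueue-based model,
so it really comes down to the performance gap.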
 
> > > > opportunity for optimization, at least for some workloads...
> > > 
> > > What sort of optimizations are we talking about?
> > 
> > Well, if we take Elvis (1) as the theoretical base, there could be a
> > benefit in doing I/O scheduling inside vhost.
> 
> Yeah, if that actually is beneficial, take full control of the
> kworker thread.
> 
> Thanks.

[1] http://thread.gmane.org/gmane.linux.network/286858
[2] http://thread.gmane.org/gmane.linux.kernel.cgroups/13808
 
> -- 
> tejun
> 

