Date:	Mon, 21 Mar 2016 13:43:41 -0400
From:	Bandan Das <bsd@...hat.com>
To:	"Michael Rapoport" <RAPOPORT@...ibm.com>
Cc:	tj@...nel.org, linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	mst@...hat.com, jiangshanlai@...il.com
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues

"Michael Rapoport" <RAPOPORT@...ibm.com> writes:

> Hi Bandan,
>
>> From: Bandan Das <bsd@...hat.com>
>> 
>> At Linuxcon last year, based on our presentation "vhost: sharing is
>> better" [1], we had briefly discussed the idea of cgroup-aware
>> workqueues with Tejun. The following patches are a result of that
>> discussion. They are in no way complete, in that the changes are for
>> unbound workqueues only, but I just wanted to present my unfinished
>> work as RFC and get some feedback.
>> 
>> 1/4 and 3/4 are simple cgroup changes and add a helper function.
>> 2/4 is the main implementation.
>> 4/4 changes vhost to use workqueues with support for cgroups.
>>
>> Example:
>> vhost creates a worker thread when invoked for a kvm guest. Since
>> the guest is a normal process, the kernel thread servicing it should
>> be attached to the vm process' cgroups.
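
(For context, a trimmed sketch of the current model these patches replace,
simplified from drivers/vhost/vhost.c: error handling is dropped, and the
real code performs the cgroup attach from the worker's own context.)

	struct task_struct *worker;
	int err;

	/* One dedicated kthread per vhost device. */
	worker = kthread_create(vhost_worker, dev, "vhost-%d", current->pid);
	if (IS_ERR(worker))
		return PTR_ERR(worker);

	dev->worker = worker;
	wake_up_process(worker);

	/*
	 * Pull the new kthread into every cgroup of the owner (the VM
	 * process), so its CPU time is accounted to the guest.
	 */
	err = cgroup_attach_task_all(current, worker);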
>
> I did some performance evaluation of different threading models in vhost,
> and in most tests replacing vhost kthreads with workqueues degrades the

Workqueues use kthread_create internally, and if calling one over the
other impacts performance, I think we should investigate that. Which
patches did you use? Note that an earlier version of the workqueue patches
I posted used per-cpu workqueues.
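
(For reference, the threads backing a workqueue come out of the same
kthread machinery; trimmed from create_worker() in kernel/workqueue.c:)

	worker->task = kthread_create_on_node(worker_thread, worker,
					      pool->node, "kworker/%s",
					      id_buf);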

> performance. Moreover, having thread management inside vhost provides

What exactly is the advantage of doing our own thread management? Do you
have any examples? (Besides doing our own scheduling, like in the original
Elvis paper, which I don't think is going to happen.) Also, note that we
can still affect how our work gets executed by passing optional flags to
alloc_workqueue(), so all is not lost.
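
A minimal sketch, assuming an unbound queue (the flag names are the
mainline API; the queue name and tuning values here are illustrative):

	#include <linux/workqueue.h>

	static struct workqueue_struct *vhost_wq;

	static int vhost_wq_setup(void)
	{
		/*
		 * WQ_UNBOUND: workers are not pinned to the submitting CPU,
		 *             and their attributes (cpumask, nice level)
		 *             can be tuned.
		 * WQ_SYSFS:   expose those attributes under
		 *             /sys/devices/virtual/workqueue/.
		 * max_active: cap on concurrently executing work items
		 *             (0 selects the default).
		 */
		vhost_wq = alloc_workqueue("vhost-example",
					   WQ_UNBOUND | WQ_SYSFS, 0);
		return vhost_wq ? 0 : -ENOMEM;
	}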

> opportunity for optimization, at least for some workloads...
> That said, I believe that switching vhost to use workqueues is not that
> good an idea after all.
>  
>> Netperf:
>> Two guests running netperf in parallel.
>>
>>                                  Without patches    With patches
>> TCP_STREAM (10^6 bits/second)         975.45            978.88
>> TCP_RR (Trans/second)               20121             18820.82
>> UDP_STREAM (10^6 bits/second)        1287.82            1184.5
>> UDP_RR (Trans/second)               20766.72           19667.08
>> Time to download a 4G iso           2m 33s             3m 02s
>
> --
> Sincerely yours,
> Mike.
