Date:	Sun, 3 Apr 2016 13:43:46 +0300
From:	"Michael Rapoport" <RAPOPORT@...ibm.com>
To:	Bandan Das <bsd@...hat.com>
Cc:	jiangshanlai@...il.com, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, mst@...hat.com,
	Tejun Heo <tj@...nel.org>
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues

Hi Bandan,

>  Bandan Das <bsd@...hat.com> wrote on 03/31/2016 09:45:43 PM:
> >
> >> > > opportunity for optimization, at least for some workloads...
> >> > 
> >> > What sort of optimizations are we talking about?
> >> 
> >> Well, if we take Elvis [1] as the theoretical base, there could be
> >> a benefit to doing I/O scheduling inside vhost.
> >
> > Yeah, if that actually is beneficial, take full control of the
> > kworker thread.
> 
> Well, even if it actually is beneficial (which I am sure it is), it seems
> a little impractical to block current improvements based on a future
> prospect that (as far as I know) no one is working on?

I'm not suggesting we block current improvements based on a future
prospect. But, unfortunately, the results you've posted show a regression
rather than an improvement.

And I thought you were working on comparing different approaches to vhost
threading, such as workqueues and a shared vhost thread [1] ;-)
Anyway, I'm working on this in the background, and, frankly, I cannot say
I have a clear vision of the best route yet.
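
To make the comparison concrete, here is a rough sketch of the two models
as I see them. This is illustrative pseudo-kernel code only: vhost_worker_fn,
vhost_submit, vhost_wq and struct vhost_work_item are invented names, not
the actual drivers/vhost code.

#include <linux/kthread.h>
#include <linux/workqueue.h>
#include <linux/sched.h>

/* Model A: a dedicated kthread per vhost device, roughly what vhost
 * does today. The device owns the thread, so it could also own the
 * I/O scheduling policy for everything queued to it.
 */
static int vhost_worker_fn(void *data)
{
	while (!kthread_should_stop()) {
		/* dequeue and run this device's pending work, then sleep */
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
	return 0;
}

/* Model B: hand the work off to an unbound workqueue. Less code in
 * vhost, but placement and scheduling decisions now belong to the
 * workqueue core, which is what makes cgroup awareness interesting.
 */
static struct workqueue_struct *vhost_wq;	/* invented name */

struct vhost_work_item {
	struct work_struct work;
	void (*fn)(struct vhost_work_item *);
};

static void vhost_work_fn(struct work_struct *work)
{
	struct vhost_work_item *item =
		container_of(work, struct vhost_work_item, work);
	item->fn(item);
}

static int __init vhost_wq_init(void)
{
	vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);
	return vhost_wq ? 0 : -ENOMEM;
}

static bool vhost_submit(struct vhost_work_item *item)
{
	INIT_WORK(&item->work, vhost_work_fn);
	return queue_work(vhost_wq, &item->work);
}

The point of the sketch is only where the scheduling responsibility lands:
inside vhost in model A, inside the workqueue core in model B.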
 
> There have been discussions about this in the past and, iirc, most people
> agree about not going the byos* route. But I am still all for such a
> proposal, and if it's good/clean enough, I think we can definitely tear
> down what we have and throw it away! The I/O scheduling part is intrusive
> enough that even the current code base has to be changed quite a bit.

The "byos" route seems more promising with respect to possible performance 
gains, but it will definitely add complexity, and I cannot say if the 
added complexity will be worth performance improvements.
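
Just to illustrate the complexity trade-off I mean: a "byos" worker would
have to carry its own dispatch policy. A minimal sketch, with all names
invented for the example and a naive round-robin over devices standing in
for whatever policy a real implementation would need:

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/sched.h>

struct byos_dev {
	struct list_head node;		/* position in the round-robin list */
	struct list_head pending;	/* this device's queued work */
};

static LIST_HEAD(byos_devs);
static DEFINE_SPINLOCK(byos_lock);

static int byos_worker(void *unused)
{
	while (!kthread_should_stop()) {
		struct byos_dev *dev;

		spin_lock(&byos_lock);
		/* The policy lives here: rotate so every device gets a
		 * turn, instead of plain FIFO across all queued work.
		 */
		dev = list_first_entry_or_null(&byos_devs,
					       struct byos_dev, node);
		if (dev)
			list_rotate_left(&byos_devs);
		spin_unlock(&byos_lock);

		if (dev) {
			/* run a bounded batch from dev->pending here
			 * (locking and the actual work loop elided) */
		} else {
			/* sleep until work is queued (wakeup path elided) */
			set_current_state(TASK_INTERRUPTIBLE);
			schedule();
		}
	}
	return 0;
}

Even this toy version has to answer questions (batch sizes, fairness,
wakeup latency) that the workqueue core currently answers for us.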

Meanwhile, I'd suggest we first understand what causes the regression with
your current patches; maybe then we'll be in a better position to pick the
right direction. :)
 
> *byos = bring your own scheduling ;)
> 
> > Thanks.

--
Sincerely yours,
Mike.

[1] https://lwn.net/Articles/650857/ 

