Message-ID: <jpgshz6cwq0.fsf@linux.bootlegged.copy>
Date:	Thu, 31 Mar 2016 14:45:43 -0400
From:	Bandan Das <bsd@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Michael Rapoport <RAPOPORT@...ibm.com>,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org, mst@...hat.com,
	jiangshanlai@...il.com
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues

Tejun Heo <tj@...nel.org> writes:

> Hello, Michael.
>
> On Thu, Mar 31, 2016 at 08:17:13AM +0200, Michael Rapoport wrote:
>> > There really shouldn't be any difference when using unbound
>> > workqueues.  The workqueue becomes a convenience layer that manages
>> > worker pools, and there shouldn't be any difference between
>> > workqueue workers and kthreads in terms of behavior.
>> 
>> I agree that there really shouldn't be any performance difference, but
>> the tests I've run show otherwise. I have no idea why, and I haven't
>> had time to investigate it yet.
>
> I'd be happy to help dig into what's going on.  If kvm wants full
> control over the worker thread, kvm can use workqueue as a pure
> threadpool.  Schedule a work item to grab a worker thread with the
> matching attributes and keep using it as if it were a kthread.  While
> that wouldn't be able to take advantage of work item flushing and so
> on, it'd still be a simpler way to manage worker threads, and the
> extra stuff like cgroup membership handling wouldn't have to be
> duplicated.
>
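
A minimal sketch of the pure-threadpool pattern described above, for
illustration only. Every name here (pool_worker, pool_worker_fn,
"vhost_pool") is hypothetical rather than existing vhost code; the point
is just that the work function does not return until asked to stop, so
its kworker stays dedicated to the caller:

#include <linux/workqueue.h>
#include <linux/wait.h>
#include <linux/llist.h>

struct pool_worker {
	struct work_struct	work;
	wait_queue_head_t	wait;
	struct llist_head	requests;	/* lock-free request list */
	bool			stop;
};

static void pool_worker_fn(struct work_struct *work)
{
	struct pool_worker *w = container_of(work, struct pool_worker, work);

	/* Hold on to this kworker by looping instead of returning. */
	while (!READ_ONCE(w->stop)) {
		struct llist_node *reqs;

		wait_event_interruptible(w->wait,
					 !llist_empty(&w->requests) ||
					 READ_ONCE(w->stop));

		reqs = llist_del_all(&w->requests);
		/* ... walk and process reqs; policy decisions go here ... */
	}
}

static int pool_worker_start(struct pool_worker *w)
{
	struct workqueue_struct *wq;

	/* Unbound, so the worker is not pinned to the submitting CPU. */
	wq = alloc_workqueue("vhost_pool", WQ_UNBOUND, 0);
	if (!wq)
		return -ENOMEM;

	init_waitqueue_head(&w->wait);
	init_llist_head(&w->requests);
	w->stop = false;

	INIT_WORK(&w->work, pool_worker_fn);
	queue_work(wq, &w->work);	/* the kworker is now "ours" */
	return 0;
}

As noted above, work item flushing is off the table for such an item,
since it never completes in the workqueue sense; stopping it means
setting stop and waking the worker.
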
>> > > opportunity for optimization, at least for some workloads...
>> > 
>> > What sort of optimizations are we talking about?
>> 
>> Well, if we take Elvis (1) as the theoretical base, there could be a
>> benefit to doing I/O scheduling inside vhost.
>
> Yeah, if that actually is beneficial, take full control of the
> kworker thread.
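
Purely as an illustration of what "I/O scheduling inside vhost" could
mean once the worker is owned as above; the vq_ctx structure and the
round-robin policy below are made up for this sketch, taken neither
from Elvis nor from current vhost code:

#include <linux/llist.h>

/* Hypothetical per-device state: nr_vqs virtqueues share one worker. */
struct vq_ctx {
	struct llist_head	pending;	/* requests queued by the guest */
};

/*
 * Deliberately naive round-robin policy: each virtqueue gets a bounded
 * budget per pass so a single busy queue cannot starve the others.
 */
static void service_vqs(struct vq_ctx *vqs, int nr_vqs, int budget)
{
	int i;

	for (i = 0; i < nr_vqs; i++) {
		struct llist_node *node;
		int done = 0;

		while (done < budget &&
		       (node = llist_del_first(&vqs[i].pending))) {
			/* ... complete the one request described by node ... */
			done++;
		}
	}
}

The per-pass budget keeps one busy virtqueue from starving the rest; a
real policy could instead weight queues by priority or by cgroup.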

Well, even if it actually is beneficial (and I am sure it is), it seems a
little impractical to block current improvements based on a future
prospect that, as far as I know, no one is working on?

There have been discussions about this in the past and, IIRC, most people
agree about not going the byos* route. But I am still all for such a
proposal, and if it's good/clean enough, I think we can definitely tear
down what we have and throw it away! The I/O scheduling part is intrusive
enough that even the current code base would have to be changed quite a
bit.

*byos = bring your own scheduling ;)

> Thanks.
