Message-ID: <jpgtws6rk16.fsf@linux.bootlegged.copy>
Date:	Mon, 10 Aug 2015 16:00:21 -0400
From:	Bandan Das <bsd@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	kvm@...r.kernel.org, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, Eyal Moscovici <EYALMO@...ibm.com>,
	Razya Ladelsky <RAZYA@...ibm.com>, cgroups@...r.kernel.org,
	jasowang@...hat.com
Subject: Re: [RFC PATCH 0/4] Shared vhost design

"Michael S. Tsirkin" <mst@...hat.com> writes:

> On Sat, Aug 08, 2015 at 07:06:38PM -0400, Bandan Das wrote:
>> Hi Michael,
...
>>
>> > - does the design address the issue of VM 1 being blocked
>> >   (e.g. because it hits swap) and blocking VM 2?
>> Good question. I haven't thought of this yet. But IIUC,
>> the worker thread will complete VM1's job and then move on to
>> executing VM2's scheduled work.
>> It doesn't matter if VM1 is
>> blocked currently. I think it would be a problem though if/when
>> polling is introduced.
>
> Sorry, I wasn't clear. If VM1's memory is in swap, attempts to
> access it might block the service thread, so it won't
> complete VM2's job.

Ah ok, I understand now. I am pretty sure the current RFC doesn't
take care of this :) I will add this to my todo list for v2.
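
To make sure we are talking about the same failure mode, here is a small
userspace sketch (my illustration only, not code from this series): a single
worker drains one FIFO shared by several guests, so an item that blocks on
VM1's swapped-out memory holds up VM2's item queued behind it. The blocking
access is simulated with a timed poll().

/*
 * Toy model of a shared vhost worker: one thread, one FIFO work list
 * fed by several guests. If handling guest 1's item blocks (here a
 * 2 s poll() standing in for a page fault into swap), guest 2's cheap
 * item queued behind it is delayed by the same 2 s.
 */
#include <poll.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct work {
    int guest;              /* which guest queued this item */
    int stall_ms;           /* simulated blocking time */
    struct work *next;
};

static struct work *head, *tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

static void queue_work(int guest, int stall_ms)
{
    struct work *w = calloc(1, sizeof(*w));

    w->guest = guest;
    w->stall_ms = stall_ms;
    pthread_mutex_lock(&lock);
    if (tail)
        tail->next = w;
    else
        head = w;
    tail = w;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

/* The single shared worker: strictly FIFO, one item at a time. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        struct work *w;

        pthread_mutex_lock(&lock);
        while (!head)
            pthread_cond_wait(&cond, &lock);
        w = head;
        head = w->next;
        if (!head)
            tail = NULL;
        pthread_mutex_unlock(&lock);

        printf("serving guest %d\n", w->guest);
        poll(NULL, 0, w->stall_ms);     /* "blocked on swap" */
        printf("done    guest %d\n", w->guest);
        free(w);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, worker, NULL);
    queue_work(1, 2000);    /* guest 1: item that hits swap for 2 s */
    queue_work(2, 0);       /* guest 2: cheap item, still waits 2 s */
    sleep(3);
    return 0;
}

With one worker per device (today's vhost model) guest 2's item would have
been picked up by a different thread, which is exactly the isolation the
shared design gives up unless the worker can defer or hand off an item that
is about to block.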

Bandan

>
>
>> 
>> >> 
>> >> #* Last run with the vCPU and I/O thread(s) pinned, no CPU/memory limit imposed.
>> >> #  I/O thread runs on CPU 14 or 15 depending on which guest it's serving
>> >> 
>> >> There's a simple graph at
>> >> http://people.redhat.com/~bdas/elvis/data/results.png
>> >> that shows how task affinity results in a jump, and that even without it,
>> >> as the number of guests increases, the shared vhost design performs
>> >> slightly better.
>> >> 
>> >> Observations:
>> >> 1. In terms of "stock" performance, the results are comparable.
>> >> 2. However, with a tuned setup, even without polling, we see an improvement
>> >> with the new design.
>> >> 3. Making the new design simulate the old behavior would be a matter of setting
>> >> the number of guests per vhost thread to 1.
>> >> 4. Maybe setting a per-guest limit on the work done by a specific vhost
>> >> thread is needed for fairness.
>> >> 5. cgroup associations need to be figured out. I just slightly hacked the
>> >> current cgroup association mechanism to work with the new model. Cc'ing cgroups
>> >> for input/comments.
>> >> 
>> >> Many thanks to Razya Ladelsky and Eyal Moscovici of IBM for the initial
>> >> patches and the helpful testing suggestions and discussions.
>> >> 
>> >> Bandan Das (4):
>> >>   vhost: Introduce a universal thread to serve all users
>> >>   vhost: Limit the number of devices served by a single worker thread
>> >>   cgroup: Introduce a function to compare cgroups
>> >>   vhost: Add cgroup-aware creation of worker threads
>> >> 
>> >>  drivers/vhost/net.c    |   6 +-
>> >>  drivers/vhost/scsi.c   |  18 ++--
>> >>  drivers/vhost/vhost.c  | 272 +++++++++++++++++++++++++++++++++++--------------
>> >>  drivers/vhost/vhost.h  |  32 +++++-
>> >>  include/linux/cgroup.h |   1 +
>> >>  kernel/cgroup.c        |  40 ++++++++
>> >>  6 files changed, 275 insertions(+), 94 deletions(-)
>> >> 
>> >> -- 
>> >> 2.4.3
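
Returning to observation 4 in the cover letter above: one possible shape for
the per-guest cap, again only a userspace sketch under the assumption that the
worker keeps per-guest queues and a tunable quota (the name MAX_PER_GUEST and
the data structures are made up for illustration, not taken from the patches):

/*
 * The worker round-robins over per-guest queues and drains at most
 * MAX_PER_GUEST items from one queue per pass, so a guest with a deep
 * queue cannot monopolize the shared worker.
 */
#include <stdio.h>

#define NR_GUESTS      2
#define MAX_PER_GUEST  4       /* hypothetical fairness knob */

/* Pending work per guest, stubbed out as plain counters. */
static int pending[NR_GUESTS] = { 10, 3 };

static void handle_one(int guest)
{
    printf("guest %d: handled one item\n", guest);
    pending[guest]--;
}

int main(void)
{
    int busy = 1;

    while (busy) {
        busy = 0;
        for (int g = 0; g < NR_GUESTS; g++) {
            int quota = MAX_PER_GUEST;

            /* Move on once this guest's quota is spent,
             * even if it still has work queued. */
            while (pending[g] && quota--) {
                handle_one(g);
                busy = 1;
            }
        }
    }
    return 0;
}

The usual trade-off applies: a smaller quota means better fairness between
guests sharing a worker, a larger one means better batching for a single busy
guest, so it probably wants to be tunable.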
