Message-ID: <20160314165422.GA10013@potion.brq.redhat.com>
Date:	Mon, 14 Mar 2016 17:54:22 +0100
From:	Radim Krčmář <rkrcmar@...hat.com>
To:	Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
Cc:	pbonzini@...hat.com, joro@...tes.org, bp@...en8.de,
	gleb@...nel.org, alex.williamson@...hat.com, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, wei@...hat.com,
	sherry.hurwitz@....com
Subject: Re: [PART1 RFC v2 10/10] svm: Manage vcpu load/unload when enable
 AVIC

2016-03-14 18:58+0700, Suravee Suthikulpanit:
> On 03/10/2016 09:01 PM, Radim Krčmář wrote:
>>Well, we haven't reached an agreement on is_running yet.  The situation:
>>if we don't unset vcpu1.is_running when vcpu1 is scheduled out and vcpu2
>>gets scheduled on vcpu1's physical core, then vcpu2 would receive a
>>doorbell intended for vcpu1.
> 
> That's why, in V2, I added logic to check whether the is_running bit is
> set for the current vcpu (e.g. vcpu1) when it is unloaded, and to restore
> the bit during a later load if it was set at the previous unload. This
> way, when we load the new vcpu (e.g. vcpu2), is_running will be set as it
> was before unloading.

Yes, that's a good solution and I'm leaning towards it.  The downside is
that IPIs from other VCPUs cause exits, even though KVM can't do anything,
because the target vCPU is already going to run as soon as it can.
Keeping is_running set during unload would prevent those meaningless exits.
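To make sure we're talking about the same thing, here is a minimal
sketch of the save/restore approach in C.  The struct and function
names are illustrative only (the real code flips a bit in the physical
APIC ID table entry, not a plain bool):

```c
#include <stdbool.h>

/* Illustrative stand-in for per-vCPU AVIC state; not the actual
 * KVM/SVM structures. */
struct vcpu_avic {
	bool is_running;	/* mirrors the hardware is_running bit */
	bool was_running;	/* value saved across an unload/load cycle */
};

/* On unload: remember whether is_running was set, then clear it, so a
 * doorbell aimed at this vCPU cannot be delivered to whichever task
 * runs next on the same physical core. */
static void avic_vcpu_unload(struct vcpu_avic *v)
{
	v->was_running = v->is_running;
	v->is_running = false;
}

/* On load: restore the pre-unload value, so the vCPU again receives
 * doorbells directly once it is back on a physical core. */
static void avic_vcpu_load(struct vcpu_avic *v)
{
	v->is_running = v->was_running;
}
```

The window between unload and load is where the extra IPI exits happen:
is_running is clear, so senders take a vmexit even though the vCPU will
run again shortly.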

>>We'd like to keep is_running set when there is no reason to vmexit, but
>>not if a guest can negatively affect other guests.
> 
> Not sure how this can affect other guests?

If is_running is set, then the doorbell is sent to a physical core, so
any running task/vCPU will receive it.  This is safe, but a difference
can be seen in performance.

>>How does receiving a stray doorbell affect the performance?
> 
> As far as I know, the doorbell only affects the CPU during vmrun.

Yeah, I guess that receiving a doorbell outside of vmrun has no
overhead.

>                                                                   Once
> received, it will check the IRR in the vAPIC backing page.  So, I think
> if the IRR bit is not set, the effect should be rather minimal.

Even empty IRR still needs to be rescanned every time a doorbell
arrives, which might affect the execution pipeline.

After re-reading all relevant quotes, I think that the hardware wasn't
designed with this use in mind, so it's safer to assume an adverse
effect and go with the solution we have now.  (It'd be hard to measure
anyway.)

Sorry for the tangent.
