lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <c9a1961812b0cbb6e9f641dec5c6edcb21482161.camel@infradead.org>
Date:   Mon, 18 Sep 2023 14:36:42 +0100
From:   David Woodhouse <dwmw2@...radead.org>
To:     paul@....org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     Paul Durrant <pdurrant@...zon.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v2 11/12] KVM: selftests / xen: don't explicitly set the
 vcpu_info address

On Mon, 2023-09-18 at 14:26 +0100, Paul Durrant wrote:
> On 18/09/2023 14:21, David Woodhouse wrote:
> > On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
> > > From: Paul Durrant <pdurrant@...zon.com>
> > > 
> > > If the vCPU id is set and the shared_info is mapped using HVA then we can
> > > infer that KVM has the ability to use a default vcpu_info mapping. Hence
> > > we can stop setting the address of the vcpu_info structure.
> > 
> > Again that means we're not *testing* it any more when the test is run
> > on newer kernels. Can we perhaps set it explicitly, after *half* the
> > tests are done? Maybe to a *different* address than the default which
> > is derived from the Xen vcpu_id? And check that the memcpy works right
> > when we do?
> > 
> 
> Ok. The VMM is currently responsible for that memcpy. Are you suggesting 
> we push that into KVM too?

Ah OK.

Hm, maybe we should?

What happened before in the case where interrupts were being delivered,
and the vcpu_info address was changed. 

In Xen, I guess it's effectively atomic? Some locking will mean that
the event channel is delivered to the vcpu_info either *before* the
memcpy, or *after* it, but never to the old address after the copy has
been done, so that the event (well the index of it) gets lost?

In KVM before we did the automatic placement, it was the VMM's problem.

If there are any interrupts set up for direct delivery, I suppose the
VMM should have *removed* the vcpu_info mapping before doing the
memcpy, then restored it at the new address? I may have to check qemu
gets that right.

Then again, it's a very hard race to trigger, given that a guest can
only set the vcpu_info once. So it can move it from the shinfo to a
separate address and attempt to trigger this race just that one time.

But in the case where auto-placement has happened, and then the guest
sets an explicit vcpu_info location... are we saying that the VMM must
explicitly *unmap* the vcpu_info first, then memcpy, then set it to the
new location? Or will we handle the memcpy in-kernel?


Download attachment "smime.p7s" of type "application/pkcs7-signature" (5965 bytes)

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ