Date:   Fri, 02 Apr 2021 10:32:55 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Auger Eric <eric.auger@...hat.com>
Cc:     eric.auger.pro@...il.com, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, kvmarm@...ts.cs.columbia.edu,
        drjones@...hat.com, alexandru.elisei@....com, james.morse@....com,
        suzuki.poulose@....com, shuah@...nel.org, pbonzini@...hat.com
Subject: Re: [PATCH v4 7/8] KVM: arm64: vgic-v3: Expose GICR_TYPER.Last for userspace

Hi Eric,

On Thu, 01 Apr 2021 20:16:53 +0100,
Auger Eric <eric.auger@...hat.com> wrote:
> 
> Hi Marc,
> 
> On 4/1/21 7:30 PM, Marc Zyngier wrote:
> > On Thu, 01 Apr 2021 18:03:25 +0100,
> > Auger Eric <eric.auger@...hat.com> wrote:
> >>
> >> Hi Marc,
> >>
> >> On 4/1/21 3:42 PM, Marc Zyngier wrote:
> >>> Hi Eric,
> >>>
> >>> On Thu, 01 Apr 2021 09:52:37 +0100,
> >>> Eric Auger <eric.auger@...hat.com> wrote:
> >>>>
> >>>> Commit 23bde34771f1 ("KVM: arm64: vgic-v3: Drop the
> >>>> reporting of GICR_TYPER.Last for userspace") temporarily fixed
> >>>> a bug identified when attempting to access the GICR_TYPER
> >>>> register before the redistributor region setting, but dropped
> >>>> the support of the LAST bit.
> >>>>
> >>>> Emulating the GICR_TYPER.Last bit still makes sense for
> >>>> architecture compliance though. This patch restores its support
> >>>> (if the redistributor region was set) while keeping the code safe.
> >>>>
> >>>> We introduce a new helper, vgic_mmio_vcpu_rdist_is_last(), which
> >>>> computes whether a redistributor is the highest one of a series
> >>>> of contiguous redistributor pages.
> >>>>
> >>>> The spec says "Indicates whether this Redistributor is the
> >>>> highest-numbered Redistributor in a series of contiguous
> >>>> Redistributor pages."
> >>>>
> >>>> The code is a bit convulated since there is no guarantee
> >>>
> >>> nit: convoluted
> >>>
> >>>> redistributors are added to a given redistributor region in
> >>>> ascending order; in that case the current implementation was
> >>>> wrong. Also, redistributor regions can be contiguous and
> >>>> registered in non-increasing base address order.
> >>>>
> >>>> So the indices of the redistributors are stored in an array
> >>>> within the redistributor region structure.
> >>>>
> >>>> With this new implementation we do not need to have a uaccess
> >>>> read accessor anymore.
> >>>>
> >>>> Signed-off-by: Eric Auger <eric.auger@...hat.com>
> >>>
> >>> This patch also hurt my head, a lot more than the first one.  See
> >>> below.
> >>>
> >>>> ---
> >>>>  arch/arm64/kvm/vgic/vgic-init.c    |  7 +--
> >>>>  arch/arm64/kvm/vgic/vgic-mmio-v3.c | 97 ++++++++++++++++++++----------
> >>>>  arch/arm64/kvm/vgic/vgic.h         |  1 +
> >>>>  include/kvm/arm_vgic.h             |  3 +
> >>>>  4 files changed, 73 insertions(+), 35 deletions(-)
> >>>>
> >>>> diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
> >>>> index cf6faa0aeddb2..61150c34c268c 100644
> >>>> --- a/arch/arm64/kvm/vgic/vgic-init.c
> >>>> +++ b/arch/arm64/kvm/vgic/vgic-init.c
> >>>> @@ -190,6 +190,7 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
> >>>>  	int i;
> >>>>  
> >>>>  	vgic_cpu->rd_iodev.base_addr = VGIC_ADDR_UNDEF;
> >>>> +	vgic_cpu->index = vcpu->vcpu_id;
> >>>
> >>> Is it so that vgic_cpu->index is always equal to vcpu_id? If so, why
> >>> do we need another field? We can always get to the vcpu using a
> >>> container_of().
> >>>
> >>>>  
> >>>>  	INIT_LIST_HEAD(&vgic_cpu->ap_list_head);
> >>>>  	raw_spin_lock_init(&vgic_cpu->ap_list_lock);
> >>>> @@ -338,10 +339,8 @@ static void kvm_vgic_dist_destroy(struct kvm *kvm)
> >>>>  	dist->vgic_dist_base = VGIC_ADDR_UNDEF;
> >>>>  
> >>>>  	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
> >>>> -		list_for_each_entry_safe(rdreg, next, &dist->rd_regions, list) {
> >>>> -			list_del(&rdreg->list);
> >>>> -			kfree(rdreg);
> >>>> -		}
> >>>> +		list_for_each_entry_safe(rdreg, next, &dist->rd_regions, list)
> >>>> +			vgic_v3_free_redist_region(rdreg);
> >>>
> >>> Consider moving the introduction of vgic_v3_free_redist_region() into
> >>> a separate patch. On its own, that's a good readability improvement.
> >>>
> >>>>  		INIT_LIST_HEAD(&dist->rd_regions);
> >>>>  	} else {
> >>>>  		dist->vgic_cpu_base = VGIC_ADDR_UNDEF;
> >>>> diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> >>>> index 987e366c80008..f6a7eed1d6adb 100644
> >>>> --- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> >>>> +++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> >>>> @@ -251,45 +251,57 @@ static void vgic_mmio_write_v3r_ctlr(struct kvm_vcpu *vcpu,
> >>>>  		vgic_enable_lpis(vcpu);
> >>>>  }
> >>>>  
> >>>> +static bool vgic_mmio_vcpu_rdist_is_last(struct kvm_vcpu *vcpu)
> >>>> +{
> >>>> +	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
> >>>> +	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> >>>> +	struct vgic_redist_region *rdreg = vgic_cpu->rdreg;
> >>>> +
> >>>> +	if (!rdreg)
> >>>> +		return false;
> >>>> +
> >>>> +	if (rdreg->count && vgic_cpu->rdreg_index == (rdreg->count - 1)) {
> >>>> +		/* check whether there is no other contiguous rdist region */
> >>>> +		struct list_head *rd_regions = &vgic->rd_regions;
> >>>> +		struct vgic_redist_region *iter;
> >>>> +
> >>>> +		list_for_each_entry(iter, rd_regions, list) {
> >>>> +			if (iter->base == rdreg->base + rdreg->count * KVM_VGIC_V3_REDIST_SIZE &&
> >>>> +				iter->free_index > 0) {
> >>>> +			/* check the first rdist index of this region, if any */
> >>>> +				if (vgic_cpu->index < iter->rdist_indices[0])
> >>>> +					return false;
> >>>
> >>> rdist_indices[] contains the vcpu_id of the vcpu associated with a
> >>> given RD in the region. At this stage, you have established that there
> >>> is another region that is contiguous with the one associated with our
> >>> vcpu. You also know that this adjacent region has a vcpu mapped in
> >>> (free_index isn't 0). Isn't that enough to declare that our vcpu isn't
> >>> last?  I definitely don't understand what the index comparison does
> >>> here.
> >> Assume the following case:
> >> 2 RDIST region
> >> region #0 contains rdist 1, 2, 4
> >> region #1, adjacent to #0 contains rdist 3
> >>
> >> Spec says:
> >> "Indicates whether this Redistributor is the
> >> highest-numbered Redistributor in a series of contiguous
> >> Redistributor pages."
> >>
> >> To me 4 is last and 3 is last too.
> > 
> > No, only 3 is last, assuming that region 0 is full. I think the
> > phrasing in the spec is just really bad. What this describes is that,
> > at the end of a contiguous set of RDs, the last RD has Last set. If
> > two regions are contiguous, that's indistinguishable from a single,
> > larger region.
> > 
> > There is no such thing as a "redistributor number" anyway. The closest
> > thing there is would be "processor number", but that has nothing to do
> > with the RD itself.
> 
> Hum OK. That's a different understanding of the spec wording indeed.
> For me, the redistributor number was the index of the vcpu.

I think that's the source of the confusion. There really is nothing
like a redistributor number. There is a processor number when
GICR_TYPER.PTA=0 (that the guest uses as the target CPU when moving a
LPI), but that's it. The layout is totally dumb, and the last frame in
a contiguous sequence of frames is, well, last. The content of the
frames doesn't matter in the least.
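
To make that concrete, here is roughly the check I have in mind
(completely untested sketch; it also ignores partially populated
regions, which a real implementation would have to handle):

static bool vgic_mmio_vcpu_rdist_is_last(struct kvm_vcpu *vcpu)
{
	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
	struct vgic_redist_region *iter, *rdreg = vgic_cpu->rdreg;

	if (!rdreg)
		return false;

	/* Not in the final slot of its own region: cannot be Last */
	if (vgic_cpu->rdreg_index != rdreg->count - 1)
		return false;

	/*
	 * Last is purely a property of the frame layout: if another
	 * region starts exactly where this one ends, the contiguous
	 * span of RD frames extends beyond this vcpu's frame.
	 */
	list_for_each_entry(iter, &vgic->rd_regions, list) {
		if (iter->base == rdreg->base +
		    rdreg->count * KVM_VGIC_V3_REDIST_SIZE)
			return false;
	}

	return true;
}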

> But well, your understanding is definitely simpler to implement and
> also matches what was implemented for a single RDIST region.

That's a key insight. There is no reason why the RD layout would differ
between a single region and multiple regions.

Think of it from a HW perspective. You design a SoC that has
"clusters" of CPUs, and you lay down a bunch of RDs, one set per
cluster. Each set has a "Last" RD frame, and that's all there is to
it.
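
For example (addresses entirely made up): two clusters of four CPUs,
with one RD region per cluster:

  region #0 @ 0x0800_0000: RD0 RD1 RD2 RD3  <- RD3 has Last set
  region #1 @ 0x0900_0000: RD4 RD5 RD6 RD7  <- RD7 has Last set

If region #1 instead started right at the end of region #0, the two
regions would form a single contiguous span of frames, and only RD7
would have Last set.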

I'll try and see if ARM people are willing to clarify the spec (for
which an update is long overdue).

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
