Message-ID: <86v86j2f9o.wl-maz@kernel.org>
Date: Tue, 20 Feb 2024 17:53:55 +0000
From: Marc Zyngier <maz@...nel.org>
To: Oliver Upton <oliver.upton@...ux.dev>
Cc: Zenghui Yu <zenghui.yu@...ux.dev>,
	kvmarm@...ts.linux.dev,
	kvm@...r.kernel.org,
	James Morse <james.morse@....com>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Zenghui Yu <yuzenghui@...wei.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 01/10] KVM: arm64: vgic: Store LPIs in an xarray

On Tue, 20 Feb 2024 17:43:03 +0000,
Oliver Upton <oliver.upton@...ux.dev> wrote:
> 
> On Tue, Feb 20, 2024 at 05:24:50PM +0000, Marc Zyngier wrote:
> > On Tue, 20 Feb 2024 16:30:24 +0000,
> > Zenghui Yu <zenghui.yu@...ux.dev> wrote:
> > > 
> > > On 2024/2/17 02:41, Oliver Upton wrote:
> > > > Using a linked list for LPIs is less than ideal, as it requires an
> > > > iterative search to find a particular entry. An xarray is a better
> > > > data structure for this use case: it provides faster lookups and can
> > > > still handle a potentially sparse range of INTID allocations.
> > > > 
> > > > Start by storing LPIs in an xarray, punting usage of the xarray to a
> > > > subsequent change.
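> > > >
> > > > For illustration, a lookup then reduces to a single xa_load()
> > > > instead of a list walk (sketch only; reference counting elided):
> > > >
> > > > 	struct vgic_irq *irq;
> > > >
> > > > 	/* RCU keeps the entry stable while we look at it. */
> > > > 	rcu_read_lock();
> > > > 	irq = xa_load(&dist->lpi_xa, intid);
> > > > 	rcu_read_unlock();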
> > > > 
> > > > Signed-off-by: Oliver Upton <oliver.upton@...ux.dev>
> > > 
> > > [..]
> > > 
> > > > diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
> > > > index db2a95762b1b..c126014f8395 100644
> > > > --- a/arch/arm64/kvm/vgic/vgic.c
> > > > +++ b/arch/arm64/kvm/vgic/vgic.c
> > > > @@ -131,6 +131,7 @@ void __vgic_put_lpi_locked(struct kvm *kvm, struct vgic_irq *irq)
> > > >  		return;
> > > >   	list_del(&irq->lpi_list);
> > > > +	xa_erase(&dist->lpi_xa, irq->intid);
> > > 
> > > We can get here *after* grabbing the vgic_cpu->ap_list_lock (e.g., via
> > > vgic_flush_pending_lpis()/vgic_put_irq()).  And according to the vGIC's
> > > "Locking order", we should disable interrupts before taking the xa_lock
> > > in xa_erase(); otherwise we risk bad things like deadlock.
> > > 
> > > This isn't a problem until patch #10, where we drop the lpi_list_lock
> > > and start taking the xa_lock with interrupts enabled.  Consider
> > > switching to xa_erase_irq() instead?
> > 
> > But does that actually work? xa_erase_irq() uses spin_lock_irq()
> > followed by spin_unlock_irq(), so if we were already in interrupt
> > context, we would end up re-enabling interrupts. At the very least,
> > this should be the irqsave variant.
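> >
> > For reference, xa_erase_irq() is roughly the following wrapper (per
> > include/linux/xarray.h):
> >
> > 	xa_lock_irq(xa);
> > 	entry = __xa_erase(xa, index);
> > 	xa_unlock_irq(xa);
> >
> > so the unlock unconditionally re-enables interrupts.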
> 
> This is what I was planning to do, although I may kick it out to patch
> 10 to avoid churn.
> 
> > The question is whether we manipulate LPIs (in the get/put sense) on
> > the back of an interrupt handler (as we do for the timer). It isn't
> > obvious to me that this is the case, but I haven't spent much time
> > staring at this code recently.
> 
> I think we can get here from contexts with interrupts either disabled
> or enabled; irqfd_wakeup() expects to be called with interrupts
> disabled.
> 
> All the more reason to use the irqsave() / irqrestore() flavors of all
> of this, and a reminder to go check all call sites that implicitly take
> the xa_lock.
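> 
> Something like this for the erase side (untested sketch):
> 
> 	unsigned long flags;
> 
> 	xa_lock_irqsave(&dist->lpi_xa, flags);
> 	__xa_erase(&dist->lpi_xa, irq->intid);
> 	xa_unlock_irqrestore(&dist->lpi_xa, flags);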

Sounds good. Maybe you can also update the locking order
"documentation" to include the xa_lock? I expect that it will
ultimately replace lpi_list_lock.
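
Something along these lines for the locking-order comment at the top of
vgic.c (a sketch, assuming the xa_lock simply takes over lpi_list_lock's
current slot in the hierarchy):

	vgic_cpu->ap_list_lock		must be taken with IRQs disabled
	  lpi_xa.xa_lock		must be taken with IRQs disabled
	    vgic_irq->irq_lock		must be taken with IRQs disabled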

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
