Message-ID: <bb3a38d9-4eb8-83ff-8b94-dd1bc80d005f@huawei.com>
Date: Wed, 23 Oct 2024 21:51:40 +0800
From: Zenghui Yu <yuzenghui@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: <linux-kernel@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
	Thomas Gleixner <tglx@...utronix.de>, Kunkun Jiang <jiangkunkun@...wei.com>
Subject: Re: [PATCH] irqchip/gic-v4: Don't allow a VMOVP on a dying VPE

On 2024/10/23 16:49, Marc Zyngier wrote:
> Hi Zenghui,
> 
> On Tue, 22 Oct 2024 08:45:17 +0100,
> Zenghui Yu <yuzenghui@...wei.com> wrote:
> >
> > Hi Marc,
> >
> > On 2024/10/3 4:49, Marc Zyngier wrote:
> > > Kunkun Jiang reports that there is a small window of opportunity for
> > > userspace to force a change of affinity for a VPE while the VPE has
> > > already been unmapped, but the corresponding doorbell interrupt is
> > > still visible in /proc/irq/.
> > >
> > > Plug the race by checking the value of vmapp_count, which tracks whether
> > > the VPE is mapped or not, and returning an error in this case.
> > >
> > > This involves making vmapp_count common to both GICv4.1 and its v4.0
> > > ancestor.
> > >
> > > Reported-by: Kunkun Jiang <jiangkunkun@...wei.com>
> > > Signed-off-by: Marc Zyngier <maz@...nel.org>
> > > Link: https://lore.kernel.org/r/c182ece6-2ba0-ce4f-3404-dba7a3ab6c52@huawei.com
> > > ---
> > >  drivers/irqchip/irq-gic-v3-its.c   | 18 ++++++++++++------
> > >  include/linux/irqchip/arm-gic-v4.h |  4 +++-
> > >  2 files changed, 15 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> > > index fdec478ba5e7..ab597e74ba08 100644
> > > --- a/drivers/irqchip/irq-gic-v3-its.c
> > > +++ b/drivers/irqchip/irq-gic-v3-its.c
> > > @@ -797,8 +797,8 @@ static struct its_vpe *its_build_vmapp_cmd(struct its_node *its,
> > >  	its_encode_valid(cmd, desc->its_vmapp_cmd.valid);
> > >  
> > >  	if (!desc->its_vmapp_cmd.valid) {
> > > +		alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count);
> > >  		if (is_v4_1(its)) {
> > > -			alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count);
> > >  			its_encode_alloc(cmd, alloc);
> > >  			/*
> > >  			 * Unmapping a VPE is self-synchronizing on GICv4.1,
> > > @@ -817,13 +817,13 @@ static struct its_vpe *its_build_vmapp_cmd(struct its_node *its,
> > >  	its_encode_vpt_addr(cmd, vpt_addr);
> > >  	its_encode_vpt_size(cmd, LPI_NRBITS - 1);
> > >  
> > > +	alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count);
> > > +
> > >  	if (!is_v4_1(its))
> > >  		goto out;
> > >  
> > >  	vconf_addr = virt_to_phys(page_address(desc->its_vmapp_cmd.vpe->its_vm->vprop_page));
> > >  
> > > -	alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count);
> > > -
> > >  	its_encode_alloc(cmd, alloc);
> > >  
> > >  	/*
> > > @@ -3806,6 +3806,13 @@ static int its_vpe_set_affinity(struct irq_data *d,
> > >  	struct cpumask *table_mask;
> > >  	unsigned long flags;
> > >  
> > > +	/*
> > > +	 * Check if we're racing against a VPE being destroyed, for
> > > +	 * which we don't want to allow a VMOVP.
> > > +	 */
> > > +	if (!atomic_read(&vpe->vmapp_count))
> > > +		return -EINVAL;
> >
> > We lazily map the vPE, so vmapp_count is likely to be 0 on GICv4.0
> > implementations with the ITSList feature. It seems that such
> > implementations are not affected by the reported race, so we don't
> > need to check vmapp_count for them.
> 
> Indeed, the ITSList guards the sending of VMOVP, and we avoid the
> original issue in that case. However, this still translates into the
> doorbell being moved for no reason (see its_vpe_db_proxy_move).

Yup.
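
As an aside, the guard being discussed boils down to a mapping refcount
that affinity changes check before doing anything. A minimal standalone
sketch of that pattern in plain C11 (hypothetical names, deliberately
not the kernel code):

	#include <stdatomic.h>
	#include <stdio.h>

	/* Hypothetical stand-in for the VPE's vmapp_count. */
	static atomic_int vmapp_count;

	static void vpe_map(void)   { atomic_fetch_add(&vmapp_count, 1); }
	static void vpe_unmap(void) { atomic_fetch_sub(&vmapp_count, 1); }

	/*
	 * Refuse to move a VPE that is no longer mapped, mirroring the
	 * vmapp_count check the patch adds to its_vpe_set_affinity().
	 */
	static int vpe_set_affinity(int cpu)
	{
		if (!atomic_load(&vmapp_count))
			return -1;	/* -EINVAL in the kernel */
		printf("VMOVP to CPU%d\n", cpu);
		return 0;
	}

	int main(void)
	{
		vpe_map();
		vpe_set_affinity(1);	/* allowed: VPE is mapped      */
		vpe_unmap();
		vpe_set_affinity(2);	/* refused: VPE being unmapped */
		return 0;
	}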

> How about something like this?

I'm pretty sure that the splat will disappear with that.

> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> index ab597e74ba08..ac8ed56f1e48 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -3810,8 +3810,17 @@ static int its_vpe_set_affinity(struct irq_data *d,
>  	 * Check if we're racing against a VPE being destroyed, for
>  	 * which we don't want to allow a VMOVP.
>  	 */
> -	if (!atomic_read(&vpe->vmapp_count))
> -		return -EINVAL;
> +	if (!atomic_read(&vpe->vmapp_count)) {
> +		if (gic_requires_eager_mapping())
> +			return -EINVAL;

Nitpick: why do we treat this as an error?

> +
> +		/*
> +		 * If we lazily map the VPEs, this isn't an error, and
> +		 * we exit cleanly.
> +		 */
> +		irq_data_update_effective_affinity(d, cpumask_of(cpu));

@cpu isn't initialized to a sensible value at this point?

> +		return IRQ_SET_MASK_OK_DONE;
> +	}
>  
>  	/*
>  	 * Changing affinity is mega expensive, so let's be as lazy as
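
For what it's worth, the lazy-map exit presumably needs to pick a CPU
before reporting an effective affinity. A rough sketch of how that could
look, assuming the usual mask_val parameter of its_vpe_set_affinity(),
and taking cpumask_first() as one possible choice (an assumption, not
necessarily what the final patch will do):

	if (!atomic_read(&vpe->vmapp_count)) {
		if (gic_requires_eager_mapping())
			return -EINVAL;

		/*
		 * Lazily-mapped VPE: nothing to move yet. Pick a CPU
		 * from the requested mask so that the effective affinity
		 * reported to core code is at least sensible.
		 * (cpumask_first() here is an assumption.)
		 */
		cpu = cpumask_first(mask_val);
		irq_data_update_effective_affinity(d, cpumask_of(cpu));
		return IRQ_SET_MASK_OK_DONE;
	}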

Thanks,
Zenghui
