Message-ID: <86y15k1wz3.wl-maz@kernel.org>
Date: Mon, 29 Jul 2024 08:25:04 +0100
From: Marc Zyngier <maz@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org,
	Zhou Wang <wangzhou1@...ilicon.com>
Subject: Re: [PATCH] irqchip/gic-v4: Fix ordering between vmapp and vpe locks

On Fri, 26 Jul 2024 21:52:40 +0100,
Thomas Gleixner <tglx@...utronix.de> wrote:
> 
> On Tue, Jul 23 2024 at 18:52, Marc Zyngier wrote:
> > @@ -3808,7 +3802,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
> >  	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> >  	unsigned int from, cpu = nr_cpu_ids;
> >  	struct cpumask *table_mask;
> > -	unsigned long flags;
> > +	unsigned long flags, vmapp_flags;
> 
> What's this flags business for? its_vpe_set_affinity() is called with
> interrupts disabled, no?
>   
> >  	/*
> >  	 * Changing affinity is mega expensive, so let's be as lazy as
> > @@ -3822,7 +3816,14 @@ static int its_vpe_set_affinity(struct irq_data *d,
> >  	 * protect us, and that we must ensure nobody samples vpe->col_idx
> >  	 * during the update, hence the lock below which must also be
> >  	 * taken on any vLPI handling path that evaluates vpe->col_idx.
> > +	 *
> > +	 * Finally, we must protect ourselves against concurrent
> > +	 * updates of the mapping state on this VM should the ITS list
> > +	 * be in use.
> >  	 */
> > +	if (its_list_map)
> > +		raw_spin_lock_irqsave(&vpe->its_vm->vmapp_lock, vmapp_flags);
> 
> Confused. This changes the locking from unconditional to
> conditional. What's the rationale here?

Haven't managed to sleep much, but I came to the conclusion that I
wasn't that stupid in my initial patch. Let's look at the full
picture, starting with its_send_vmovp():

        if (!its_list_map) {
                its = list_first_entry(&its_nodes, struct its_node, entry);
                desc.its_vmovp_cmd.col = &its->collections[col_id];
                its_send_single_vcommand(its, its_build_vmovp_cmd, &desc);
                return;
        }

        /*
         * Protect against concurrent updates of the mapping state on
         * individual VMs.
         */
        guard(raw_spinlock_irqsave)(&vpe->its_vm->vmapp_lock);

The vmapp locking *is* conditional, which makes a lot of sense, as
the presence of an ITS list is the only thing that prevents the VPEs
from being mapped eagerly at VM startup time (although this is a
performance consideration, not a correctness issue).

So there is no point in taking that lock if there is no ITS list,
given that the VPEs are mapped before we can do anything else. This
has the massive benefit of allowing concurrent VPE affinity changes on
modern HW.

This means that on GICv4.0 without an ITS list, or on GICv4.1, the
only lock we need to acquire on a VPE affinity change is the VPE lock
itself.
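
Roughly, the idea in its_vpe_set_affinity() would then look something
like this (just a sketch of the shape of it, not the actual respin,
with the surrounding logic elided):

        unsigned long flags, vmapp_flags;

        /*
         * Serialise against concurrent vmapp updates only when an ITS
         * list is in use; without one, the VPEs are already mapped by
         * the time we get here, and the lock buys us nothing.
         */
        if (its_list_map)
                raw_spin_lock_irqsave(&vpe->its_vm->vmapp_lock, vmapp_flags);

        /* The per-VPE lock still protects vpe->col_idx sampling */
        from = vpe_to_cpuid_lock(vpe, &flags);

        /* ... pick the target CPU, update vpe->col_idx, send VMOVP ... */

        vpe_to_cpuid_unlock(vpe, flags);

        if (its_list_map)
                raw_spin_unlock_irqrestore(&vpe->its_vm->vmapp_lock, vmapp_flags);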

I'll respin the patch shortly.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
