Message-ID: <87eczd6scg.ffs@tglx>
Date: Tue, 04 Mar 2025 10:46:39 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Tsai Sung-Fu <danielsftsai@...gle.com>
Cc: Jingoo Han <jingoohan1@...il.com>, Manivannan Sadhasivam
 <manivannan.sadhasivam@...aro.org>, Lorenzo Pieralisi
 <lpieralisi@...nel.org>, Krzysztof Wilczyński
 <kw@...ux.com>, Rob Herring
 <robh@...nel.org>, Bjorn Helgaas <bhelgaas@...gle.com>, Andrew Chant
 <achant@...gle.com>, Brian Norris <briannorris@...gle.com>, Sajid Dalvi
 <sdalvi@...gle.com>, Mark Cheng <markcheng@...gle.com>, Ben Cheng
 <bccheng@...gle.com>, linux-pci@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH] PCI: dwc: Chain the set IRQ affinity request back to
 the parent

On Tue, Mar 04 2025 at 13:48, Tsai Sung-Fu wrote:
> On Mon, Mar 3, 2025 at 5:10 PM Thomas Gleixner <tglx@...utronix.de> wrote:
>> > +     hwirq = ctrl * MAX_MSI_IRQS_PER_CTRL;
>> > +     end = hwirq + MAX_MSI_IRQS_PER_CTRL;
>> > +     for_each_set_bit_from(hwirq, pp->msi_irq_in_use, end) {
>> > +             if (hwirq == hwirq_to_check)
>> > +                     continue;
>> > +             virq = irq_find_mapping(pp->irq_domain, hwirq);
>> > +             if (!virq)
>> > +                     continue;
>> > +             mask = irq_get_affinity_mask(virq);
>>
>> What protects @mask against a concurrent modification?
>
> We hold &pp->lock in dw_pci_msi_set_affinity() before calling this
> function, so that should protect against concurrent modification,
> shouldn't it?

So that's the same domain, right?

>> > +     /*
>> > +      * Update the effective affinity mask of every irq_data
>> > +      * bound to this MSI controller, so the correct affinity
>> > +      * shows up in /proc/irq/XXX/effective_affinity
>> > +      */
>> > +     hwirq = ctrl * MAX_MSI_IRQS_PER_CTRL;
>> > +     end = hwirq + MAX_MSI_IRQS_PER_CTRL;
>> > +     for_each_set_bit_from(hwirq, pp->msi_irq_in_use, end) {
>> > +             virq_downstream = irq_find_mapping(pp->irq_domain, hwirq);
>> > +             if (!virq_downstream)
>> > +                     continue;
>> > +             desc_downstream = irq_to_desc(virq_downstream);
>> > +             irq_data_update_effective_affinity(&desc_downstream->irq_data,
>> > +                                                effective_mask);
>>
>> Same here.
>
> We hold &pp->lock in dw_pci_msi_set_affinity() before getting here,
> so I think that should cover it?

A comment would be helpful for the casual reader.
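
For example, something along these lines (just a sketch, and the helper
name dw_pci_update_effective_affinity() is made up here) would spell out
the locking rule where the reader can see it and let lockdep enforce it:

static void dw_pci_update_effective_affinity(struct dw_pcie_rp *pp,
					     unsigned long ctrl,
					     const struct cpumask *effective_mask)
{
	unsigned long hwirq = ctrl * MAX_MSI_IRQS_PER_CTRL;
	unsigned long end = hwirq + MAX_MSI_IRQS_PER_CTRL;
	unsigned int virq;

	/*
	 * All affinity updates of this MSI domain are serialized by
	 * pp->lock, which dw_pci_msi_set_affinity() acquires before
	 * calling this helper.
	 */
	lockdep_assert_held(&pp->lock);

	for_each_set_bit_from(hwirq, pp->msi_irq_in_use, end) {
		virq = irq_find_mapping(pp->irq_domain, hwirq);
		if (!virq)
			continue;
		irq_data_update_effective_affinity(irq_get_irq_data(virq),
						   effective_mask);
	}
}

Going through irq_get_irq_data() also avoids reaching into
irq_to_desc() for the downstream descriptors.
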

>>
>> > +     }
>> > +}
>> > +
>> > +static int dw_pci_msi_set_affinity(struct irq_data *d,
>> > +                                const struct cpumask *mask, bool force)
>> > +{
>> > +     struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d);
>> > +     struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> > +     int ret;
>> > +     int virq_parent;
>> > +     unsigned long hwirq = d->hwirq;
>> > +     unsigned long flags, ctrl;
>> > +     struct irq_desc *desc_parent;
>> > +     const struct cpumask *effective_mask;
>> > +     cpumask_var_t mask_result;
>> > +
>> > +     ctrl = hwirq / MAX_MSI_IRQS_PER_CTRL;
>> > +     if (!alloc_cpumask_var(&mask_result, GFP_ATOMIC))
>> > +             return -ENOMEM;
>>
>> This does not work on a RT enabled kernel. Allocations with a raw spin
>> lock held are not possible.
>>
>
> Even with the GFP_ATOMIC flag? I thought that meant the allocation is
> safe in atomic context?
> I haven't worked on the RT kernel before, so please enlighten me if I
> am wrong.

https://www.kernel.org/doc/html/latest/locking/locktypes.html#preempt-rt-caveats
https://www.kernel.org/doc/html/latest/locking/locktypes.html#raw-spinlock-t-on-rt
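
The short version: on PREEMPT_RT the allocator's internal locks become
sleeping locks, so no allocation is allowed while a raw_spinlock_t is
held, no matter which GFP flags are used, and the set_affinity() path
runs with raw spinlocks held. One way out, sketched below with a
made-up per-port scratch field, is to allocate once in a context that
may sleep and reuse that buffer under pp->lock:

	/*
	 * Sketch only: msi_scratch_mask is a hypothetical per-port field,
	 * protected by pp->lock like the rest of the MSI bookkeeping.
	 *
	 * At host init time (process context, sleeping allocation is fine):
	 */
	if (!zalloc_cpumask_var(&pp->msi_scratch_mask, GFP_KERNEL))
		return -ENOMEM;

	/* In the set_affinity path nothing needs to be allocated anymore: */
	raw_spin_lock_irqsave(&pp->lock, flags);
	cpumask_copy(pp->msi_scratch_mask, mask);
	/* ... compatibility check and parent update as in the patch ... */
	raw_spin_unlock_irqrestore(&pp->lock, flags);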

>> > +     /*
>> > +      * Loop through all possible MSI vector to check if the
>> > +      * requested one is compatible with all of them
>> > +      */
>> > +     raw_spin_lock_irqsave(&pp->lock, flags);
>> > +     cpumask_copy(mask_result, mask);
>> > +     ret = dw_pci_check_mask_compatibility(pp, ctrl, hwirq, mask_result);
>> > +     if (ret) {
>> > +             dev_dbg(pci->dev, "Incompatible mask, request %*pbl, irq num %u\n",
>> > +                     cpumask_pr_args(mask), d->irq);
>> > +             goto unlock;
>> > +     }
>> > +
>> > +     dev_dbg(pci->dev, "Final mask, request %*pbl, irq num %u\n",
>> > +             cpumask_pr_args(mask_result), d->irq);
>> > +
>> > +     virq_parent = pp->msi_irq[ctrl];
>> > +     desc_parent = irq_to_desc(virq_parent);
>> > +     ret = desc_parent->irq_data.chip->irq_set_affinity(&desc_parent->irq_data,
>> > +                                                        mask_result, force);
>>
>> Again. Completely unserialized.
>>
>
> The reason we dropped the desc_parent->lock protection is that the
> chained IRQ setup does not expose the parent IRQ to user space, so we
> think this call path should be the only caller. And since we hold
> &pp->lock from the beginning, that should protect against concurrent
> modification here, shouldn't it?

"Should be the only caller" is not really a technical argument. If you
make assumptions like this, then you have to come up with a proper
explanation why this is correct under all circumstances and with all
possible variants of parent interrupt chips.

Aside from that, fiddling with the internals of interrupt descriptors
is not really justified here. What's wrong with using the existing
irq_set_affinity() mechanism?
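
A sketch of what that could look like (whether the resulting lock
nesting with pp->lock is fine for every parent chip is exactly what
still needs to be shown):

	virq_parent = pp->msi_irq[ctrl];
	/*
	 * irq_set_affinity() looks up the parent descriptor, takes its
	 * lock and invokes the parent chip's callback, so there is no
	 * need to poke desc_parent->irq_data by hand.
	 */
	ret = irq_set_affinity(virq_parent, mask_result);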

Thanks,

        tglx

