Message-ID: <87tsywbp1e.ffs@tglx>
Date: Fri, 14 Nov 2025 16:40:45 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Frederic Weisbecker <frederic@...nel.org>, Waiman Long <llong@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Marco Crivellari
 <marco.crivellari@...e.com>, cgroups@...r.kernel.org
Subject: Re: [PATCH] genirq: Fix IRQ threads affinity VS cpuset isolated
 partitions

On Wed, Nov 12 2025 at 13:56, Frederic Weisbecker wrote:
> On Mon, Nov 10, 2025 at 04:28:49PM -0500, Waiman Long wrote:
>> This function seems to mirror what is done in irq_thread_check_affinity()
>> when the affinity cpumask is available.  But if affinity isn't defined, it
>> will make this irq kthread immune to changes in the set of isolated CPUs.
>> Should we use the IRQD_AFFINITY_SET flag to check whether affinity has been
>> set and set PF_NO_SETAFFINITY only in that case?
>
> So IIUC, the cpumask_available() failure can't really happen because an allocation
> failure would make irq_alloc_descs() fail.

That's indeed a historical leftover.
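
For reference, the check in question looks roughly like the below (a
simplified sketch of irq_thread_check_affinity(), not the actual
kernel/irq/manage.c code):

static void irq_thread_check_affinity(struct irq_desc *desc,
				      struct irqaction *action)
{
	cpumask_var_t mask;
	bool valid = false;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return;

	raw_spin_lock_irq(&desc->lock);
	/*
	 * Historical leftover: the affinity mask is allocated together
	 * with the descriptor, so for a live descriptor this check
	 * cannot fail.
	 */
	if (cpumask_available(desc->irq_common_data.affinity)) {
		cpumask_copy(mask, desc->irq_common_data.affinity);
		valid = true;
	}
	raw_spin_unlock_irq(&desc->lock);

	if (valid)
		set_cpus_allowed_ptr(current, mask);
	free_cpumask_var(mask);
}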

> __irq_alloc_descs() -> alloc_descs() -> alloc_desc() -> init_desc() -> alloc_mask()
>
> The error doesn't seem to be handled as well in early_irq_init(), but the desc
> is freed anyway if that happens.

Right, the insert should only happen when desc != NULL. OTOH, if it fails
at that stage, the kernel won't get far anyway and definitely not to the
point where these cpumasks are checked :)
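
I.e. the allocation path quoted above does roughly this (simplified
sketch, not the exact irqdesc.c code):

static struct irq_desc *alloc_desc(int irq, int node, unsigned int flags,
				   const struct cpumask *affinity,
				   struct module *owner)
{
	struct irq_desc *desc;

	desc = kzalloc_node(sizeof(*desc), GFP_KERNEL, node);
	if (!desc)
		return NULL;

	/* init_desc() allocates the affinity masks among other things */
	if (init_desc(desc, irq, node, flags, affinity, owner)) {
		/* A mask allocation failure propagates up as a NULL desc */
		kfree(desc);
		return NULL;
	}
	return desc;
}

so by the time an interrupt thread runs for a descriptor, the affinity
mask is guaranteed to exist.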

> So this is just a sanity check at best.

I think we can just remove it. It does not make sense at all.
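
I.e. with the check gone the copy can just happen unconditionally,
roughly like this (again against the sketch above, not the actual
source):

	raw_spin_lock_irq(&desc->lock);
	cpumask_copy(mask, desc->irq_common_data.affinity);
	raw_spin_unlock_irq(&desc->lock);

	set_cpus_allowed_ptr(current, mask);
	free_cpumask_var(mask);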

Thanks,

        tglx
