Date:   Wed, 25 Oct 2023 16:34:59 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Chen Yu <yu.c.chen@...el.com>, Juergen Gross <jgross@...e.com>
Cc:     Len Brown <len.brown@...el.com>,
        "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        linux-kernel@...r.kernel.org, Chen Yu <yu.chen.surf@...il.com>,
        Chen Yu <yu.c.chen@...el.com>,
        Wendy Wang <wendy.wang@...el.com>
Subject: Re: [RFC PATCH] genirq: Exclude managed irq during irq migration

Chen!

On Fri, Oct 20 2023 at 15:25, Chen Yu wrote:
> The managed IRQ will be shut down and not be migrated to

Please write out interrupts in change logs, this is not twitter.

> other CPUs during CPU offline. Later when the CPU is online,
> the managed IRQ will be re-enabled on this CPU. The managed
> IRQ can be used to reduce the IRQ migration during CPU hotplug.
>
> Before putting the CPU offline, the number of the already allocated
> IRQs on this offlining CPU will be compared to the total number

The usage of IRQs and vectors is slightly confusing all over the
place.

> of available IRQ vectors on the remaining online CPUs. If there are
> not enough slots for these IRQs to be migrated to, the CPU offline
> will be terminated. However, currently the code treats the managed
> IRQ as migratable, which is not true, and brings false negatives
> during CPU hotplug and hibernation stress tests.

Your assumption that managed interrupts cannot be migrated is only
correct when the managed interrupt's affinity mask has exactly one
online target CPU. Otherwise the interrupt is migrated to one of the
other online CPUs in the affinity mask.

Though that does not affect the migrateability calculation, because
when a managed interrupt has an affinity mask with more than one
target CPU set, the vectors on the currently untargeted CPUs are
already reserved and accounted for in matrix->global_available. IOW,
migrateability for such managed interrupts is already guaranteed.
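To illustrate the accounting described above, here is a rough,
hypothetical sketch (not the actual kernel irq_matrix code; the struct
and function names are made up for illustration): before offlining a
CPU, only the non-managed vectors allocated on it need fresh slots
elsewhere, because a multi-target managed interrupt's vectors on its
other target CPUs are already reserved and thus already subtracted
from the global available count.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-offline-CPU vector accounting, for illustration only. */
struct vec_stats {
	unsigned int allocated;		/* vectors allocated on the offlining CPU */
	unsigned int managed_multi;	/* of those, managed with multi-CPU masks:
					   their slots elsewhere are pre-reserved */
	unsigned int global_available;	/* free vectors on remaining online CPUs */
};

static bool can_offline_cpu(const struct vec_stats *s)
{
	/* Only non-managed vectors compete for free slots elsewhere. */
	unsigned int to_migrate = s->allocated - s->managed_multi;

	return to_migrate <= s->global_available;
}
```

Treating the managed vectors as migratable would inflate `to_migrate`
and spuriously veto the offline, which is the false negative the
changelog describes.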

I'll amend the changelog to make this clear.

Thanks,

        tglx
