Date:   Fri, 30 Apr 2021 09:10:04 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Nitesh Lal <nilal@...hat.com>,
        Jesse Brandeburg <jesse.brandeburg@...el.com>,
        "frederic\@kernel.org" <frederic@...nel.org>,
        "juri.lelli\@redhat.com" <juri.lelli@...hat.com>,
        Marcelo Tosatti <mtosatti@...hat.com>, abelits@...vell.com
Cc:     Robin Murphy <robin.murphy@....com>,
        "linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-api\@vger.kernel.org" <linux-api@...r.kernel.org>,
        "bhelgaas\@google.com" <bhelgaas@...gle.com>,
        "linux-pci\@vger.kernel.org" <linux-pci@...r.kernel.org>,
        "rostedt\@goodmis.org" <rostedt@...dmis.org>,
        "mingo\@kernel.org" <mingo@...nel.org>,
        "peterz\@infradead.org" <peterz@...radead.org>,
        "davem\@davemloft.net" <davem@...emloft.net>,
        "akpm\@linux-foundation.org" <akpm@...ux-foundation.org>,
        "sfr\@canb.auug.org.au" <sfr@...b.auug.org.au>,
        "stephen\@networkplumber.org" <stephen@...workplumber.org>,
        "rppt\@linux.vnet.ibm.com" <rppt@...ux.vnet.ibm.com>,
        "jinyuqi\@huawei.com" <jinyuqi@...wei.com>,
        "zhangshaokun\@hisilicon.com" <zhangshaokun@...ilicon.com>,
        netdev@...r.kernel.org, chris.friesen@...driver.com
Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

Nitesh,

On Thu, Apr 29 2021 at 17:44, Nitesh Lal wrote:

First of all: Nice analysis, well done!

> So to understand further what the problem was with the older kernel based
> on Jesse's description and whether it is still there I did some more
> digging. Following are some of the findings (kindly correct me if
> there is a gap in my understanding):
>
> Part 1: Why was there a problem with the older kernel?
> ------
> With a kernel built on top of the tag v4.0.0 (with Jesse's patch reverted
> and irqbalance disabled), if we observe the /proc/irq entries for ixgbe device IRQs
> then there are two things to note:
>
> # No separate effective affinity (since that was only introduced as part of
>   the 2017 IRQ re-work)
>   $ ls /proc/irq/86/
>     affinity_hint  node  p2p1  smp_affinity  smp_affinity_list  spurious
>
> # Multiple CPUs are set in the smp_affinity_list and the first CPU is CPU0:
>
>   $ /proc/irq/60/p2p1-TxRx-0
>     0,2,4,6,8,10,12,14,16,18,20,22
>
>   $ /proc/irq/61/p2p1-TxRx-1
>     0,2,4,6,8,10,12,14,16,18,20,22
>
>   $ /proc/irq/62/p2p1-TxRx-2
>     0,2,4,6,8,10,12,14,16,18,20,22
>      ...
>
>
> Now, if we read the commit message from Thomas's patch that was part of
> this IRQ re-work:
> fdba46ff:  x86/apic: Get rid of multi CPU affinity
> "
> ..
> 2) Experiments have shown that the benefit of multi CPU affinity is close
>    to zero and in some tests even worse than setting the affinity to a single
>    CPU.
>
> The reason for this is that the delivery targets the APIC with the lowest
> ID first and only if that APIC is busy (servicing an interrupt, i.e. ISR is
> not empty) it hands it over to the next APIC. In the conducted tests the
> vast majority of interrupts ends up on the APIC with the lowest ID anyway,
> so there is no natural spreading of the interrupts possible.
> "
>
> I think this explains why even if we have multiple CPUs in the SMP affinity
> mask the interrupts may only land on CPU0.
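
One way to see that concentration in practice is to watch the per-CPU
counters for those vectors while the device is under load, e.g. (a rough
illustration, reusing the IRQ names from the listing above):

  $ watch -d 'grep "p2p1-TxRx" /proc/interrupts'

Per the commit message quoted above, the expectation on such a kernel is
that the CPU0 column keeps growing while the other CPUs listed in
smp_affinity_list stay mostly idle.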

There are two issues in the pre-rework vector management:

  1) The allocation logic itself, which preferred lower numbered CPUs and
     did not try to spread out the vectors across CPUs. This was pretty
     much true for any APIC addressing mode.

  2) The multi CPU affinity support, if supported by the APIC
     mode. That's restricted to logical APIC addressing mode, which is
     available for non-X2APIC with up to 8 CPUs and with X2APIC only in
     cluster mode.

     All other addressing modes had a single CPU target selected under
     the hood, which due to #1 ended up on CPU0 most of the time, at
     least up to the point where CPU0 still had vectors available.

     Logical addressing mode with multiple target CPUs was also subject
     to #1, and due to the delivery logic most interrupts ended up on
     the lowest numbered CPU (APIC) anyway.
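
For comparison, on a post-rework kernel the single CPU which is actually
targeted is visible directly in /proc/irq via the effective affinity
files the rework introduced. Roughly (illustrative values, IRQ number
reused from the example above):

  $ cat /proc/irq/86/smp_affinity_list
  0,2,4,6,8,10,12,14,16,18,20,22
  $ cat /proc/irq/86/effective_affinity_list
  2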

Thanks,

        tglx
