Message-ID: <CAK8P3a0uy8JcHP_G_ebz61AMB-Mx6jr5+vuzJHmWbDCajTdTfQ@mail.gmail.com>
Date:   Thu, 14 Apr 2022 13:41:11 +0200
From:   Arnd Bergmann <arnd@...db.de>
To:     Libo Chen <libo.chen@...cle.com>
Cc:     Arnd Bergmann <arnd@...db.de>,
        Randy Dunlap <rdunlap@...radead.org>,
        gregkh <gregkh@...uxfoundation.org>,
        Masahiro Yamada <masahiroy@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Kbuild mailing list <linux-kbuild@...r.kernel.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        linux-arch <linux-arch@...r.kernel.org>
Subject: Re: [PATCH RESEND 1/1] lib/Kconfig: remove DEBUG_PER_CPU_MAPS
 dependency for CPUMASK_OFFSTACK

On Wed, Apr 13, 2022 at 11:50 PM Libo Chen <libo.chen@...cle.com> wrote:
> On 4/13/22 13:52, Arnd Bergmann wrote:
> >>> Yes, it is. I don't know what the problem is...
> >> Masahiro explained that CPUMASK_OFFSTACK can only be selected by
> >> other options, not by users, if DEBUG_PER_CPU_MAPS is not enabled.
> >> This doesn't seem to be what we want.
> > I think the correct way to do it is to follow x86 and powerpc and tie
> > CPUMASK_OFFSTACK to "large" values of CONFIG_NR_CPUS.
> > For smaller values of NR_CPUS, the onstack masks are obviously
> > cheaper; we just need to decide where the cut-off point is.
>
> I agree. It appears that enabling CPUMASK_OFFSTACK breaks kernel builds
> on some architectures such as parisc and nios2, as reported by the
> kernel test robot. Maybe it makes sense to use DEBUG_PER_CPU_MAPS as
> some kind of guard on CPUMASK_OFFSTACK.

NIOS2 does not support SMP builds at all, so it should never be possible to
select CPUMASK_OFFSTACK there. We may want to guard
DEBUG_PER_CPU_MAPS by adding a 'depends on SMP' in order to
prevent it from getting selected.
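
Something along these lines should do it (a sketch only, assuming the
entry lives in lib/Kconfig.debug and modulo whatever dependencies it
already carries):

    config DEBUG_PER_CPU_MAPS
            bool "Debug access to per_cpu maps"
            depends on DEBUG_KERNEL
            # keep the option out of !SMP configs
            depends on SMP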

For PARISC, the largest configuration is 32-way SMP, so CPUMASK_OFFSTACK
is clearly pointless there as well, even though it should technically be
possible to support it. What is the build error on parisc?

> > On x86, the onstack masks can be used for normal SMP builds with
> > up to 512 CPUs, while CONFIG_MAXSMP=y raises the limit to 8192
> > CPUs and selects CPUMASK_OFFSTACK.
> > PowerPC does it the other way round, selecting CPUMASK_OFFSTACK
> > implicitly whenever NR_CPUS is set to 8192 or more.
> >
> > I think we can easily do the same as powerpc on arm64. With the
> I am leaning more towards x86's way, because even NR_CPUS=160 is too
> expensive for 4-core arm64 VMs according to ApacheBench. I highly doubt
> that there is a good cut-off point that makes everybody happy (or at
> least not unhappy).

It seems surprising that you would see any improvement from offstack masks
with NR_CPUS=160: that is just three 64-bit words worth of data, but it
requires allocating the mask dynamically, which takes way more memory to
initialize.
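
To make the two approaches concrete, a rough Kconfig sketch (the arm64
placement and the 1024 cut-off below are purely illustrative, not a
tested patch):

    # x86-style: a separate big-SMP option raises the NR_CPUS limit and
    # selects the offstack masks (roughly what MAXSMP does today)
    config MAXSMP
            bool "Enable Maximum number of SMP Processors and NUMA Nodes"
            depends on X86_64 && SMP
            select CPUMASK_OFFSTACK

    # powerpc-style: tie it directly to the NR_CPUS value, e.g. in
    # arch/arm64/Kconfig
    config ARM64
            def_bool y
            select CPUMASK_OFFSTACK if NR_CPUS >= 1024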

> > ApacheBench test you cite in the patch description, what is the
> > value of NR_CPUS at which you start seeing a noticeable
> > benefit for offstack masks? Can you do the same test for
> > NR_CPUS=1024 or 2048?
>
> As mentioned above, a good cut-off point moves depending on the actual
> number of CPUs. But yeah, I can do the same test for 1024 or even
> smaller NR_CPUS values on the same 64-core arm64 VM setup.

If you see an improvement for small NR_CPUS values with offstack masks,
it's possible that the actual difference is something else entirely and we
can just make the on-stack case faster; perhaps the cause is something
about cacheline alignment or inlining decisions with your specific kernel
config.

Are you able to compare the 'perf report' output between runs with either
size to see where the extra time gets spent?

        Arnd
