Message-ID: <ZQiLX0W2Tcr+wdJT@gmail.com>
Date:   Mon, 18 Sep 2023 19:39:43 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Matteo Rizzo <matteorizzo@...gle.com>
Cc:     "Lameter, Christopher" <cl@...amperecomputing.com>,
        Dave Hansen <dave.hansen@...el.com>, penberg@...nel.org,
        rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, vbabka@...e.cz,
        roman.gushchin@...ux.dev, 42.hyeyoo@...il.com,
        keescook@...omium.org, linux-kernel@...r.kernel.org,
        linux-doc@...r.kernel.org, linux-mm@...ck.org,
        linux-hardening@...r.kernel.org, tglx@...utronix.de,
        mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
        x86@...nel.org, hpa@...or.com, corbet@....net, luto@...nel.org,
        peterz@...radead.org, jannh@...gle.com, evn@...gle.com,
        poprdi@...gle.com, jordyzomer@...gle.com,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [RFC PATCH 00/14] Prevent cross-cache attacks in the SLUB
 allocator


* Matteo Rizzo <matteorizzo@...gle.com> wrote:

> On Fri, 15 Sept 2023 at 18:30, Lameter, Christopher
> <cl@...amperecomputing.com> wrote:
> >
> > On Fri, 15 Sep 2023, Dave Hansen wrote:
> >
> > > What's the cost?
> >
> > The only thing that I see is 1-2% on kernel compilations (and "more on
> > machines with lots of cores")?
> 
> I used kernel compilation time (wall clock time) as a benchmark while
> preparing the series. Lower is better.
> 
> Intel Skylake, 112 cores:
> 
>       LABEL    | COUNT |   MIN   |   MAX   |   MEAN  |  MEDIAN | STDDEV
> ---------------+-------+---------+---------+---------+---------+--------
> SLAB_VIRTUAL=n | 150   | 49.700s | 51.320s | 50.449s | 50.430s | 0.29959
> SLAB_VIRTUAL=y | 150   | 50.020s | 51.660s | 50.880s | 50.880s | 0.30495
>                |       | +0.64%  | +0.66%  | +0.85%  | +0.89%  | +1.79%
> 
> AMD Milan, 256 cores:
> 
>       LABEL    | COUNT |   MIN   |   MAX   |   MEAN  |  MEDIAN | STDDEV
> ---------------+-------+---------+---------+---------+---------+--------
> SLAB_VIRTUAL=n | 150   | 25.480s | 26.550s | 26.065s | 26.055s | 0.23495
> SLAB_VIRTUAL=y | 150   | 25.820s | 27.080s | 26.531s | 26.540s | 0.25974
>                |       | +1.33%  | +2.00%  | +1.79%  | +1.86%  | +10.55%
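
(The bottom row of each table is the per-column relative change, e.g.
the +0.85% mean on Skylake is (50.880 - 50.449) / 50.449, i.e. about
0.43 extra seconds per build; on Milan it's about 0.47s.)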

That's sadly a rather substantial overhead for a compiler/linker workload 
that is predominantly user-space: a kernel build is about 90% user-time 
and 10% system-time:

   $ perf stat --null make -j64 vmlinux
   ...

   Performance counter stats for 'make -j64 vmlinux':

        59.840704481 seconds time elapsed

      2000.774537000 seconds user
       219.138280000 seconds sys
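
   (From those numbers: 2000.77s user / (2000.77s + 219.14s) total CPU
   time = ~90.1% user, ~9.9% sys.)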

What's the split of the increase in overhead due to SLAB_VIRTUAL=y, between 
user-space execution and kernel-space execution?
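
(A minimal way to get that split, assuming the same benchmark setup as 
above: rerun the build under perf stat on both kernels, cleaning the 
tree each time so every run does real work, e.g.:

   $ make clean && perf stat --null make -j64 vmlinux

and compare the "seconds user" / "seconds sys" lines between 
SLAB_VIRTUAL=n and =y. If the regression is almost entirely sys time, 
it's direct allocator overhead; if user time grows as well, the virtual 
slab mappings are likely adding TLB pressure that bleeds into user-space 
execution.)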

Thanks,

	Ingo
