Message-ID: <Z_jfGlOEb4Bjl3vO@gmail.com>
Date: Fri, 11 Apr 2025 11:21:30 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Kevin Brodsky <kevin.brodsky@....com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mark Brown <broonie@...nel.org>,
	Catalin Marinas <catalin.marinas@....com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	David Hildenbrand <david@...hat.com>,
	Ira Weiny <ira.weiny@...el.com>, Jann Horn <jannh@...gle.com>,
	Jeff Xu <jeffxu@...omium.org>, Joey Gouly <joey.gouly@....com>,
	Kees Cook <kees@...nel.org>,
	Linus Walleij <linus.walleij@...aro.org>,
	Andy Lutomirski <luto@...nel.org>, Marc Zyngier <maz@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Pierre Langlois <pierre.langlois@....com>,
	Quentin Perret <qperret@...gle.com>,
	Rick Edgecombe <rick.p.edgecombe@...el.com>,
	"Mike Rapoport (IBM)" <rppt@...nel.org>,
	Ryan Roberts <ryan.roberts@....com>,
	Thomas Gleixner <tglx@...utronix.de>, Will Deacon <will@...nel.org>,
	Matthew Wilcox <willy@...radead.org>,
	Qi Zheng <zhengqi.arch@...edance.com>,
	linux-arm-kernel@...ts.infradead.org, x86@...nel.org
Subject: Re: [RFC PATCH v4 00/18] pkeys-based page table hardening


* Kevin Brodsky <kevin.brodsky@....com> wrote:

> Performance
> ===========
> 
> Caveat: these numbers should be seen as a lower bound for the overhead
> of a real POE-based protection. The hardware checks added by POE are
> however not expected to incur significant extra overhead.
> 
> +-------------------+----------------------------------+------------------+---------------+
> | Benchmark         | Result Class                     | Without batching | With batching |
> +===================+==================================+==================+===============+
> | mmtests/kernbench | elsp-64                          |            0.20% |         0.20% |
> |                   | syst-64                          |            1.62% |         0.63% |
> |                   | user-64                          |           -0.04% |         0.05% |
> +-------------------+----------------------------------+------------------+---------------+
> | micromm/fork      | fork: p:1                        |      (R) 225.56% |        -0.07% |
> |                   | fork: p:512                      |      (R) 254.32% |         0.73% |
> +-------------------+----------------------------------+------------------+---------------+
> | micromm/munmap    | munmap: p:1                      |       (R) 24.49% |         4.29% |
> |                   | munmap: p:512                    |      (R) 161.47% |     (R) 6.06% |
> +-------------------+----------------------------------+------------------+---------------+
> | micromm/vmalloc   | fix_size_alloc_test: p:1, h:0    |       (R) 14.80% |    (R) 11.85% |
> |                   | fix_size_alloc_test: p:4, h:0    |       (R) 38.42% |    (R) 10.47% |
> |                   | fix_size_alloc_test: p:16, h:0   |       (R) 64.74% |     (R) 6.41% |
> |                   | fix_size_alloc_test: p:64, h:0   |       (R) 79.98% |     (R) 3.24% |
> |                   | fix_size_alloc_test: p:256, h:0  |       (R) 85.46% |     (R) 2.77% |
> |                   | fix_size_alloc_test: p:16, h:1   |       (R) 47.89% |         3.10% |
> |                   | fix_size_alloc_test: p:64, h:1   |       (R) 62.43% |         3.36% |
> |                   | fix_size_alloc_test: p:256, h:1  |       (R) 64.30% |     (R) 2.68% |
> |                   | random_size_alloc_test: p:1, h:0 |       (R) 74.94% |     (R) 3.13% |
> |                   | vm_map_ram_test: p:1, h:0        |       (R) 30.53% |    (R) 26.20% |
> +-------------------+----------------------------------+------------------+---------------+

So I had to look three times to figure out what the numbers mean: they 
are the extra overhead from this hardening feature, measured as a 
percentage of system time, right?

So "4.29%" means there's a 4.29% slowdown on that particular workload 
when the feature is enabled. Maybe add an explanation in the next 
iteration? :-)
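
(The way I'm reading them, purely an assumption on my part, is the usual 
relative slowdown, i.e. overhead% = (t_hardened - t_baseline) / t_baseline * 100. 
A trivial sketch with made-up runtimes, just to spell out the arithmetic:

	#include <stdio.h>

	int main(void)
	{
		double t_baseline = 10.00;	/* hypothetical runtime, feature off */
		double t_hardened = 10.43;	/* hypothetical runtime, feature on */

		printf("overhead: %.2f%%\n",
		       (t_hardened - t_baseline) / t_baseline * 100.0);
		return 0;
	}

which prints "overhead: 4.30%".)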

Thanks,

	Ingo
