Message-Id: <1494897409-14408-12-git-send-email-iamjoonsoo.kim@lge.com>
Date:   Tue, 16 May 2017 10:16:49 +0900
From:   js1304@...il.com
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Alexander Potapenko <glider@...gle.com>,
        Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>, kernel-team@....com,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH v1 11/11] mm/kasan: change the order of shadow memory check

From: Joonsoo Kim <iamjoonsoo.kim@....com>

The majority of accesses in the kernel are accesses to slab objects.
In the current implementation, we check two types of shadow memory
in this case, and that causes a performance regression.

kernel build (2048 MB QEMU)
base vs per-page
219 sec vs 238 sec

Although the current per-page shadow implementation is conceptually
easy to understand, this performance regression is too severe, so
this patch changes the check order from per-page shadow followed by
per-byte shadow to per-byte shadow followed by per-page shadow.
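
For illustration, the reordered fast path looks as follows; this is
a minimal sketch that mirrors the post-patch
check_memory_region_inline() in the diff below:

static __always_inline void check_memory_region_inline(unsigned long addr,
						size_t size, bool write,
						unsigned long ret_ip)
{
	/* Per-byte shadow first: covers the common slab-object case. */
	if (likely(!memory_is_poisoned(addr, size)))
		return;

	/*
	 * Per-page shadow second: a zero value means the per-byte shadow
	 * read above was stale and the access is in fact valid.
	 */
	if (!pshadow_val(addr, size))
		return;

	check_memory_region_slow(addr, size, write, ret_ip);
}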

This change increases the chance of hitting the stale TLB problem,
since the mapping for the per-byte shadow isn't fully synchronized
and we now access the whole region of that shadow memory first.
However, this doesn't hurt correctness, so the new implementation
is safe. The following is the result with this patch applied.

kernel build (2048 MB QEMU)
base vs per-page vs this patch
219 sec vs 238 sec vs 222 sec

Performance is restored.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
 mm/kasan/kasan.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e5612be..76c1c37 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -587,14 +587,6 @@ static __always_inline u8 pshadow_val(unsigned long addr, size_t size)
 
 static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
 {
-	u8 shadow_val = pshadow_val(addr, size);
-
-	if (!shadow_val)
-		return false;
-
-	if (shadow_val != KASAN_PER_PAGE_BYPASS)
-		return true;
-
 	if (__builtin_constant_p(size)) {
 		switch (size) {
 		case 1:
@@ -649,6 +641,9 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	if (likely(!memory_is_poisoned(addr, size)))
 		return;
 
+	if (!pshadow_val(addr, size))
+		return;
+
 	check_memory_region_slow(addr, size, write, ret_ip);
 }
 
-- 
2.7.4
