Message-ID: <20170601162338.23540-2-aryabinin@virtuozzo.com>
Date: Thu, 1 Jun 2017 19:23:36 +0300
From: Andrey Ryabinin <aryabinin@...tuozzo.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
<kasan-dev@...glegroups.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, <x86@...nel.org>
Subject: [PATCH 2/4] x86/kasan: don't allocate extra shadow memory

We used to read several bytes of the shadow memory in advance.
Therefore, additional shadow memory was mapped to prevent a crash in
case a speculative load happened near the end of the mapped shadow
memory.

Now that such speculative loads are gone, we no longer need to map the
additional shadow memory.
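
For context, here is a minimal userspace sketch of the address-to-shadow
mapping behind this change. The 1:8 scale and the KASAN_SHADOW_OFFSET
value match the kernel's x86_64 scheme, but mem_to_shadow() and the
sample addresses below are illustrative stand-ins, not kernel code:

#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* one shadow byte covers 8 bytes */
#define KASAN_SHADOW_OFFSET 0xdffffc0000000000UL	/* x86_64 value */

/* Illustrative analogue of the kernel's kasan_mem_to_shadow(). */
static unsigned long mem_to_shadow(unsigned long addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	/* Hypothetical range in the kernel's direct mapping. */
	unsigned long start = 0xffff888000000000UL;
	unsigned long end   = 0xffff888040000000UL;

	/*
	 * The old fastpath could speculatively read a shadow byte just
	 * past mem_to_shadow(end), which is why map_range() used to
	 * populate one extra byte there (vmemmap_populate() rounds the
	 * range up to whole pages anyway).
	 */
	printf("shadow range: [%#lx, %#lx)\n",
	       mem_to_shadow(start), mem_to_shadow(end));
	return 0;
}
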
Signed-off-by: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: x86@...nel.org
---
arch/x86/mm/kasan_init_64.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0c7d8129bed6..39231a9c981a 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -23,12 +23,7 @@ static int __init map_range(struct range *range)
 	start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->start));
 	end = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->end));
 
-	/*
-	 * end + 1 here is intentional. We check several shadow bytes in advance
-	 * to slightly speed up fastpath. In some rare cases we could cross
-	 * boundary of mapped shadow, so we just map some more here.
-	 */
-	return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+	return vmemmap_populate(start, end, NUMA_NO_NODE);
 }
 
 static void __init clear_pgds(unsigned long start,
--
2.13.0