Message-ID: <jbmfvznqtzmeyejegflmznwfj7lzlshpmek7jgy7drjfla2btb@bqjufhxforw2>
Date: Mon, 17 Nov 2025 18:26:19 +0000
From: Maciej Wieczór-Retman <m.wieczorretman@...me>
To: Marco Elver <elver@...gle.com>
Cc: xin@...or.com, peterz@...radead.org, kaleshsingh@...gle.com, kbingham@...nel.org, akpm@...ux-foundation.org, nathan@...nel.org, ryabinin.a.a@...il.com, dave.hansen@...ux.intel.com, bp@...en8.de, morbo@...gle.com, jeremy.linton@....com, smostafa@...gle.com, kees@...nel.org, baohua@...nel.org, vbabka@...e.cz, justinstitt@...gle.com, wangkefeng.wang@...wei.com, leitao@...ian.org, jan.kiszka@...mens.com, fujita.tomonori@...il.com, hpa@...or.com, urezki@...il.com, ubizjak@...il.com, ada.coupriediaz@....com, nick.desaulniers+lkml@...il.com, ojeda@...nel.org, brgerst@...il.com, pankaj.gupta@....com, glider@...gle.com, mark.rutland@....com, trintaeoitogc@...il.com, jpoimboe@...nel.org, thuth@...hat.com, pasha.tatashin@...een.com, dvyukov@...gle.com, jhubbard@...dia.com, catalin.marinas@....com, yeoreum.yun@....com, mhocko@...e.com, lorenzo.stoakes@...cle.com, samuel.holland@...ive.com, vincenzo.frascino@....com, bigeasy@...utronix.de, surenb@...gle.com, ardb@...nel.org,
Liam.Howlett@...cle.com, nicolas.schier@...ux.dev, ziy@...dia.com, kas@...nel.org, tglx@...utronix.de, mingo@...hat.com, broonie@...nel.org, corbet@....net, andreyknvl@...il.com, maciej.wieczor-retman@...el.com, david@...hat.com, maz@...nel.org, rppt@...nel.org, will@...nel.org, luto@...nel.org, kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org, x86@...nel.org, linux-kbuild@...r.kernel.org, linux-mm@...ck.org, llvm@...ts.linux.dev, linux-doc@...r.kernel.org
Subject: Re: [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow
On 2025-11-10 at 15:49:22 +0100, Marco Elver wrote:
>On Wed, 29 Oct 2025 at 21:11, Maciej Wieczor-Retman
><m.wieczorretman@...me> wrote:
>>
>> From: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
>>
>> While tag-based KASAN generally adopts an arithmetic bit shift to
>> convert a memory address to a shadow memory address, it doesn't work for
>> all cases on x86. Testing different shadow memory offsets proved that
>> either 4 or 5 level paging didn't work correctly or inline mode ran into
>> issues. Thus the best working scheme is the logical bit shift and
>> non-canonical shadow offset that x86 uses for generic KASAN, of course
>> adjusted for the increased granularity from 8 to 16 bytes.
>>
>> Add an arch specific implementation of kasan_mem_to_shadow() that uses
>> the logical bit shift.
>>
>> The non-canonical hook tries to calculate whether an address came from
>> kasan_mem_to_shadow(). First it checks whether this address fits into
>> the legal set of values possible to output from the mem to shadow
>> function.
>>
>> Tie both generic and tag-based x86 KASAN modes to the address range
>> check associated with generic KASAN.
>>
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
>> ---
>> Changelog v4:
>> - Add this patch to the series.
>>
>> arch/x86/include/asm/kasan.h | 7 +++++++
>> mm/kasan/report.c | 5 +++--
>> 2 files changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
>> index 375651d9b114..2372397bc3e5 100644
>> --- a/arch/x86/include/asm/kasan.h
>> +++ b/arch/x86/include/asm/kasan.h
>> @@ -49,6 +49,13 @@
>> #include <linux/bits.h>
>>
>> #ifdef CONFIG_KASAN_SW_TAGS
>> +static inline void *__kasan_mem_to_shadow(const void *addr)
>> +{
>> + return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>> + + KASAN_SHADOW_OFFSET;
>> +}
>
>You're effectively undoing "kasan: sw_tags: Use arithmetic shift for
>shadow computation" for x86 - why?
>This function needs a comment explaining this.
Sure, I'll add a comment here.
While the signed (arithmetic shift) approach works well for arm64 and
risc-v, it doesn't play well with x86, which wants to keep the top bit
for canonicality checks.

Trying to keep the signed mem-to-shadow scheme working for all corner
cases on all configs always turned into ugly workarounds somewhere.
There is a mechanism that, on a fault, guesses whether the faulting
address came from a KASAN check, and some address formats never worked
when I validated 4 and 5 level paging. One approach to keeping the
signed mem-to-shadow was to also use a non-canonical KASAN shadow
offset. As far as I remember that worked great for paging (some 5-level
fixup code could be removed), but it made the inline mode either hard
to implement or much slower due to extended checks.
>Also, the commit message just says "it doesn't work for all cases" - why?
Fair enough, that was a bit terse. I'll update the patch message with an
explanation.