Date: Wed, 9 Sep 2015 14:40:42 +0800
From: "long.wanglong" <long.wanglong@...wei.com>
To: Xishi Qiu <qiuxishi@...wei.com>, <ryabinin.a.a@...il.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Andrey Konovalov <adech.fo@...il.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Michal Marek <mmarek@...e.cz>, <zhongjiang@...wei.com>,
Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Wang Long <long.wanglong@...wei.com>
Subject: Re: [PATCH V2] kasan: fix last shadow judgement in memory_is_poisoned_16()
On 2015/9/8 20:12, Xishi Qiu wrote:
> The shadow that corresponds to a 16-byte memory access may span 2 or 3
> shadow bytes. If the memory is 8-byte aligned, the shadow spans only 2
> bytes, so checking "shadow_first_bytes" is enough and there is no need to
> call "memory_is_poisoned_1(addr + 15);". But the test
> "if (likely(!last_byte))" is the wrong check for that case.
>
> e.g. for addr = 0, last_byte = 15 & KASAN_SHADOW_MASK = 7, so !last_byte
> is false and the code falls through to "memory_is_poisoned_1(addr + 15);"
> even though the access is 8-byte aligned.
>
> Signed-off-by: Xishi Qiu <qiuxishi@...wei.com>
> ---
> mm/kasan/kasan.c | 3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 7b28e9c..8da2114 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -135,12 +135,11 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
> 
>  	if (unlikely(*shadow_addr)) {
>  		u16 shadow_first_bytes = *(u16 *)shadow_addr;
> -		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
> 
>  		if (unlikely(shadow_first_bytes))
>  			return true;
> 
> -		if (likely(!last_byte))
> +		if (likely(IS_ALIGNED(addr, 8)))
>  			return false;
> 
>  		return memory_is_poisoned_1(addr + 15);
>
Hi,

I also noticed this problem; how about another way to fix it:
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 5d65d06..6a20dda 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -140,7 +140,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
 		if (unlikely(shadow_first_bytes))
 			return true;
 
-		if (likely(!last_byte))
+		if (likely(last_byte >= 7))
 			return false;
 
 		return memory_is_poisoned_1(addr + 15);

This method keeps the code consistent; compare memory_is_poisoned_8, for example:
static __always_inline bool memory_is_poisoned_8(unsigned long addr)
{
	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);

	if (unlikely(*shadow_addr)) {
		if (memory_is_poisoned_1(addr + 7))
			return true;

		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
			return false;

		return unlikely(*(u8 *)shadow_addr);
	}

	return false;
}
Otherwise, we should also use the IS_ALIGNED macro in memory_is_poisoned_8.
Best Regards
Wang Long