Date:	Tue, 25 Nov 2014 16:18:06 +0300
From:	Andrey Ryabinin <a.ryabinin@...sung.com>
To:	Dmitry Chernenkov <dmitryc@...gle.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	Konstantin Serebryany <kcc@...gle.com>,
	Andrey Konovalov <adech.fo@...il.com>,
	Yuri Gribov <tetra2005@...il.com>,
	Konstantin Khlebnikov <koct9i@...il.com>,
	Sasha Levin <sasha.levin@...cle.com>,
	Christoph Lameter <cl@...ux.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Dave Hansen <dave.hansen@...el.com>,
	Andi Kleen <andi@...stfloor.org>,
	Vegard Nossum <vegard.nossum@...il.com>,
	"H. Peter Anvin" <hpa@...or.com>, Dave Jones <davej@...hat.com>,
	"x86@...nel.org" <x86@...nel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v7 08/12] mm: slub: add kernel address sanitizer support
 for slub allocator

On 11/25/2014 03:17 PM, Dmitry Chernenkov wrote:
> FYI, when I backported Kasan to 3.14, kasan_mark_slab_padding()
> sometimes generated a negative padding size.

I don't see how this could happen if the pointers passed to kasan_mark_slab_padding() are correct.

Negative padding would mean that (object + s->size) crosses the slab page boundary.
That is either a SLUB allocator bug (very unlikely), or a sign that the pointers
passed to kasan_mark_slab_padding() are incorrect.

Or maybe I'm missing something?
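
To make the underflow concrete, here is a minimal userspace sketch of the
pre-patch arithmetic. The page address, s->size, and the deliberately bogus
object pointer are illustrative stand-ins, not values from this thread:

/* Sketch of the old padding computation; all values are stand-ins. */
#include <stdio.h>

#define PAGE_SIZE               4096UL
#define KASAN_SHADOW_SCALE_SIZE 8UL

static unsigned long round_up_to(unsigned long x, unsigned long align)
{
        return (x + align - 1) & ~(align - 1);
}

int main(void)
{
        unsigned long page_start = 0x1000;      /* hypothetical page address */
        unsigned long page_end = page_start + PAGE_SIZE;
        unsigned long s_size = 96;              /* hypothetical s->size */

        /* A bogus 'object' whose end crosses the page boundary: */
        unsigned long object = page_end - 32;
        unsigned long object_end = object + s_size;     /* past page_end */

        unsigned long padding_start = round_up_to(object_end,
                                                  KASAN_SHADOW_SCALE_SIZE);
        size_t size = page_end - padding_start;         /* underflows */

        printf("object_end=%#lx page_end=%#lx size=%ld\n",
               object_end, page_end, (long)size);       /* prints size=-64 */
        return 0;
}

With a correct object pointer, object_end never exceeds page_end and the
subtraction stays non-negative.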

> This started
> working when the patch below was applied:
> 
> @@ -262,12 +264,11 @@ void kasan_free_pages(struct page *page, unsigned int order)
>  void kasan_mark_slab_padding(struct kmem_cache *s, void *object,
>                               struct page *page)
>  {
> -        unsigned long object_end = (unsigned long)object + s->size;
> -        unsigned long padding_start = round_up(object_end,
> -                                        KASAN_SHADOW_SCALE_SIZE);
> -        unsigned long padding_end = (unsigned long)page_address(page) +
> -                                        (PAGE_SIZE << compound_order(page));
> -        size_t size = padding_end - padding_start;
> +        unsigned long page_start = (unsigned long) page_address(page);
> +        unsigned long page_end = page_start + (PAGE_SIZE << compound_order(page));
> +        unsigned long padding_start = round_up(page_end - s->reserved,
> +                                        KASAN_SHADOW_SCALE_SIZE);
> +        size_t size = page_end - padding_start;
> 
>          kasan_poison_shadow((void *)padding_start, size, KASAN_SLAB_PADDING);
>  }
> 
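For what it's worth, the patched formula cannot underflow by construction:
page_end is page-aligned, so round_up(page_end - s->reserved,
KASAN_SHADOW_SCALE_SIZE) can never pass page_end. A sketch with stand-in
values (the s->reserved of 12 is hypothetical):

/* Sketch of the patched padding computation; all values are stand-ins. */
#include <stdio.h>

#define PAGE_SIZE               4096UL
#define KASAN_SHADOW_SCALE_SIZE 8UL

static unsigned long round_up_to(unsigned long x, unsigned long align)
{
        return (x + align - 1) & ~(align - 1);
}

int main(void)
{
        unsigned long page_start = 0x1000;      /* hypothetical page address */
        unsigned long page_end = page_start + PAGE_SIZE;
        unsigned long reserved = 12;            /* hypothetical s->reserved */

        unsigned long padding_start = round_up_to(page_end - reserved,
                                                  KASAN_SHADOW_SCALE_SIZE);
        unsigned long size = page_end - padding_start;

        printf("padding=[%#lx, %#lx) size=%lu\n",
               padding_start, page_end, size);  /* size=8, never negative */
        return 0;
}
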
> Also, in kasan_slab_free() you poison the shadow with FREE not just for the
> object space, but also for the redzones. This is inefficient, and it will
> mistake a right out-of-bounds error on the next object for a use-after-free.
> This is fixed here
> https://github.com/google/kasan/commit/4b3238be392ba0bc56bbc934ac545df3ff840782
> , please patch.
> 

Makes sense.
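
The linked commit isn't quoted in the thread, but the idea can be modeled in
userspace: on free, poison only the object's shadow granules with the FREE
pattern and leave the redzone shadow untouched, so a later out-of-bounds hit
on the redzone still reports as OOB rather than use-after-free. The shadow
codes and layout below are stand-ins, not the kernel's:

/* Userspace model of free-time poisoning; codes and sizes are stand-ins. */
#include <stdio.h>
#include <string.h>

#define KASAN_KMALLOC_REDZONE 0xFC      /* stand-in shadow codes */
#define KASAN_KMALLOC_FREE    0xFB

static unsigned char shadow[16];        /* one shadow byte per 8-byte granule */

int main(void)
{
        size_t object_size = 64, redzone = 16;  /* stand-in object layout */
        size_t obj_granules = object_size / 8;
        size_t rz_granules = redzone / 8;

        /* At allocation time: [object][redzone] */
        memset(shadow, 0, obj_granules);                        /* accessible */
        memset(shadow + obj_granules, KASAN_KMALLOC_REDZONE, rz_granules);

        /* On free: poison only the object, not the trailing redzone. */
        memset(shadow, KASAN_KMALLOC_FREE, obj_granules);

        for (size_t i = 0; i < obj_granules + rz_granules; i++)
                printf("%02x ", shadow[i]);     /* fb ... fb fc fc */
        printf("\n");
        return 0;
}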


> 
> LGTM
> 


