Message-ID: <DF4PR84MB0089ACE8F082A1293A7BACACF45F0@DF4PR84MB0089.NAMPRD84.PROD.OUTLOOK.COM>
Date:	Thu, 9 Jun 2016 16:54:57 +0000
From:	"Luruo, Kuthonuzo" <kuthonuzo.luruo@....com>
To:	Alexander Potapenko <glider@...gle.com>
CC:	Andrey Ryabinin <aryabinin@...tuozzo.com>,
	Dmitriy Vyukov <dvyukov@...gle.com>,
	Christoph Lameter <cl@...ux.com>,
	"penberg@...nel.org" <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	kasan-dev <kasan-dev@...glegroups.com>,
	LKML <linux-kernel@...r.kernel.org>,
	"ynorov@...iumnetworks.com" <ynorov@...iumnetworks.com>
Subject: RE: [PATCH v5 1/2] mm, kasan: improve double-free detection

> > Currently, KASAN may fail to detect concurrent deallocations of the same
> > object due to a race in kasan_slab_free(). This patch makes double-free
> > detection more reliable by serializing access to KASAN object metadata.
> > New functions kasan_meta_lock() and kasan_meta_unlock() are provided to
> > lock/unlock per-object metadata. Double-free errors are now reported via
> > kasan_report().
> >
> > Per-object lock concept from suggestion/observations by Dmitry Vyukov.
> Note I've sent out a patch that enables stackdepot support in SLUB.
> I'll probably need to wait till your patch lands and add locking to SLUB as well.

My patch can wait; it can be rebased and resent for consideration by
maintainers/reviewers after your patch has been reviewed.
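
For reference, the locking idea is roughly as follows. This is only an
illustrative sketch, not the actual patch code: the KASAN_META_LOCK_BIT name,
the unsigned long layout of union kasan_shadow_meta, and the use of
bit_spin_lock() are assumptions for illustration (get_shadow_meta() is the
per-object lookup helper from the patch):

#include <linux/bit_spinlock.h>

#define KASAN_META_LOCK_BIT     0       /* assumed lock bit position */

union kasan_shadow_meta {
        unsigned long data;             /* object state bits + lock bit */
};

void kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
{
        union kasan_shadow_meta *shadow_meta = get_shadow_meta(alloc_info);

        /* spin on the lock bit embedded in the per-object shadow metadata */
        bit_spin_lock(KASAN_META_LOCK_BIT, &shadow_meta->data);
}

void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
{
        union kasan_shadow_meta *shadow_meta = get_shadow_meta(alloc_info);

        bit_spin_unlock(KASAN_META_LOCK_BIT, &shadow_meta->data);
}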

> > +void kasan_init_object(struct kmem_cache *cache, void *object)
> > +{
> > +       if (cache->flags & SLAB_KASAN) {
> > +               struct kasan_alloc_meta *allocp = get_alloc_info(cache, object);
> > +               union kasan_shadow_meta *shadow_meta =
> > +                       get_shadow_meta(allocp);
> > +
> > +               __memset(allocp, 0, sizeof(*allocp));
> I think we need to initialize the lock first, then lock it in order to
> touch *allocp.
> > +               shadow_meta->data = KASAN_KMALLOC_META;
> Shouldn't this be a release store?

The object at this point is being initialized by SLAB off a newly allocated
page/slab. Concurrent access to the embryonic object is not expected, so I
don't think locking here is of any benefit...
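
For the record, if the lock were honored here, the ordering you suggest would
look roughly like this. Sketch only: get_alloc_info()/get_shadow_meta()/
kasan_meta_lock()/kasan_meta_unlock() are the patch's helpers, and the lock
living in the shadow word is the same assumption as in the sketch above:

void kasan_init_object(struct kmem_cache *cache, void *object)
{
        if (cache->flags & SLAB_KASAN) {
                struct kasan_alloc_meta *allocp = get_alloc_info(cache, object);
                union kasan_shadow_meta *shadow_meta = get_shadow_meta(allocp);

                /* initialize the shadow word (and thus the lock bit) first */
                shadow_meta->data = KASAN_KMALLOC_META;
                /* then take the lock before touching *allocp */
                kasan_meta_lock(allocp);
                __memset(allocp, 0, sizeof(*allocp));
                kasan_meta_unlock(allocp);
        }
}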

> >                 default:
> > +                       pr_err("invalid allocation state!\n");
> I suggest you also print the object pointer here.

ok.

> > +
> >  struct kasan_alloc_meta {
> > +       u32 alloc_size : 24;
> Why reduce the alloc size?

Thought to save some bits while still covering the maximum object size
KASAN instruments. But now, with the spectre of OOB writes, header real
estate suddenly seems a lot less valuable ;-). Restoring it to a full u32
would be fine.
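
For the arithmetic: a 24-bit field caps the recorded size at 2^24 = 16 MiB,
which is still above KMALLOC_MAX_SIZE on typical configs; that was the
rationale. Illustrative layout only, the neighbouring field width is an
assumption:

struct kasan_alloc_meta {
        u32 state : 8;          /* assumed neighbouring bits */
        u32 alloc_size : 24;    /* caps recorded size at 2^24 = 16 MiB */
        /* alternative per review: drop the bitfield, use a plain u32 */
};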

> >         if (!(cache->flags & SLAB_KASAN))
> >                 return;
> > -       switch (alloc_info->state) {
> > +       if (info->access_size)
> > +               kasan_meta_lock(alloc_info);
> In which cases can info->access_size be zero? I guess even in that case
> we need to lock the metadata.

For a double-free error, the access size is zero and the lock is already held.
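
Roughly, the flow is the following. Sketch only, with assumed state constant
names and simplified return handling: on the free path the lock is taken
before the state check, so when a double free is caught the report goes out
with access_size == 0 while the lock is still held, and the report path must
not try to take it again:

bool kasan_slab_free(struct kmem_cache *cache, void *object)
{
        struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);

        kasan_meta_lock(alloc_info);
        if (alloc_info->state != KASAN_STATE_ALLOC) {
                /* double/invalid free: report with access_size == 0;
                 * the lock is already held, so the report path skips
                 * locking on this path */
                kasan_report((unsigned long)object, 0, false, _RET_IP_);
                kasan_meta_unlock(alloc_info);
                return true;    /* tell the allocator to skip this free */
        }
        alloc_info->state = KASAN_STATE_FREE;
        kasan_meta_unlock(alloc_info);
        /* rest of the normal free path (poisoning, quarantine) elided */
        return false;
}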

Thank you very much for reviewing the patch!

Kuthonuzo
