Message-ID: <20E775CA4D599049A25800DE5799F6DD1F62744C@G4W3225.americas.hpqcorp.net>
Date: Sat, 7 May 2016 15:15:59 +0000
From: "Luruo, Kuthonuzo" <kuthonuzo.luruo@....com>
To: Yury Norov <ynorov@...iumnetworks.com>
CC: "aryabinin@...tuozzo.com" <aryabinin@...tuozzo.com>,
"glider@...gle.com" <glider@...gle.com>,
"dvyukov@...gle.com" <dvyukov@...gle.com>,
"cl@...ux.com" <cl@...ux.com>,
"penberg@...nel.org" <penberg@...nel.org>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"kasan-dev@...glegroups.com" <kasan-dev@...glegroups.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"klimov.linux@...il.com" <klimov.linux@...il.com>
Subject: RE: [PATCH v2 1/2] mm, kasan: improve double-free detection
Thank you for the review!
> > +
> > +/* acquire per-object lock for access to KASAN metadata. */
>
> I believe there's strong reason not to use standard spin_lock() or
> similar. I think it's proper place to explain it.
>
will do.
> > +void kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
> > +{
> > + union kasan_alloc_data old, new;
> > +
> > + preempt_disable();
>
> It's better to disable and enable preemption inside the loop
> on each iteration, to decrease contention.
>
ok, makes sense; will do.
> > + for (;;) {
> > + old.packed = READ_ONCE(alloc_info->data);
> > + if (unlikely(old.lock)) {
> > + cpu_relax();
> > + continue;
> > + }
> > + new.packed = old.packed;
> > + new.lock = 1;
> > + if (likely(cmpxchg(&alloc_info->data, old.packed, new.packed)
> > + == old.packed))
> > + break;
> > + }
> > +}
> > +
> > +/* release lock after a kasan_meta_lock(). */
> > +void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
> > +{
> > + union kasan_alloc_data alloc_data;
> > +
> > + alloc_data.packed = READ_ONCE(alloc_info->data);
> > + alloc_data.lock = 0;
> > + if (unlikely(xchg(&alloc_info->data, alloc_data.packed) !=
> > + (alloc_data.packed | 0x1U)))
> > + WARN_ONCE(1, "%s: lock not held!\n", __func__);
>
> Nitpick. It never happens in normal case, correct?. Why don't you place it under
> some developer config, or even leave at dev branch? The function will
> be twice shorter without it.
ok, will remove/shorten.
> > + alloc_data.packed = alloc_info->data;
> > + if (alloc_data.state == KASAN_STATE_ALLOC) {
> > + free_info = get_free_info(cache, object);
> > + quarantine_put(free_info, cache);
>
> I just pulled master and didn't find this function. If your patchset
> is based on other branch, please notice it.
Sorry; the patchset is based on linux-next 'next-20160506', which carries Alexander
Potapenko's patches adding KASAN SLAB support with the memory quarantine +
stackdepot features.
>
> > + set_track(&free_info->track, GFP_NOWAIT);
>
> It may fail for many reasons. Is it OK to ignore it? If OK, I think it
> should be explained.
It's ok. A subsequent bug report on the object would simply be missing the
alloc/dealloc stack trace.
>
> > + kasan_poison_slab_free(cache, object);
> > + alloc_data.state = KASAN_STATE_QUARANTINE;
> > + alloc_info->data = alloc_data.packed;
> > + kasan_meta_unlock(alloc_info);
> > + return true;
> > }
> > + switch (alloc_data.state) {
> > + case KASAN_STATE_QUARANTINE:
> > + case KASAN_STATE_FREE:
> > + kasan_report((unsigned long)object, 0, false,
> > + (unsigned long)__builtin_return_address(1));
>
> __builtin_return_address() is unsafe if argument is non-zero. Use
> return_address() instead.
Hmm, neither I nor cscope can find an x86 implementation of return_address().
Will dig further; thanks.
> > + local_irq_save(flags);
> > + kasan_meta_lock(alloc_info);
> > + alloc_data.packed = alloc_info->data;
> > + alloc_data.state = KASAN_STATE_ALLOC;
> > + alloc_data.size_delta = cache->object_size - size;
> > + alloc_info->data = alloc_data.packed;
> > set_track(&alloc_info->track, flags);
>
> Same as above
>
As above.
Kuthonuzo