Message-ID: <CACT4Y+YgGp1XxBqSp=V=2KpkcK2r+9fn4tL-S46=Tmi9EB=geA@mail.gmail.com>
Date:	Mon, 9 May 2016 09:07:29 +0200
From:	Dmitry Vyukov <dvyukov@...gle.com>
To:	"Luruo, Kuthonuzo" <kuthonuzo.luruo@....com>
Cc:	Yury Norov <ynorov@...iumnetworks.com>,
	"aryabinin@...tuozzo.com" <aryabinin@...tuozzo.com>,
	"glider@...gle.com" <glider@...gle.com>,
	"cl@...ux.com" <cl@...ux.com>,
	"penberg@...nel.org" <penberg@...nel.org>,
	"rientjes@...gle.com" <rientjes@...gle.com>,
	"iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"kasan-dev@...glegroups.com" <kasan-dev@...glegroups.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"klimov.linux@...il.com" <klimov.linux@...il.com>
Subject: Re: [PATCH v2 1/2] mm, kasan: improve double-free detection

On Sat, May 7, 2016 at 5:15 PM, Luruo, Kuthonuzo
<kuthonuzo.luruo@....com> wrote:
> Thank you for the review!
>
>> > +
>> > +/* acquire per-object lock for access to KASAN metadata. */
>>
>> I believe there's a strong reason not to use standard spin_lock() or
>> similar. I think this is the proper place to explain it.
>>
>
> will do.
>
>> > +void kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
>> > +{
>> > +   union kasan_alloc_data old, new;
>> > +
>> > +   preempt_disable();
>>
>> It's better to disable and enable preemption inside the loop
>> on each iteration, to decrease contention.
>>
>
> ok, makes sense; will do.
>
>> > +   for (;;) {
>> > +           old.packed = READ_ONCE(alloc_info->data);
>> > +           if (unlikely(old.lock)) {
>> > +                   cpu_relax();
>> > +                   continue;
>> > +           }
>> > +           new.packed = old.packed;
>> > +           new.lock = 1;
>> > +           if (likely(cmpxchg(&alloc_info->data, old.packed, new.packed)
>> > +                                   == old.packed))
>> > +                   break;
>> > +   }
>> > +}
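
(Sketch only, not the submitted code: the change agreed above, toggling
preemption on each iteration of the spin loop, could look roughly like
this. It reuses the names from the quoted patch; whether the final
version matches is an assumption.)

void kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
{
	union kasan_alloc_data old, new;

	for (;;) {
		preempt_disable();
		old.packed = READ_ONCE(alloc_info->data);
		if (likely(!old.lock)) {
			new.packed = old.packed;
			new.lock = 1;
			/* on success, return with preemption still disabled;
			 * the matching preempt_enable() happens at unlock. */
			if (likely(cmpxchg(&alloc_info->data, old.packed,
					   new.packed) == old.packed))
				return;
		}
		/* lock held elsewhere, or cmpxchg lost the race:
		 * re-enable preemption before spinning again. */
		preempt_enable();
		cpu_relax();
	}
}
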
>> > +
>> > +/* release lock after a kasan_meta_lock(). */
>> > +void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
>> > +{
>> > +   union kasan_alloc_data alloc_data;
>> > +
>> > +   alloc_data.packed = READ_ONCE(alloc_info->data);
>> > +   alloc_data.lock = 0;
>> > +   if (unlikely(xchg(&alloc_info->data, alloc_data.packed) !=
>> > +                           (alloc_data.packed | 0x1U)))
>> > +           WARN_ONCE(1, "%s: lock not held!\n", __func__);
>>
>> Nitpick. It never happens in the normal case, correct? Why don't you place it
>> under some developer config, or even leave it in a dev branch? The function
>> will be half as long without it.
>
> ok, will remove/shorten
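
(Purely illustrative of the "developer config" idea, not the submitted
change: the sanity check could be compiled out of production builds
along these lines. CONFIG_KASAN_SANITY_CHECKS is a hypothetical Kconfig
name, not an existing option.)

void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
{
	union kasan_alloc_data alloc_data, old;

	alloc_data.packed = READ_ONCE(alloc_info->data);
	alloc_data.lock = 0;
	old.packed = xchg(&alloc_info->data, alloc_data.packed);
	/* check compiles away unless the debug option is enabled */
	if (IS_ENABLED(CONFIG_KASAN_SANITY_CHECKS))
		WARN_ONCE(old.packed != (alloc_data.packed | 0x1U),
			  "%s: lock not held!\n", __func__);
}
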

My concern here is performance.
We lock/unlock 3 times per allocated object; currently that's 6
atomic RMWs. The one in unlock is not necessary, so removing it
would reduce the number of atomic RMWs to 3.
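
(A sketch of that point, again borrowing the names from the quoted
patch and making no claim about the final code: because the lock bit
is owned by the unlocking CPU, clearing it needs no read-modify-write;
a plain release store is enough.)

void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
{
	union kasan_alloc_data alloc_data;

	alloc_data.packed = READ_ONCE(alloc_info->data);
	alloc_data.lock = 0;
	/* plain release store: no atomic RMW, the lock is already ours */
	smp_store_release(&alloc_info->data, alloc_data.packed);
	preempt_enable();	/* pairs with preempt_disable() in the lock path */
}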
