Message-ID: <20E775CA4D599049A25800DE5799F6DD1F6273B2@G4W3225.americas.hpqcorp.net>
Date: Sat, 7 May 2016 10:21:44 +0000
From: "Luruo, Kuthonuzo" <kuthonuzo.luruo@....com>
To: Dmitry Vyukov <dvyukov@...gle.com>
CC: Andrey Ryabinin <aryabinin@...tuozzo.com>,
Alexander Potapenko <glider@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
kasan-dev <kasan-dev@...glegroups.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] kasan: improve double-free detection
> >> We can use a per-header lock by setting the status to KASAN_STATE_LOCKED.
> >> A thread can CAS any status to KASAN_STATE_LOCKED, which means that it
> >> has locked the header. If any thread tries to modify/read the status and
> >> the status is KASAN_STATE_LOCKED, the thread waits.
> >
> > Thanks, Dmitry. I've successfully tested with the concurrent free slab_test test
> > (alloc on cpu 0; then concurrent frees on all other cpus on a 12-vcpu KVM)
> > using:
> >
> > static inline bool kasan_alloc_state_lock(struct kasan_alloc_meta *alloc_info)
> > {
> > 	if (cmpxchg(&alloc_info->state, KASAN_STATE_ALLOC,
> > 			KASAN_STATE_LOCKED) == KASAN_STATE_ALLOC)
> > 		return true;
> > 	return false;
> > }
> >
> > static inline void kasan_alloc_state_unlock_wait(struct kasan_alloc_meta *alloc_info)
> > {
> > 	while (alloc_info->state == KASAN_STATE_LOCKED)
> > 		cpu_relax();
> > }
> >
> > Race "winner" sets state to quarantine as the last step:
> >
> > 	if (kasan_alloc_state_lock(alloc_info)) {
> > 		free_info = get_free_info(cache, object);
> > 		quarantine_put(free_info, cache);
> > 		set_track(&free_info->track, GFP_NOWAIT);
> > 		kasan_poison_slab_free(cache, object);
> > 		alloc_info->state = KASAN_STATE_QUARANTINE;
> > 		return true;
> > 	} else
> > 		kasan_alloc_state_unlock_wait(alloc_info);
> >
> > Now, I'm not sure whether, on current KASAN-supported archs, the state
> > byte load in the busy-wait loop is atomic wrt the KASAN_STATE_QUARANTINE
> > byte store. Would you advise using CAS primitives for load/store here too?
>
> The store to state needs to use the smp_store_release() function, otherwise
> stores to free_info->track can sink below the store to state.
> Similarly, loads of state in kasan_alloc_state_unlock_wait need to use
> smp_load_acquire().
>
> A function similar to kasan_alloc_state_lock will also be needed for the
> KASAN_STATE_QUARANTINE -> KASAN_STATE_ALLOC state transition (when we
> reuse the object). If a thread tries to report a use-after-free while
> another thread pushes the object out of the quarantine and overwrites
> alloc_info->track, the reporting thread will print a bogus stack.
>
> kasan_alloc_state_unlock_wait is not enough to prevent the races.
> Consider a thread that executes kasan_alloc_state_unlock_wait and
> proceeds to reporting; at that point another thread pushes the object
> into or out of the quarantine and overwrites the tracks. The first
> thread will read inconsistent data from the header. Any thread that
> reads/writes the header needs to (1) wait while the status is
> KASAN_STATE_LOCKED, (2) CAS the status to KASAN_STATE_LOCKED, (3)
> read/write the header, (4) restore/update the status, effectively
> unlocking the header.
> Alternatively, we can introduce a LOCKED bit in the header; then it
> will be simpler for readers to set/unset the bit.
Thanks. As implemented in v2, all accesses to the object's alloc metadata
(alloc, dealloc, bug report, and quarantine release) are now performed under
the protection of a lock bit. Your suggested cmpxchg() loop for the lock is
paired with an xchg() on unlock, which should address the memory-ordering
issue.
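
For reference, the lock/unlock pair is shaped roughly like the sketch below
(illustrative only: it assumes the state field is a plain cmpxchg()-able
word, and the kasan_meta_lock/kasan_meta_unlock names are placeholders, not
necessarily what v2 uses):

static inline u32 kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
{
	for (;;) {
		u32 old = READ_ONCE(alloc_info->state);

		/* wait while another thread holds the header */
		if (old == KASAN_STATE_LOCKED) {
			cpu_relax();
			continue;
		}
		/* a successful cmpxchg() is a full barrier, so it
		 * also gives us the acquire ordering */
		if (cmpxchg(&alloc_info->state, old,
				KASAN_STATE_LOCKED) == old)
			return old;	/* caller restores/updates state */
	}
}

static inline void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info,
		u32 new_state)
{
	/* xchg() is a full barrier, so header writes made under the
	 * lock cannot sink below the state update (release) */
	xchg(&alloc_info->state, new_state);
}

The free path then takes the lock, fills the free track, pushes the object
to the quarantine, and unlocks with KASAN_STATE_QUARANTINE.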
In your race scenario between a UAF and object reuse from a new alloc, if
kmalloc wins, the "UAF" report would be misleading (probably bug type
"unknown-crash" with the faulting thread's stack + the new alloc stack). To
close this window, the object metadata lock would need to be acquired much
closer to the point where the bad access is detected. The patch does not
address this race. How does ASAN address this?
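
For illustration only, a report path that takes the header lock before
touching the tracks might look like the sketch below (get_alloc_info() and
print_track() stand in for whatever the report code actually uses, and
kasan_meta_lock/kasan_meta_unlock are the placeholder names from above):

static void print_object_tracks(struct kmem_cache *cache, void *object)
{
	struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
	u32 saved;

	/* the tracks can't be overwritten by a concurrent free or
	 * quarantine release while we hold the header lock */
	saved = kasan_meta_lock(alloc_info);
	print_track(&alloc_info->track);
	kasan_meta_unlock(alloc_info, saved);
}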
Kuthonuzo