Message-ID: <alpine.DEB.2.21.1802131100470.1130@nanos.tec.linutronix.de>
Date: Tue, 13 Feb 2018 11:02:07 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Yang Shi <yang.shi@...ux.alibaba.com>
cc: longman@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4 v6] lib: debugobjects: add global free list and the counter
On Mon, 12 Feb 2018, Yang Shi wrote:
> On 2/12/18 8:25 AM, Thomas Gleixner wrote:
> > On Tue, 6 Feb 2018, Yang Shi wrote:
> > > + /*
> > > + * Reuse objs from the global free list, they will be reinitialized
> > > + * when allocating
> > > + */
> > > + while (obj_nr_tofree > 0 && (obj_pool_free < obj_pool_min_free)) {
> > > + raw_spin_lock_irqsave(&pool_lock, flags);
> > > + obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
> > This is racy vs. the worker thread. Assume obj_nr_tofree = 1:
> >
> > CPU0                                    CPU1
> > worker
> > lock(&pool_lock);                       while (obj_nr_tofree > 0 && ...) {
> > obj = hlist_entry(obj_to_free);             lock(&pool_lock);
> > hlist_del(obj);
> > obj_nr_tofree--;
> > ...
> > unlock(&pool_lock);
> >                                             obj = hlist_entry(obj_to_free);
> >                                             hlist_del(obj);   <------- NULL
> >                                                               pointer dereference
> >
> > Not what you want, right? The counter or the list head need to be rechecked
> > after the lock is acquired.
>
> Yes, you are right. Will fix the race in newer version.
I fixed up all the minor issues with this series and applied it to:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/debugobjects
Please double check the result.
Thanks,
tglx