Message-ID: <b86f2dcb-a61e-a582-3538-5ecce34f9afb@linux.alibaba.com>
Date: Tue, 13 Feb 2018 21:33:25 -0800
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: longman@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4 v6] lib: debugobjects: add global free list and the counter
On 2/13/18 2:02 AM, Thomas Gleixner wrote:
> On Mon, 12 Feb 2018, Yang Shi wrote:
>> On 2/12/18 8:25 AM, Thomas Gleixner wrote:
>>> On Tue, 6 Feb 2018, Yang Shi wrote:
>>>> +	/*
>>>> +	 * Reuse objs from the global free list, they will be reinitialized
>>>> +	 * when allocating
>>>> +	 */
>>>> +	while (obj_nr_tofree > 0 && (obj_pool_free < obj_pool_min_free)) {
>>>> +		raw_spin_lock_irqsave(&pool_lock, flags);
>>>> +		obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
>>> This is racy vs. the worker thread. Assume obj_nr_tofree = 1:
>>>
>>>   CPU0                                  CPU1
>>>   worker
>>>   lock(&pool_lock);                     while (obj_nr_tofree > 0 && ...) {
>>>   obj = hlist_entry(obj_to_free);           lock(&pool_lock);
>>>   hlist_del(obj);
>>>   obj_nr_tofree--;
>>>   ...
>>>   unlock(&pool_lock);
>>>                                             obj = hlist_entry(obj_to_free);
>>>                                             hlist_del(obj);  <------- NULL pointer dereference
>>>
>>> Not what you want, right? The counter or the list head needs to be rechecked
>>> after the lock is acquired.
>> Yes, you are right. Will fix the race in a newer version.
> I fixed up all the minor issues with this series and applied it to:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/debugobjects
>
> Please double check the result.
Thanks a lot. It looks good.
Regards,
Yang
>
> Thanks,
>
> tglx