[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <e6dc78dc-fdd5-81b7-9a6a-c3e0f02286cc@redhat.com>
Date: Tue, 25 Sep 2018 12:36:24 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Will Deacon <will.deacon@....com>,
linux-kernel@...r.kernel.org,
Yang Shi <yang.shi@...ux.alibaba.com>,
Arnd Bergmann <arnd@...db.de>, chuhu@...hat.com
Subject: Re: [PATCH v2 2/2] debugobjects: Disable lockdep tracking of
debugobjects internal locks
On 09/25/2018 12:31 PM, Peter Zijlstra wrote:
> On Tue, Sep 25, 2018 at 12:20:05PM -0400, Waiman Long wrote:
>> On 09/25/2018 11:32 AM, Peter Zijlstra wrote:
>>> On Tue, Sep 25, 2018 at 10:41:09AM -0400, Waiman Long wrote:
>>>> diff --git a/lib/debugobjects.c b/lib/debugobjects.c
>>>> index 70935ed91125..68d72ed9ca22 100644
>>>> --- a/lib/debugobjects.c
>>>> +++ b/lib/debugobjects.c
>>>> @@ -1106,8 +1106,15 @@ void __init debug_objects_early_init(void)
>>>> {
>>>> int i;
>>>>
>>>> - for (i = 0; i < ODEBUG_HASH_SIZE; i++)
>>>> + /*
>>>> + * We don't need lockdep to verify correctness of debugobjects
>>>> + * internal locks.
>>>> + */
>>>> + lockdep_set_novalidate_class(&pool_lock);
>>>> + for (i = 0; i < ODEBUG_HASH_SIZE; i++) {
>>>> raw_spin_lock_init(&obj_hash[i].lock);
>>>> + lockdep_set_novalidate_class(&obj_hash[i].lock);
>>>> + }
>>>>
>>>> for (i = 0; i < ODEBUG_POOL_SIZE; i++)
>>>> hlist_add_head(&obj_static_pool[i].node, &obj_pool);
>>> NAK, we do not _EVER_ set novalidate if it can at all be avoided.
>>>
>>> If there is a severe performance problem with lockdep, try and cure
>>> that. But really, who runs lockdep kernels on 8 sockets?
>> We do. It is part of our testing process to run both production and
>> debug kernels on a variety of different machines to see if anything
>> breaks. Some of them just happen to be 8-socket systems.
>>
>> The internal locks in the debugobjects code don't interact with other
>> locks at all as memory allocation isn't called with those lock held. So
>> disabling lockdep for those locks won't materially affect the accuracy
>> of the lockdep code.
>>
>> How about the ability to declare a class of locks as terminal in the
>> sense that no further lock acquisition is allowed while holding a
>> terminal lock? That will allow the the lockdep code to fast track the
>> handling of those locks and hopefully prevent hard lockup problem like that.
> Who guarantees they stay leaf locks? What if someone mucks up the
> debugobject locking because its Monday morning and they haven't had
> their coffee yet. You want lockdep to catch all that.
Lockdep will need to do checking to guarantee that.
> Also, where is the performance benefit here. The normal lock_acquire
> path doesn't change gobal state (it will not see new lock ordering in
> 99%+ of the cases).
>
> Typically we only push the lock on the task local lock stack, compute
> the new hash, and do the hash lookup, find it already exists and return.
>
> So what is the expensive part, and can we do something about that?
I will need to do some experimentation to find out. Will update this
thread with my finding.
Cheers,
Longman
Powered by blists - more mailing lists