Message-ID: <d82e647a0907130656la867a03u2a4759443b299b09@mail.gmail.com>
Date: Mon, 13 Jul 2009 21:56:09 +0800
From: Ming Lei <tom.leiming@...il.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Ingo Molnar <mingo@...e.hu>,
Frederic Weisbecker <fweisbec@...il.com>,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: [RESEND PATCH 01/11] kernel:lockdep:print the shortest dependency
	chain if finding a circle

2009/7/13 Peter Zijlstra <a.p.zijlstra@...llo.nl>:
> On Mon, 2009-07-13 at 09:01 +0200, Ingo Molnar wrote:
>>
>> It's a nice byproduct, beyond the primary advantage of not being a
>> stack based recursion check.
>>
>> I think this patch-set is great, and there's just one more step
>> needed to make it round: it would be nice to remove the limitation
>> of maximum number of locks held per task. (MAX_LOCK_DEPTH)
>>
>> The way we could do it is to split out this bit of struct task:
>>
>> #ifdef CONFIG_LOCKDEP
>> # define MAX_LOCK_DEPTH 48UL
>> u64 curr_chain_key;
>> int lockdep_depth;
>> unsigned int lockdep_recursion;
>> struct held_lock held_locks[MAX_LOCK_DEPTH];
>> gfp_t lockdep_reclaim_gfp;
>> #endif
>>
>> into a separate 'struct lockdep_state' structure, and allocate it
>> dynamically during fork with an initial pre-set size of, say, 64
>> locks of depth. If we hit that limit, we'd double the allocation
>> threshold, which would cause a larger structure to be allocated for
>> all newly allocated tasks.
>
> Right, except allocating stuff while in the middle of lockdep is very
> hard since it involves taking more locks :-)
Yes, it is a little dangerous to allocate memory dynamically inside lockdep.
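
To make the idea concrete, a rough sketch of the split-out state might
look like this (all names below, struct lockdep_state,
lockdep_depth_threshold and lockdep_alloc_state(), are invented for
illustration and are not existing kernel symbols):

/* hypothetical sketch, not existing kernel code */
#include <linux/slab.h>

struct lockdep_state {
	u64			curr_chain_key;
	int			lockdep_depth;
	unsigned int		lockdep_recursion;
	gfp_t			lockdep_reclaim_gfp;
	unsigned int		max_depth;	/* capacity of held_locks[] */
	struct held_lock	held_locks[];	/* flexible array member */
};

/* doubled whenever some task hits its current limit */
static unsigned int lockdep_depth_threshold = 64;

/* would be called from copy_process() at fork time */
static struct lockdep_state *lockdep_alloc_state(void)
{
	unsigned int depth = lockdep_depth_threshold;
	struct lockdep_state *s;

	s = kzalloc(sizeof(*s) + depth * sizeof(struct held_lock),
		    GFP_KERNEL);
	if (s)
		s->max_depth = depth;
	return s;
}

Tasks forked before the threshold is raised would keep their smaller
state, so a larger limit only takes effect for newly allocated tasks,
as Ingo describes above. The hard part, as you say, is growing the
state of an already-running task from inside lockdep.
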
>
> I've tried it several times but never quite managed it in a way that I
> felt comfortable with.
>
> It would require having a reserve and serializing over that reserve.
Yes, good idea.
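
A minimal sketch of such a reserve, assuming invented names throughout
(only the arch spinlock and bit helpers are real kernel APIs; lockdep
already takes an arch-level spinlock internally to avoid recursing
into itself):

/* hypothetical sketch, not existing kernel code */
#define LOCKDEP_RESERVE_SLOTS	4

static struct held_lock
	lockdep_reserve[LOCKDEP_RESERVE_SLOTS][2 * MAX_LOCK_DEPTH];
static unsigned long lockdep_reserve_used;	/* one bit per slot */
static arch_spinlock_t lockdep_reserve_lock = __ARCH_SPIN_LOCK_UNLOCKED;

static struct held_lock *lockdep_reserve_get(void)
{
	struct held_lock *hl = NULL;
	int i;

	arch_spin_lock(&lockdep_reserve_lock);
	for (i = 0; i < LOCKDEP_RESERVE_SLOTS; i++) {
		if (!test_bit(i, &lockdep_reserve_used)) {
			__set_bit(i, &lockdep_reserve_used);
			hl = lockdep_reserve[i];
			break;
		}
	}
	arch_spin_unlock(&lockdep_reserve_lock);

	return hl;	/* NULL means the reserve is exhausted */
}

A matching put() that clears the slot's bit would return the array to
the reserve once the task's state has been reallocated outside the
lockdep path.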
--
Lei Ming