Message-ID: <20181129120143.GG2149@hirez.programming.kicks-ass.net>
Date: Thu, 29 Nov 2018 13:01:43 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Bart Van Assche <bvanassche@....org>
Cc: mingo@...hat.com, tj@...nel.org, johannes.berg@...el.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no
longer in use
On Thu, Nov 29, 2018 at 11:49:02AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> > Instead of abandoning elements of list_entries[] that are no longer in
> > use, make alloc_list_entry() reuse array elements that have been freed.
>
> > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > index 43327a1dd488..01e55fca7c2c 100644
> > --- a/include/linux/lockdep.h
> > +++ b/include/linux/lockdep.h
> > @@ -183,6 +183,11 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
> >  struct lock_list {
> >  	/* Entry in locks_after or locks_before. */
> >  	struct list_head		lock_order_entry;
> > +	/*
> > +	 * Entry in all_list_entries when in use and entry in
> > +	 * free_list_entries when not in use.
> > +	 */
> > +	struct list_head		alloc_entry;
> >  	struct lock_class		*class;
> >  	struct lock_class		*links_to;
> >  	struct stack_trace		trace;
>
> > +static LIST_HEAD(all_list_entries);
> > +static LIST_HEAD(free_list_entries);
> >
>
> > @@ -862,7 +867,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
> >   */
> >  static struct lock_list *alloc_list_entry(void)
> >  {
> > -	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
> > +	struct lock_list *e = list_first_entry_or_null(&free_list_entries,
> > +						       typeof(*e), alloc_entry);
> > +
> > +	if (!e) {
> >  		if (!debug_locks_off_graph_unlock())
> >  			return NULL;
> >
> > @@ -870,7 +878,8 @@ static struct lock_list *alloc_list_entry(void)
> >  		dump_stack();
> >  		return NULL;
> >  	}
> > -	return list_entries + nr_list_entries++;
> > +	list_move_tail(&e->alloc_entry, &all_list_entries);
> > +	return e;
> >  }
>
> > @@ -4235,19 +4244,19 @@ static void zap_class(struct list_head *zapped_classes,
> >  		      struct lock_class *class)
> >  {
> >  	struct lock_class *links_to;
> > +	struct lock_list *entry, *tmp;
> >
> >  	/*
> >  	 * Remove all dependencies this lock is
> >  	 * involved in:
> >  	 */
> > +	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
> >  		if (entry->class != class && entry->links_to != class)
> >  			continue;
> >  		links_to = entry->links_to;
> >  		WARN_ON_ONCE(entry->class == links_to);
> >  		list_del_rcu(&entry->lock_order_entry);
> > +		list_move(&entry->alloc_entry, &free_list_entries);
> >  		entry->class = NULL;
> >  		entry->links_to = NULL;
> >  		check_free_class(zapped_classes, class);
>
> Hurm.. I'm confused here.
>
> The reason you cannot re-use lock_order_entry for the free list is
> list_del_rcu(), right? But if so, then what ensures the list_entry is
> not re-used before its grace period has elapsed?
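To illustrate what I mean, a minimal sketch (untested; it assumes we grow
struct lock_list with an rcu_head member 'rcu', uses a made-up
zap_list_entry() helper, and leaves out the graph_lock serialization of
free_list_entries):

static void list_entry_free_rcu(struct rcu_head *head)
{
	struct lock_list *entry = container_of(head, struct lock_list, rcu);

	/* No RCU reader can still observe lock_order_entry here. */
	list_add(&entry->alloc_entry, &free_list_entries);
}

static void zap_list_entry(struct lock_list *entry)
{
	list_del_rcu(&entry->lock_order_entry);
	entry->class = NULL;
	entry->links_to = NULL;
	/* Do NOT put it on free_list_entries yet; wait out the grace period. */
	call_rcu(&entry->rcu, list_entry_free_rcu);
}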
Also, if you have to grow lock_list by 16 bytes just to be able to free
it, a bitmap allocator is much cheaper, space-wise.
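Roughly (untested sketch; the debug_locks_off error path is trimmed, and
it relies on the graph_lock for serialization like the current code does):

static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);

static struct lock_list *alloc_list_entry(void)
{
	unsigned long idx = find_first_zero_bit(list_entries_in_use,
						MAX_LOCKDEP_ENTRIES);

	if (idx >= MAX_LOCKDEP_ENTRIES)
		return NULL;

	__set_bit(idx, list_entries_in_use);
	return list_entries + idx;
}

static void free_list_entry(struct lock_list *entry)
{
	__clear_bit(entry - list_entries, list_entries_in_use);
}

One bit per entry instead of a 16-byte list_head per entry, and the
static array stays exactly as it is.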
Some people seem to really care about the static image size, and
lockdep's .data section does matter to them.