Message-ID: <1543959767.185366.217.camel@acm.org>
Date: Tue, 04 Dec 2018 13:42:47 -0800
From: Bart Van Assche <bvanassche@....org>
To: Waiman Long <longman@...hat.com>, mingo@...hat.com
Cc: peterz@...radead.org, tj@...nel.org, johannes.berg@...el.com,
linux-kernel@...r.kernel.org,
Johannes Berg <johannes@...solutions.net>
Subject: Re: [PATCH v2 17/24] locking/lockdep: Free lock classes that are no
longer in use
On Tue, 2018-12-04 at 15:27 -0500, Waiman Long wrote:
> On 12/03/2018 07:28 PM, Bart Van Assche wrote:
> > +/* Must be called with the graph lock held. */
> > +static void remove_class_from_lock_chain(struct lock_chain *chain,
> > +					 struct lock_class *class)
> > +{
> > +	u64 chain_key;
> > +	int i;
> > +
> > +#ifdef CONFIG_PROVE_LOCKING
> > +	for (i = chain->base; i < chain->base + chain->depth; i++) {
> > +		if (chain_hlocks[i] != class - lock_classes)
> > +			continue;
> > +		if (--chain->depth == 0)
> > +			break;
> > +		memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
> > +			(chain->base + chain->depth - i) *
> > +			sizeof(chain_hlocks[0]));
> > +		/*
> > +		 * Each lock class occurs at most once in a
> > +		 * lock chain so once we found a match we can
> > +		 * break out of this loop.
> > +		 */
> > +		break;
> > +	}
> > +	/*
> > +	 * Note: calling hlist_del_rcu() from inside a
> > +	 * hlist_for_each_entry_rcu() loop is safe.
> > +	 */
> > +	if (chain->depth == 0) {
> > +		/* To do: decrease chain count. See also inc_chains(). */
> > +		hlist_del_rcu(&chain->entry);
> > +		return;
> > +	}
> > +	chain_key = 0;
> > +	for (i = chain->base; i < chain->base + chain->depth; i++)
> > +		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
>
> Do you need to recompute the chain_key if no entry in the chain is removed?
Thanks for pointing that out. I will modify this function such that the
chain key is only recalculated when necessary.
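For illustration, here is roughly the structure I have in mind (a sketch only:
the 'found' flag, and storing the recomputed key in chain->chain_key at the
end, are illustrative and may look different in v3):

/*
 * Sketch: only recompute the chain key when an entry was actually
 * removed from the lock chain.
 */
static void remove_class_from_lock_chain(struct lock_chain *chain,
					 struct lock_class *class)
{
#ifdef CONFIG_PROVE_LOCKING
	u64 chain_key;
	int i, found = 0;

	for (i = chain->base; i < chain->base + chain->depth; i++) {
		if (chain_hlocks[i] != class - lock_classes)
			continue;
		found = 1;
		/* Shift the remaining hlock entries over the removed one. */
		if (--chain->depth > 0)
			memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
				(chain->base + chain->depth - i) *
				sizeof(chain_hlocks[0]));
		/* Each lock class occurs at most once in a lock chain. */
		break;
	}
	if (!found)
		return;		/* No entry removed: keep the old chain key. */
	if (chain->depth == 0) {
		/* To do: decrease chain count. See also inc_chains(). */
		hlist_del_rcu(&chain->entry);
		return;
	}
	chain_key = 0;
	for (i = chain->base; i < chain->base + chain->depth; i++)
		chain_key = iterate_chain_key(chain_key, chain_hlocks[i] + 1);
	chain->chain_key = chain_key;
#endif
}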
> >
> > @@ -4141,14 +4253,31 @@ static void zap_class(struct lock_class *class)
> >  	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
> >  		if (entry->class != class && entry->links_to != class)
> >  			continue;
> > +		links_to = entry->links_to;
> > +		WARN_ON_ONCE(entry->class == links_to);
> >  		list_del_rcu(&entry->entry);
> > +		check_free_class(class);
>
> Is the check_free_class() call redundant? You are going to call it near
> the end below.
I think so. I will remove the check_free_class() call inside the for-loop.
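With that call gone, the loop body reduces to just unlinking the matching list
entries (sketch, assuming the remaining check_free_class(class) call near the
end of zap_class() stays as in the patch):

	for (i = 0, entry = list_entries; i < nr_list_entries; i++, entry++) {
		if (entry->class != class && entry->links_to != class)
			continue;
		links_to = entry->links_to;
		WARN_ON_ONCE(entry->class == links_to);
		list_del_rcu(&entry->entry);
		/* check_free_class(class) is called once, after this loop. */
	}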
> > +static void reinit_class(struct lock_class *class)
> > +{
> > +	void *const p = class;
> > +	const unsigned int offset = offsetof(struct lock_class, key);
> > +
> > +	WARN_ON_ONCE(!class->lock_entry.next);
> > +	WARN_ON_ONCE(!list_empty(&class->locks_after));
> > +	WARN_ON_ONCE(!list_empty(&class->locks_before));
> > +	memset(p + offset, 0, sizeof(*class) - offset);
> > +	WARN_ON_ONCE(!class->lock_entry.next);
> > +	WARN_ON_ONCE(!list_empty(&class->locks_after));
> > +	WARN_ON_ONCE(!list_empty(&class->locks_before));
> > }
>
> Is it safer to just reinit those fields before "key" instead of using
> memset()? Lockdep is slow anyway, doing that individually won't
> introduce any noticeable slowdown.
The warning statements will only be hit if the order of the struct lock_class
members is changed. I don't think that we need to change the approach of this
function.
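To spell out the idiom this function relies on, here is a self-contained toy
example (made-up struct and names, not lockdep code): everything from the
member named in offsetof() to the end of the structure is cleared, while the
members before it are preserved.

#include <stddef.h>
#include <string.h>

struct example {
	int kept_a;		/* members before 'first_cleared' survive */
	int kept_b;
	int first_cleared;	/* everything from here on is zeroed */
	int also_cleared;
};

static void reinit_example(struct example *e)
{
	const size_t offset = offsetof(struct example, first_cleared);

	/* Zero only the tail of the structure. */
	memset((char *)e + offset, 0, sizeof(*e) - offset);
}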
> > @@ -4193,18 +4355,14 @@ void lockdep_free_key_range(void *start, unsigned long size)
> >  	raw_local_irq_restore(flags);
> >  
> >  	/*
> > -	 * Wait for any possible iterators from look_up_lock_class() to pass
> > -	 * before continuing to free the memory they refer to.
> > -	 *
> > -	 * sync_sched() is sufficient because the read-side is IRQ disable.
> > +	 * Do not wait for concurrent look_up_lock_class() calls. If any such
> > +	 * concurrent call would return a pointer to one of the lock classes
> > +	 * freed by this function then that means that there is a race in the
> > +	 * code that calls look_up_lock_class(), namely concurrently accessing
> > +	 * and freeing a synchronization object.
> >  	 */
> > -	synchronize_sched();
> >  
> > -	/*
> > -	 * XXX at this point we could return the resources to the pool;
> > -	 * instead we leak them. We would need to change to bitmap allocators
> > -	 * instead of the linear allocators we have now.
> > -	 */
> > +	schedule_free_zapped_classes();
>
> Should you move the graph_unlock() and raw_local_irq_restore() down to
> after this? schedule_free_zapped_classes() must be called with the
> graph lock held. Right?
I will modify this and other patches such that all schedule_free_zapped_classes()
calls are protected by the graph lock.
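Roughly, each call site would then follow this ordering (sketch only, with
error handling and the actual zapping elided; the 'out_irq' label is
illustrative):

	raw_local_irq_save(flags);
	if (!graph_lock())
		goto out_irq;
	/* ... zap the affected lock classes ... */
	schedule_free_zapped_classes();	/* still under the graph lock */
	graph_unlock();
out_irq:
	raw_local_irq_restore(flags);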
Bart.