Date:   Wed, 1 Mar 2017 14:17:07 +0900
From:   Byungchul Park <byungchul.park@....com>
To:     Peter Zijlstra <peterz@...radead.org>
CC:     <mingo@...nel.org>, <tglx@...utronix.de>, <walken@...gle.com>,
        <boqun.feng@...il.com>, <kirill@...temov.name>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        <iamjoonsoo.kim@....com>, <akpm@...ux-foundation.org>,
        <npiggin@...il.com>, <kernel-team@....com>
Subject: Re: [PATCH v5 06/13] lockdep: Implement crossrelease feature

On Tue, Feb 28, 2017 at 04:49:00PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> 
> > +struct cross_lock {
> > +	/*
> > +	 * When more than one acquisition of crosslocks are overlapped,
> > +	 * we do actual commit only when ref == 0.
> > +	 */
> > +	atomic_t ref;
> 
> That comment doesn't seem right, should that be: ref != 0 ?
> Also; would it not be much clearer to call this: nr_blocked, or waiters
> or something along those lines, because that is what it appears to be.
> 
> > +	/*
> > +	 * Separate hlock instance. This will be used at commit step.
> > +	 *
> > +	 * TODO: Use a smaller data structure containing only necessary
> > +	 * data. However, we should make lockdep code able to handle the
> > +	 * smaller one first.
> > +	 */
> > +	struct held_lock	hlock;
> > +};
> 
> > +static int add_xlock(struct held_lock *hlock)
> > +{
> > +	struct cross_lock *xlock;
> > +	unsigned int gen_id;
> > +
> > +	if (!depend_after(hlock))
> > +		return 1;
> > +
> > +	if (!graph_lock())
> > +		return 0;
> > +
> > +	xlock = &((struct lockdep_map_cross *)hlock->instance)->xlock;
> > +
> > +	/*
> > +	 * When acquisitions for a xlock are overlapped, we use
> > +	 * a reference counter to handle it.
> 
> Handle what!? That comment is near empty.

I will add more comments so that they fully describe what the reference
counter handles.
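
Maybe along these lines, as a rough first pass (only a sketch, wording
still to be refined):

	/*
	 * A crosslock can be released in a different context from the
	 * one that acquired it, so acquisitions of the same crosslock
	 * may overlap. 'ref' counts the overlapping acquisitions: only
	 * the first one (ref going 0 -> 1) records gen_id and the hlock
	 * snapshot, while later overlapped acquisitions only bump the
	 * counter, so the recorded generation stays the oldest one and
	 * every lock acquired since then can be related to this xlock.
	 */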

> 
> > +	 */
> > +	if (atomic_inc_return(&xlock->ref) > 1)
> > +		goto unlock;
> 
> So you set the xlock's generation only once, to the oldest blocking-on
> relation, which makes sense; you want to be able to relate it to all
> historical locks since then.
> 
> > +
> > +	gen_id = (unsigned int)atomic_inc_return(&cross_gen_id);
> > +	xlock->hlock = *hlock;
> > +	xlock->hlock.gen_id = gen_id;
> > +unlock:
> > +	graph_unlock();
> > +	return 1;
> > +}
> 
> > +void lock_commit_crosslock(struct lockdep_map *lock)
> > +{
> > +	struct cross_lock *xlock;
> > +	unsigned long flags;
> > +
> > +	if (!current->xhlocks)
> > +		return;
> > +
> > +	if (unlikely(current->lockdep_recursion))
> > +		return;
> > +
> > +	raw_local_irq_save(flags);
> > +	check_flags(flags);
> > +	current->lockdep_recursion = 1;
> > +
> > +	if (unlikely(!debug_locks))
> > +		return;
> > +
> > +	if (!graph_lock())
> > +		return;
> > +
> > +	xlock = &((struct lockdep_map_cross *)lock)->xlock;
> > +	if (atomic_read(&xlock->ref) > 0 && !commit_xhlocks(xlock))
> 
> You terminate with graph_lock() held.

Oops. What did I do? I'll fix it.
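
At first glance, the tail probably needs to look roughly like this (only
a sketch; I still need to check what commit_xhlocks() does with the
graph lock on its failure path):

	xlock = &((struct lockdep_map_cross *)lock)->xlock;
	if (atomic_read(&xlock->ref) > 0)
		commit_xhlocks(xlock);

	/* Drop the graph lock on every return path. */
	graph_unlock();
	current->lockdep_recursion = 0;
	raw_local_irq_restore(flags);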

> 
> Also, I think you can do the atomic_read() outside of graph lock, to
> avoid taking graph_lock when its 0.

I'll think about it more and do that if possible.
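
If reading the counter outside the lock turns out to be safe, it could
become something like this (again, only a sketch):

	xlock = &((struct lockdep_map_cross *)lock)->xlock;

	/*
	 * Check ref before taking the graph lock, so the lock is not
	 * taken at all when there is nothing to commit. A racy 0 here
	 * should only mean we skip a commit we could not have observed
	 * anyway, assuming that reasoning holds.
	 */
	if (!atomic_read(&xlock->ref))
		goto out;

	if (!graph_lock())
		goto out;

	commit_xhlocks(xlock);
	graph_unlock();
out:
	current->lockdep_recursion = 0;
	raw_local_irq_restore(flags);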
