Message-ID: <20080520153543.4bafcac9@infradead.org>
Date:	Tue, 20 May 2008 15:35:43 -0700
From:	Arjan van de Ven <arjan@...radead.org>
To:	Joel Becker <Joel.Becker@...cle.com>
Cc:	Louis Rilling <Louis.Rilling@...labs.com>,
	linux-kernel@...r.kernel.org, ocfs2-devel@....oracle.com
Subject: Re: [RFC][PATCH 0/3] configfs: Make nested default groups
 lockdep-friendly

On Tue, 20 May 2008 15:27:02 -0700
Joel Becker <Joel.Becker@...cle.com> wrote:

> On Tue, May 20, 2008 at 03:13:41PM -0700, Arjan van de Ven wrote:
> > > 	Louis, what about sticking the recursion level on
> > > configfs_dirent?  That is, you could add sd->s_level and then use
> > > it when needed.  This would hopefully avoid having to pass the
> > > level as an argument to every function.  Then we can go back to
> > > your original scheme.  If they recurse too much and hit the
> > > lockdep limit, just rewind everything and return -ELOOP.
> > 
> > you can also make a new lockdep key for each level... not pretty
> > but it works
> 
> 	I think that's what we're talking about here.  The toplevel is
> I_MUTEX_PARENT, then each child has a class of (I_MUTEX_CHILD +
> depth), where depth is the value of s_level.  His original try passed
> depth everywhere.  I'm asking him to attach it to the configfs_dirent
> so that the code stays readable.  We run into a depth limit at
> (MAX_LOCKDEP_SUBCLASSES - I_MUTEX_PARENT - 1 == 5), which I think is
> probably sane.
> 	Do you mean something else?  Perhaps not starting from
> I_MUTEX_PARENT/CHILD and instead creating CONFIGFS_MUTEX_XXX?
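
(For reference, a minimal sketch of the subclass-per-depth scheme described
above; the helper name and the s_level field are assumptions here, not the
actual patch:)

#include <linux/fs.h>		/* struct inode, I_MUTEX_CHILD */
#include <linux/mutex.h>	/* mutex_lock_nested() */

/* hypothetical helper; assumes configfs_dirent gains an s_level field */
static void configfs_lock_at_depth(struct inode *inode, int s_level)
{
	/*
	 * Every nesting level gets its own i_mutex subclass.  lockdep
	 * only knows MAX_LOCKDEP_SUBCLASSES subclasses, which is where
	 * the depth limit (and the rewind-and-return -ELOOP) comes from.
	 */
	mutex_lock_nested(&inode->i_mutex, I_MUTEX_CHILD + s_level);
}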

Not quite what I meant; what I meant is more like how sched.c deals
with the per-CPU runqueues:

(from sched.c)

                spin_lock_init(&rq->lock);
                lockdep_set_class(&rq->lock, &rq->rq_lock_key); 


That is, you can override the class (not just add a subclass) for a lock
based on a "key".
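
Concretely, something along these lines; the names and the depth cap are
illustrative, not taken from the configfs patch:

#include <linux/mutex.h>
#include <linux/lockdep.h>	/* struct lock_class_key, lockdep_set_class() */

#define CFS_MAX_LEVELS	8	/* illustrative cap on nesting depth */

/*
 * lock_class_key objects want static storage, so use one static key per
 * nesting level instead of a lockdep subclass per level.
 */
static struct lock_class_key cfs_level_keys[CFS_MAX_LEVELS];

static void cfs_init_level_mutex(struct mutex *mutex, int level)
{
	mutex_init(mutex);
	/* override the lock's class, the way sched.c keys each rq->lock */
	lockdep_set_class(mutex, &cfs_level_keys[level]);
}

Because each level then has its own lock class, lockdep treats the
parent->child acquisitions at different depths as distinct locks, at the
cost of one static key per supported level.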
