Message-ID: <20181201202446.GA19706@hirez.programming.kicks-ass.net>
Date:   Sat, 1 Dec 2018 21:24:46 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Bart Van Assche <bvanassche@....org>
Cc:     mingo@...hat.com, tj@...nel.org, johannes.berg@...el.com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 22/27] locking/lockdep: Reuse list entries that are no longer in use

On Thu, Nov 29, 2018 at 08:48:50AM -0800, Bart Van Assche wrote:
> On Thu, 2018-11-29 at 13:01 +0100, Peter Zijlstra wrote:
> > On Thu, Nov 29, 2018 at 11:49:02AM +0100, Peter Zijlstra wrote:
> > > On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> > > >  	/*
> > > >  	 * Remove all dependencies this lock is
> > > >  	 * involved in:
> > > >  	 */
> > > > +	list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
> > > >  		if (entry->class != class && entry->links_to != class)
> > > >  			continue;
> > > >  		links_to = entry->links_to;
> > > >  		WARN_ON_ONCE(entry->class == links_to);
> > > >  		list_del_rcu(&entry->lock_order_entry);
> > > > +		list_move(&entry->alloc_entry, &free_list_entries);
> > > >  		entry->class = NULL;
> > > >  		entry->links_to = NULL;
> > > >  		check_free_class(zapped_classes, class);
> > > 
> > > Hurm.. I'm confused here.
> > > 
> > > The reason you cannot re-use lock_order_entry for the free list is
> > > because of list_del_rcu(), right? But if so, then what ensures the
> > > list_entry is not re-used before its grace period?
> > 
> > Also; if you have to grow lock_list by 16 bytes just to be able to free
> > it, a bitmap allocator is much cheaper, space wise.
> > 
> > Some people seem to really care about the static image size, and
> > lockdep's .data section does matter to them.
> 
> How about addressing this by moving removed list entries to a "zapped_entries"
> list and only moving list entries from the zapped_entries list to the
> free_list_entries list after an RCU grace period? I'm not sure that it is
> possible to implement that approach without introducing a new list_head in
> struct lock_list.

I think we can do this with a free bitmap and an array of 2 pending
bitmaps and an index. Add newly freed entries to the pending bitmap
indicated by the current index; when complete, flip the index -- such
that further new bits go to the other pending bitmap -- and call_rcu().

Then, in the call_rcu() callback, i.e. after a GP has happened, OR our
pending bitmap into the free bitmap, and when the other pending bitmap
isn't empty, flip the index again and start it all again.

This ensures there is at least one full GP between setting a bit and it
landing in the free mask.
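For illustration, a minimal single-threaded sketch of that scheme (all
names are hypothetical, not the eventual lockdep code; the grace period
and call_rcu() are simulated by a direct callback, so the concurrency
handling real lockdep would need is deliberately absent):

```c
#include <stdint.h>

#define NENTRIES 64

static uint64_t free_mask;      /* entries safe to reallocate */
static uint64_t pending[2];     /* entries freed, awaiting a GP */
static int pending_idx;         /* which pending map takes new bits */
static int rcu_pending;         /* stands in for a call_rcu() in flight */

static void rcu_callback(void);

/* Simulate an RCU grace period elapsing and the callback running. */
static void run_grace_period(void)
{
	if (rcu_pending) {
		rcu_pending = 0;
		rcu_callback();
	}
}

/* Free entry i: record it in the current pending bitmap. */
static void zap_entry(int i)
{
	pending[pending_idx] |= 1ULL << i;
	if (!rcu_pending) {
		pending_idx ^= 1;	/* further new bits go to the other map */
		rcu_pending = 1;	/* stands in for call_rcu() */
	}
}

/* Runs after a GP: the map *not* receiving new bits has waited a full GP. */
static void rcu_callback(void)
{
	int done = pending_idx ^ 1;

	free_mask |= pending[done];
	pending[done] = 0;

	if (pending[pending_idx]) {
		pending_idx ^= 1;	/* flip again and start over */
		rcu_pending = 1;
	}
}
```

Any bit set while a callback is in flight lands in the other pending
map, so it only reaches free_mask from the *next* callback -- which is
what gives the "at least one full GP" guarantee above.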
