Date: Fri, 3 Mar 2017 09:14:16 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Byungchul Park <byungchul.park@....com>
Cc: mingo@...nel.org, tglx@...utronix.de, walken@...gle.com,
	boqun.feng@...il.com, kirill@...temov.name,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
	npiggin@...il.com, kernel-team@....com,
	Michal Hocko <mhocko@...nel.org>,
	Nikolay Borisov <nborisov@...e.com>, Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH v5 06/13] lockdep: Implement crossrelease feature

On Fri, Mar 03, 2017 at 09:17:37AM +0900, Byungchul Park wrote:
> On Thu, Mar 02, 2017 at 02:40:31PM +0100, Peter Zijlstra wrote:
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index a95e5d1..7baea89 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -1860,6 +1860,17 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
> >  		}
> >  	}
> > 
> > +	/*
> > +	 * Is the <prev> -> <next> redundant?
> > +	 */
> > +	this.class = hlock_class(prev);
> > +	this.parent = NULL;
> > +	ret = check_noncircular(&this, hlock_class(next), &target_entry);
> > +	if (!ret) /* exists, redundant */
> > +		return 2;
> > +	if (ret < 0)
> > +		return print_bfs_bug(ret);
> > +
> >  	if (!*stack_saved) {
> >  		if (!save_trace(&trace))
> >  			return 0;
> 
> This would be very nice if you allow to add this code. However, the
> prev_gen_id thingy is still useful, though the code above can achieve
> the same. Agree?

So my goal was to avoid prev_gen_id, and yes, I think the above does that.

Now the problem with the above condition is that it makes reports harder to
decipher, because by avoiding adding redundant links to our graph we lose a
possible shorter path. For correctness' sake the length of the cycle is
irrelevant; all that matters is that there is a cycle. But the humans on the
receiving end tend to like shorter cycles.
And I think the same is true for crossrelease: avoiding redundant links
increases cycle length. (And remember, BFS will otherwise find the shortest
cycle.)

That said, I'd be fairly interested in numbers on how many links this avoids.
I'll go make a check_redundant() version of the above and put a proper counter
in, so I can see what it does for a regular boot etc.