lists.openwall.net - Open Source and information security mailing list archives
Date: Thu, 30 May 2013 12:25:24 +0400
From: Roman Gushchin <klamm@...dex-team.ru>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: David Miller <davem@...emloft.net>, paulmck@...ux.vnet.ibm.com,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Dipankar Sarma <dipankar@...ibm.com>, zhmurov@...dex-team.ru,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	Alexey Kuznetsov <kuznet@....inr.ac.ru>,
	James Morris <jmorris@...ei.org>,
	Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
	Patrick McHardy <kaber@...sh.net>,
	David Laight <David.Laight@...LAB.COM>
Subject: Re: [PATCH v2] rcu: fix a race in hlist_nulls_for_each_entry_rcu macro

On 29.05.2013 23:06, Eric Dumazet wrote:
> On Wed, 2013-05-29 at 14:09 +0400, Roman Gushchin wrote:
>
> True, these lookup functions are usually structured the same around the
> hlist_nulls_for_each_entry_rcu() loop.
>
> A barrier() right before the loop seems to be a benefit: the size of the
> assembly code is reduced by 48 bytes.
>
> And it's one of the documented ways to handle this kind of problem
> (Documentation/atomic_ops.txt line 114).
>
> I guess we should amend this documentation, eventually.
>
> Thanks, please add your "Signed-off-by" if you agree with the patch.

Signed-off-by: Roman Gushchin <klamm@...dex-team.ru>

Many thanks to you, Paul E. McKenney and David Laight for your patches,
help and participation in this discussion.

> [PATCH] net: force a reload of first item in hlist_nulls_for_each_entry_rcu
>
> Roman Gushchin discovered that udp4_lib_lookup2() was not reloading
> the first item in the RCU-protected list in case the loop was restarted.
>
> This produced soft lockups as in https://lkml.org/lkml/2013/4/16/37
>
> rcu_dereference(X)/ACCESS_ONCE(X) seem to not work as intended if X is
> ptr->field :
>
> In some cases, gcc caches the value of ptr->field in a register.
>
> Use a barrier() to disallow such caching, as documented in
> Documentation/atomic_ops.txt line 114
>
> Thanks a lot to Roman for providing analysis and numerous patches.
> Diagnosed-by: Roman Gushchin <klamm@...dex-team.ru>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Reported-by: Boris Zhmurov <zhmurov@...dex-team.ru>
> ---
>  include/linux/rculist_nulls.h | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
> index 2ae1371..c7557fa 100644
> --- a/include/linux/rculist_nulls.h
> +++ b/include/linux/rculist_nulls.h
> @@ -105,9 +105,14 @@ static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
>   * @head: the head for your list.
>   * @member: the name of the hlist_nulls_node within the struct.
>   *
> + * The barrier() is needed to make sure compiler doesn't cache first element [1],
> + * as this loop can be restarted [2]
> + * [1] Documentation/atomic_ops.txt around line 114
> + * [2] Documentation/RCU/rculist_nulls.txt around line 146
>   */
>  #define hlist_nulls_for_each_entry_rcu(tpos, pos, head, member) \
> -	for (pos = rcu_dereference_raw(hlist_nulls_first_rcu(head)); \
> +	for (({barrier();}), \
> +	     pos = rcu_dereference_raw(hlist_nulls_first_rcu(head)); \
>  		(!is_a_nulls(pos)) && \
>  		({ tpos = hlist_nulls_entry(pos, typeof(*tpos), member); 1; }); \
>  		pos = rcu_dereference_raw(hlist_nulls_next_rcu(pos)))

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html