Date:	Fri, 24 Apr 2009 06:58:39 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Stephen Hemminger <shemminger@...tta.com>
CC:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>, Paul Mackerras <paulus@...ba.org>,
	paulmck@...ux.vnet.ibm.com, Evgeniy Polyakov <zbr@...emap.net>,
	David Miller <davem@...emloft.net>, kaber@...sh.net,
	jeff.chua.linux@...il.com, laijs@...fujitsu.com,
	jengelh@...ozas.de, r000n@...0n.net, linux-kernel@...r.kernel.org,
	netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
	benh@...nel.crashing.org, mathieu.desnoyers@...ymtl.ca
Subject: Re: [PATCH]  netfilter: use per-CPU recursive lock {XIV}

Stephen Hemminger wrote:
> In days of old in 2.6.29, netfilter did locketh using a
> lock of the reader kind when doing its table business, and did
> take a writer when, with pen in hand like an overworked accountant,
> it did replace the tables. This sucketh and caused the single
> lock to fly back and forth like a poor errant boy.
> 
> But then netfilter was blessed with RCU and the performance
> was divine, but alas there were those that suffered for
> trying to replace their many rules one at a time.
> 
> So now RCU must be vanquished from the scene, and better
> chastity belts be placed upon this valuable asset most dear.
> The locks that were but one are now replaced by one per suitor.
> 
> The repair was made after much discussion involving
> Eric the wise, and Linus the foul. With flowers springing
> up amid the thorns some peace has finally prevailed and
> all is soothed. This patch and purple prose were penned
> in honor of "Talk like Shakespeare" day.
> 
> Signed-off-by: Stephen Hemminger <shemminger@...tta.com>


Philip Davis of the University of Liverpool's School of English said:

  "Shakespeare surprises the brain and catches it off guard in
  a manner that produces a sudden burst of activity - a sense 
  of drama created out of the simplest of things."

http://www.physorg.com/news85664210.html
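
The scheme the verse describes, in plainer terms: every CPU gets its
own rwlock. The packet-processing path takes only its local lock for
read, and the table-replace path takes every CPU's lock for write.
A minimal sketch of the read side, reusing the xt_info_locks name
from the hunks quoted below (the helper names here are illustrative
assumptions, not the patch text):

/* one rwlock per CPU, as in the quoted hunks */
static DEFINE_PER_CPU(rwlock_t, xt_info_locks);

/* read side (packet path): take only the local CPU's lock, with
 * BHs disabled so softirq readers cannot interleave with us */
static inline void xt_info_rdlock_bh(void)
{
	local_bh_disable();
	read_lock(&__get_cpu_var(xt_info_locks));
}

static inline void xt_info_rdunlock_bh(void)
{
	read_unlock(&__get_cpu_var(xt_info_locks));
	local_bh_enable();
}

A reader therefore never touches another CPU's lock cache line, which
is the whole point of replacing the single global rwlock.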

> 
> ---
> What hath changed over the last two setting suns:
>   * more words, mostly correct...
> 
>   * no need to locketh for write on the current cpu, 'tis
>     always so
> 
>   * the locking of all cpu's on replace is always done as
>     part of the get_counters cycle, so the synchronize sweep
>     in replace tables is gone, with only a comment remaining
> 
>  include/linux/netfilter/x_tables.h |   55 ++++++++++++++--
>  net/ipv4/netfilter/arp_tables.c    |  125 ++++++++++--------------------------
>  net/ipv4/netfilter/ip_tables.c     |  126 ++++++++++---------------------------
>  net/ipv6/netfilter/ip6_tables.c    |  123 ++++++++++--------------------------
>  net/netfilter/x_tables.c           |   55 ++++++++--------
>  5 files changed, 188 insertions(+), 296 deletions(-)
> 
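
The "get_counters cycle" in that changelog item would look roughly
like this: the replace path visits each CPU in turn, taking that
CPU's lock for write while it folds the CPU's private counters into
the result, so no global lock or synchronize step is needed. A sketch
under those assumptions (get_counters_all_cpus and add_cpu_counters
are hypothetical names, not the patch text):

/* hypothetical helper: fold cpu's private counters into counters[] */
void add_cpu_counters(struct xt_counters *counters, unsigned int cpu,
		      unsigned int nentries);

static void get_counters_all_cpus(struct xt_counters *counters,
				  unsigned int nentries)
{
	unsigned int cpu;

	for_each_possible_cpu(cpu) {
		/* write-locking excludes that CPU's readers while
		 * its counters are snapshotted */
		write_lock_bh(&per_cpu(xt_info_locks, cpu));
		add_cpu_counters(counters, cpu, nentries);
		write_unlock_bh(&per_cpu(xt_info_locks, cpu));
	}
}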


>  
>  static int __init xt_init(void)
>  {
> -	int i, rv;
> +	unsigned int i;
> +	int rv;
> +	static struct lock_class_key xt_lock_key[NR_CPUS];

Could we avoid this [NR_CPUS] array?

> +
> +	for_each_possible_cpu(i) {
> +		rwlock_t *lock = &per_cpu(xt_info_locks, i);
> +
> +		rwlock_init(lock);
> +		lockdep_set_class(lock, xt_lock_key+i);
> +	}


Did you try this instead? A per-CPU lock_class_key still gives each
lock its own lockdep class (needed, since the replace path takes all
of these locks at once), without the static [NR_CPUS] array:

static DEFINE_PER_CPU(struct lock_class_key, xt_locks_key);

static int __init xt_init(void)
{
	unsigned int i;
	int rv;

	for_each_possible_cpu(i) {
		rwlock_t *lock = &per_cpu(xt_info_locks, i);

		rwlock_init(lock);
		/* a distinct lockdep class per CPU, from per-CPU storage */
		lockdep_set_class(lock, &per_cpu(xt_locks_key, i));
	}
...

Thanks
