Message-ID: <20090209091437.5d5cbf48@extreme>
Date:	Mon, 9 Feb 2009 09:14:37 -0800
From:	Stephen Hemminger <shemminger@...tta.com>
To:	Patrick McHardy <kaber@...sh.net>
Cc:	Eric Dumazet <dada1@...mosbay.com>,
	David Miller <davem@...emloft.net>,
	Rick Jones <rick.jones2@...com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	netdev@...r.kernel.org, netfilter-devel@...r.kernel.org
Subject: Re: [RFT 3/3] iptables: lock free counters

On Mon, 09 Feb 2009 16:52:59 +0100
Patrick McHardy <kaber@...sh.net> wrote:

> Eric Dumazet wrote:
> > Stephen Hemminger wrote:
> >> @@ -939,14 +973,30 @@ static struct xt_counters * alloc_counte
> >>  	counters = vmalloc_node(countersize, numa_node_id());
> >>  
> >>  	if (counters == NULL)
> >> -		return ERR_PTR(-ENOMEM);
> >> +		goto nomem;
> >> +
> >> +	tmp = xt_alloc_table_info(private->size);
> >> +	if (!tmp)
> >> +		goto free_counters;
> >> +
> > 
> >> +	xt_zero_table_entries(tmp);
> > This is not correct. We must copy the rules and zero the counters in the copy.
> 
> Indeed.
It is fixed in the next version.
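
For reference, a minimal sketch of what Eric is asking for, assuming the
per-cpu entries[] layout of struct xt_table_info and the IPT_ENTRY_ITERATE
helper from this era of the tree; the function names below are illustrative,
not necessarily what the next version of the patch uses:

#include <linux/netfilter_ipv4/ip_tables.h>	/* struct ipt_entry, IPT_ENTRY_ITERATE */

/* Reset the counters embedded in one rule of the copy (illustrative helper). */
static int zero_entry_counter(struct ipt_entry *e, void *arg)
{
	e->counters.pcnt = 0;
	e->counters.bcnt = 0;
	return 0;
}

/* Copy the live rules into the scratch table_info, then zero only the
 * counters embedded in that copy, so the live table keeps counting while
 * the snapshot is harvested. */
static void clone_entries_zero_counters(struct xt_table_info *newinfo,
					const struct xt_table_info *info)
{
	unsigned int cpu;

	for_each_possible_cpu(cpu) {
		memcpy(newinfo->entries[cpu], info->entries[cpu], info->size);
		IPT_ENTRY_ITERATE(newinfo->entries[cpu], newinfo->size,
				  zero_entry_counter, NULL);
	}
}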

> >>  static int
> >>  do_add_counters(struct net *net, void __user *user, unsigned int len, int compat)
> >> @@ -1393,13 +1422,14 @@ do_add_counters(struct net *net, void __
> >>  		goto free;
> >>  	}
> >>  
> >> -	write_lock_bh(&t->lock);
> >> +	mutex_lock(&t->lock);
> >>  	private = t->private;
> >>  	if (private->number != num_counters) {
> >>  		ret = -EINVAL;
> >>  		goto unlock_up_free;
> >>  	}
> >>  
> >> +	preempt_disable();
> >>  	i = 0;
> >>  	/* Choose the copy that is on our node */
> 
> This isn't actually necessary; it's merely an optimization. Since this
> can take quite a while, it might be nicer not to disable preemption.

We need to stay on the same CPU to avoid a race in which preemption lets two
CPUs update the same counter entry (a 64-bit counter update is not atomic).
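
To make that race concrete, here is a hedged sketch (illustration only, not
the patch itself, and add_counters_on_this_cpu() is a hypothetical name) of
why the update has to be pinned to one CPU: each CPU owns its own copy of the
rule blob, and the 64-bit counter bump is a plain read-modify-write.

#include <linux/netfilter_ipv4/ip_tables.h>	/* struct ipt_entry, struct xt_counters */

/* Illustrative only: add user-supplied counters to this CPU's copy of the
 * rules.  Assumes the caller already verified num_counters == private->number
 * under t->lock, as in the hunk above.  Without preempt_disable(), the task
 * could pick curcpu and then be migrated; it would keep doing non-atomic
 * 64-bit read-modify-writes on curcpu's counters while the packet path
 * running on curcpu updates the same words, corrupting them. */
static void add_counters_on_this_cpu(struct xt_table_info *private,
				     const struct xt_counters *paddc,
				     unsigned int num_counters)
{
	struct ipt_entry *e;
	unsigned int curcpu, i;

	preempt_disable();		/* stay on this CPU for the whole walk */
	curcpu = smp_processor_id();
	e = private->entries[curcpu];	/* this CPU's private copy of the rules */

	for (i = 0; i < num_counters; i++) {
		e->counters.pcnt += paddc[i].pcnt;	/* plain 64-bit adds: two CPUs */
		e->counters.bcnt += paddc[i].bcnt;	/* doing this concurrently race */
		e = (void *)e + e->next_offset;
	}
	preempt_enable();
}

In other words, disabling preemption is what makes "the copy that is on our
node" actually be owned by the running CPU for the duration of the update.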