Message-ID: <1281649657.2305.38.camel@edumazet-laptop>
Date:	Thu, 12 Aug 2010 23:47:37 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	David Miller <davem@...emloft.net>,
	Stephen Hemminger <shemminger@...ux-foundation.org>,
	netdev@...r.kernel.org, bhutchings@...arflare.com,
	Nick Piggin <npiggin@...e.de>
Subject: Re: [PATCH net-next-2.6] bridge: 64bit rx/tx counters

On Thursday, 12 August 2010 at 08:07 -0700, Andrew Morton wrote:
> On Thu, 12 Aug 2010 14:16:15 +0200 Eric Dumazet <eric.dumazet@...il.com> wrote:
> 
> > > And all this open-coded per-cpu counter stuff added all over the place.
> > > Were percpu_counters tested or reviewed and found inadequate and unfixable?
> > > If so, please do tell.
> > > 
> > 
> > percpu_counter tries hard to maintain a view of the current value of
> > the (global) counter. This adds a cost because of a shared cache line
> > and locking. (__percpu_counter_sum() is not very scalable on big hosts;
> > it holds the percpu_counter lock for a possibly long iteration.)
> 
> Could be.  Is percpu_counter_read_positive() unsuitable?
> 

I bet most people want precise counters when doing 'ifconfig lo'.

SNMP applications would be very surprised to get non-increasing values
between two samples, or inexact values.

> > 
> > For network stats we don't want to maintain this central value; we do
> > the folding only when necessary.
> 
> hm.  Well, why?  That big walk across all possible CPUs could be really
> expensive for some applications.  Especially if num_possible_cpus is
> much larger than num_online_cpus, which iirc can happen in
> virtualisation setups; probably it can happen in non-virtualised
> machines too.
> 

Agreed.

> > And this folding has zero effect on
> > concurrent writers (counter updates)
> 
> The fastpath looks a little expensive in the code you've added.  The
> write_seqlock() does an rmw and a wmb() and the stats inc is a 64-bit
> rmw whereas percpu_counters do a simple 32-bit add.  So I'd expect that
> at some suitable batch value, percpu-counters are faster on 32-bit. 
> 

Hmm... are 6 instructions (16 bytes of text) really "a little expensive"
versus 120 instructions if we use percpu_counter?

The following code from drivers/net/loopback.c

	u64_stats_update_begin(&lb_stats->syncp);
	lb_stats->bytes += len;
	lb_stats->packets++;
	u64_stats_update_end(&lb_stats->syncp);

maps on i386 to:

	ff 46 10             	incl   0x10(%esi)      // u64_stats_update_begin(&lb_stats->syncp);
	89 f8                	mov    %edi,%eax       // len into %eax ...
	99                   	cltd                   // ... sign-extended to %edx:%eax
	01 7e 08             	add    %edi,0x8(%esi)  // lb_stats->bytes += len (low 32 bits)
	11 56 0c             	adc    %edx,0xc(%esi)  // ... with carry into the high 32 bits
	83 06 01             	addl   $0x1,(%esi)     // lb_stats->packets++ (low 32 bits)
	83 56 04 00          	adcl   $0x0,0x4(%esi)  // ... with carry into the high 32 bits
	ff 46 10             	incl   0x10(%esi)      // u64_stats_update_end(&lb_stats->syncp);


Exactly 6 added instructions compared to the previous kernel (32bit
counters), and only on 32bit hosts. These instructions are not expensive
(no conditional branches, no extra register pressure) and access private
per-cpu data.

Two calls to __percpu_counter_add(), on the other hand, add about 120
instructions, even on 64bit hosts, wasting precious cpu cycles.
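
For reference, the reader side is just a retry loop; here is a minimal
sketch using the u64_stats_sync fetch helpers on the same lb_stats:

	unsigned int start;
	u64 packets, bytes;

	/* Snapshot both counters; retry if a writer was interleaved. */
	do {
		start = u64_stats_fetch_begin(&lb_stats->syncp);
		packets = lb_stats->packets;
		bytes   = lb_stats->bytes;
	} while (u64_stats_fetch_retry(&lb_stats->syncp, start));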



> They'll usually be slower on 64-bit, until that num_possible_cpus walk
> bites you.
> 

But are you aware we already fold SNMP values using the
for_each_possible_cpu() macro, even before adding 64bit counters? That
walk is not really related to the 64bit stuff...
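
For example, this is roughly what the existing fold looks like
(simplified from net/ipv4/af_inet.c):

	static unsigned long snmp_fold_field(void __percpu *mib, int offt)
	{
		unsigned long res = 0;
		int i;

		/* Walk every possible cpu, not just the online ones. */
		for_each_possible_cpu(i)
			res += *(unsigned long *)(per_cpu_ptr(mib, i) + offt);
		return res;
	}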

> percpu_counters might need some work to make them irq-friendly.  That
> bare spin_lock().
> 
> btw, I worry a bit about seqlocks in the presence of interrupts:
> 

Please note that nothing is assumed about interrupts and seqcounts.

Both readers and writers must mask them if necessary.

In most situations, masking softirqs is enough for networking cases
(updates are performed from the softirq handler, reads from process
context).
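
For example, if an update path could also run in process context,
masking bottom halves around it would be enough (a sketch, reusing the
loopback counters above):

	local_bh_disable();	/* keep the softirq updater from nesting */
	u64_stats_update_begin(&lb_stats->syncp);
	lb_stats->bytes += len;
	lb_stats->packets++;
	u64_stats_update_end(&lb_stats->syncp);
	local_bh_enable();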

> static inline void write_seqcount_begin(seqcount_t *s)
> {
> 	s->sequence++;
> 	smp_wmb();
> }
> 
> are we assuming that the ++ there is atomic wrt interrupts?  I think
> so.  Is that always true for all architectures, compiler versions, etc?
> 

s->sequence++ is certainly not atomic wrt interrupts on RISC arches.

> > For network stack, we also need to update two values, a packet counter
> > and a bytes counter. percpu_counter is not very good for the 'bytes
> > counter', since we would have to use an arbitrarily big bias value.
> 
> OK, that's a nasty problem for percpu-counters.
> 
> > Using several percpu_counters would also probably use more cache lines.
> > 
> > Also please note this stuff is only needed for 32bit arches. 
> > 
> > Using percpu_counter would slow down network stack on modern arches.
> 
> Was this ever quantified?

A single misplacement of the dst refcount was responsible for a 25%
tbench slowdown on a small machine (8 cores). That was with no lock at
all, only atomic operations on a shared cache line...

So I think we could easily quantify a big slowdown by adding two
percpu_counter add() calls in a driver fastpath, on a 16 or 32 core
machine. (It would be a revert of the percpu stuff we added in past
years.)

Improvements would be:

0) Just forget about the 64bit stuff on 32bit arches, as we did from
Linux 0.99 on. People should not run 40Gb links on 32bit kernels :)

1) If we really want the percpu_counter() stuff, find a way to make it
hierarchical, or use a very big BIAS (2^30 ?). And/or reduce
percpu_counter_add() complexity for monotonically increasing unsigned
counters.

2) Avoid the write_seqcount_begin()/end() stuff when a writer changes
only the low order part of the 64bit counter.

   (ie maintain a 32bit percpu value, and atomically touch the shared
upper 32bits (and the seqcount) only when this 32bit percpu value
overflows)

Not sure it's worth the added conditional branch; a rough sketch of this
idea follows.
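
The sketch below uses made-up names and assumes the writer runs with
softirqs masked, so the cpu cannot change under it; a reader would
sample seq, high and low per cpu and retry as usual:

	/* Hypothetical: one of these per cpu. The fast path touches only
	 * the low word; the seqcount and high word change about once per
	 * 4GB of traffic. */
	struct split_stat {
		u32		low;
		u32		high;
		seqcount_t	seq;
	};

	static inline void split_stat_add(struct split_stat *s, u32 val)
	{
		u32 old = s->low;
		u32 new = old + val;

		if (likely(new >= old)) {
			s->low = new;	/* common case: no seqcount at all */
		} else {		/* 32bit wrap */
			write_seqcount_begin(&s->seq);
			s->low = new;
			s->high++;
			write_seqcount_end(&s->seq);
		}
	}

The extra fast-path cost is exactly the conditional branch mentioned
above.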

Thanks

