Message-ID: <1340859654.26242.201.camel@edumazet-glaptop>
Date: Thu, 28 Jun 2012 07:00:54 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Rick Jones <rick.jones2@...com>
Cc: Ben Greear <greearb@...delatech.com>,
Stephen Hemminger <shemminger@...tta.com>,
Tom Parkin <tparkin@...alix.com>, netdev@...r.kernel.org,
David.Laight@...LAB.COM, James Chapman <jchapman@...alix.com>
Subject: Re: [PATCH v2] l2tp: use per-cpu variables for u64_stats updates
On Wed, 2012-06-27 at 16:01 -0700, Rick Jones wrote:
> Today, sure. Generalizing to packet counters, that bloat is likely
> on its way. At 100 Gbit/s Ethernet, that is upwards of 147 million
> packets per second each way. At 1 GbE it is 125 million octets per
> second. So, if 32-bit octet counters were insufficient for 1 GbE,
> 32-bit packet counters will likely be insufficient for 100GbE.
>
> Or, I suppose, 3 or more bonded 40 GbEs or 10 or more bonded 10 GbEs
> (unlikely though that last one may be) assuming there is stats
> aggregation in the bond interface.
Note that I am all for 64bit counters on 64bit kernels because they are
almost[1] free, since they fit in a machine word (unsigned long).
tx_dropped is the count of dropped _packets_.
If more than 32 bits are needed, and someone must run this 100GbE on a
32-bit machine from the last century, he really has a bigger problem.
[1] The LLTX drivers case: since ndo_start_xmit() can be run
concurrently by many cpus, safely updating an "unsigned long" requires
additional hassle:
1) Use of a spinlock to protect the update.
2) Use atomic_long_t instead of "unsigned long"
3) Use percpu data
3) is overkill for devices with light traffic, because it consumes a
lot of RAM on machines with 2048 possible cpus, _and_ the reader must
fold the data from all possible cpus.