Message-ID: <1321636462.2883.3.camel@bwh-desktop>
Date: Fri, 18 Nov 2011 17:14:22 +0000
From: Ben Hutchings <bhutchings@...arflare.com>
To: Sasha Levin <levinsasha928@...il.com>
CC: Krishna Kumar <krkumar2@...ibm.com>, <rusty@...tcorp.com.au>,
<mst@...hat.com>, <netdev@...r.kernel.org>, <kvm@...r.kernel.org>,
<davem@...emloft.net>, <virtualization@...ts.linux-foundation.org>
Subject: Re: [RFC] [ver3 PATCH 3/6] virtio_net: virtio_net driver changes

On Fri, 2011-11-18 at 18:18 +0200, Sasha Levin wrote:
> On Fri, 2011-11-18 at 15:40 +0000, Ben Hutchings wrote:
> > On Fri, 2011-11-18 at 08:24 +0200, Sasha Levin wrote:
> > > On Fri, 2011-11-18 at 01:08 +0000, Ben Hutchings wrote:
> > > > On Fri, 2011-11-11 at 18:34 +0530, Krishna Kumar wrote:
> > > > > Changes for multiqueue virtio_net driver.
> > > > [...]
> > > > > @@ -677,25 +730,35 @@ static struct rtnl_link_stats64 *virtnet
> > > > > {
> > > > > struct virtnet_info *vi = netdev_priv(dev);
> > > > > int cpu;
> > > > > - unsigned int start;
> > > > >
> > > > > for_each_possible_cpu(cpu) {
> > > > > - struct virtnet_stats __percpu *stats
> > > > > - = per_cpu_ptr(vi->stats, cpu);
> > > > > - u64 tpackets, tbytes, rpackets, rbytes;
> > > > > -
> > > > > - do {
> > > > > - start = u64_stats_fetch_begin(&stats->syncp);
> > > > > - tpackets = stats->tx_packets;
> > > > > - tbytes = stats->tx_bytes;
> > > > > - rpackets = stats->rx_packets;
> > > > > - rbytes = stats->rx_bytes;
> > > > > - } while (u64_stats_fetch_retry(&stats->syncp, start));
> > > > > -
> > > > > - tot->rx_packets += rpackets;
> > > > > - tot->tx_packets += tpackets;
> > > > > - tot->rx_bytes += rbytes;
> > > > > - tot->tx_bytes += tbytes;
> > > > > + int qpair;
> > > > > +
> > > > > + for (qpair = 0; qpair < vi->num_queue_pairs; qpair++) {
> > > > > + struct virtnet_send_stats __percpu *tx_stat;
> > > > > + struct virtnet_recv_stats __percpu *rx_stat;
> > > >
> > > > While you're at it, you can drop the per-CPU stats and make them only
> > > > per-queue. There is unlikely to be any benefit in maintaining them
> > > > per-CPU while receive and transmit processing is serialised per-queue.
> > >
> > > It allows you to update stats without a lock.
> >
> > But you'll already be holding a lock related to the queue.
>
> Right, but the queue lock is only held while manipulating the queue
> itself; we don't hold it while processing the data - which is when we
> usually need to update stats.
[...]
The *stack* already holds the appropriate lock when it calls the NAPI
poll function or the ndo_start_xmit function.
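
To illustrate (a rough sketch only - the struct and function names
below are made up, not taken from the patch): since the stack
serialises each queue's NAPI poll and ndo_start_xmit, a plain
per-queue counter protected by u64_stats_sync is enough, with no
writer-side lock in the driver:

#include <linux/u64_stats_sync.h>

struct virtnet_queue_stats {
	struct u64_stats_sync syncp;
	u64 packets;
	u64 bytes;
};

/* Writer side: only ever called from this queue's NAPI poll or
 * ndo_start_xmit, so the per-queue serialisation done by the stack
 * means no additional lock is needed here. */
static void virtnet_queue_stats_add(struct virtnet_queue_stats *stats,
				    unsigned int len)
{
	u64_stats_update_begin(&stats->syncp);
	stats->packets++;
	stats->bytes += len;
	u64_stats_update_end(&stats->syncp);
}

/* Reader side (ndo_get_stats64): sum over queues instead of over all
 * possible CPUs, using the same fetch/retry pattern the patch already
 * uses above. */
static void virtnet_queue_stats_read(struct virtnet_queue_stats *stats,
				     u64 *packets, u64 *bytes)
{
	unsigned int start;

	do {
		start = u64_stats_fetch_begin(&stats->syncp);
		*packets = stats->packets;
		*bytes = stats->bytes;
	} while (u64_stats_fetch_retry(&stats->syncp, start));
}
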
Ben.
--
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.