Message-ID: <1317166968.2845.45.camel@bwh-desktop>
Date: Wed, 28 Sep 2011 00:42:48 +0100
From: Ben Hutchings <bhutchings@...arflare.com>
To: Amir Vadai <amirv@...lanox.co.il>
Cc: Tom Herbert <therbert@...gle.com>, oren@...lanox.co.il,
liranl@...lanox.co.il, netdev@...r.kernel.org,
Diego Crupnicoff <Diego@...lanox.com>
Subject: Re: RFS issue: no HW filter for paused stream
On Thu, 2011-09-22 at 09:11 +0300, Amir Vadai wrote:
> Looks good.
> And now the code is much clearer.
Does that mean that this change *works* for you?
Ben.
[...]
> > But that means we never move the flow to a new CPU in the non-
> > accelerated case. So maybe the proper change would be:
> >
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -2652,10 +2652,7 @@ static struct rps_dev_flow *
> >  set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
> >  	    struct rps_dev_flow *rflow, u16 next_cpu)
> >  {
> > -	u16 tcpu;
> > -
> > -	tcpu = rflow->cpu = next_cpu;
> > -	if (tcpu != RPS_NO_CPU) {
> > +	if (next_cpu != RPS_NO_CPU) {
> >  #ifdef CONFIG_RFS_ACCEL
> >  		struct netdev_rx_queue *rxqueue;
> >  		struct rps_dev_flow_table *flow_table;
> > @@ -2683,16 +2680,16 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
> >  			goto out;
> >  		old_rflow = rflow;
> >  		rflow = &flow_table->flows[flow_id];
> > -		rflow->cpu = next_cpu;
> >  		rflow->filter = rc;
> >  		if (old_rflow->filter == rflow->filter)
> >  			old_rflow->filter = RPS_NO_FILTER;
> >  	out:
> >  #endif
> >  		rflow->last_qtail =
> > -			per_cpu(softnet_data, tcpu).input_queue_head;
> > +			per_cpu(softnet_data, next_cpu).input_queue_head;
> >  	}
> > 
> > +	rflow->cpu = next_cpu;
> >  	return rflow;
> >  }
> >
> > --- END ---
> >
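For illustration, here is a minimal user-space model of the control flow after
the change above. This is a sketch, not the kernel code: model_rps_dev_flow,
model_set_rps_cpu, input_queue_head and NR_MODEL_CPUS are invented stand-ins
for the kernel's struct rps_dev_flow, set_rps_cpu(), the per-CPU
softnet_data.input_queue_head and the real CPU count, and the whole
CONFIG_RFS_ACCEL filter-programming block is reduced to a comment.

/* Minimal user-space model of set_rps_cpu() after the patch above.
 * Only the ordering under discussion is modelled: rflow->cpu is now
 * assigned unconditionally at the end, so the flow is moved to
 * next_cpu even when no hardware (RFS_ACCEL) filter is programmed.
 */
#include <stdio.h>
#include <stdint.h>

#define RPS_NO_CPU	0xffff
#define NR_MODEL_CPUS	4

struct model_rps_dev_flow {
	uint16_t cpu;
	unsigned int last_qtail;
};

/* stand-in for per_cpu(softnet_data, cpu).input_queue_head */
static unsigned int input_queue_head[NR_MODEL_CPUS];

static struct model_rps_dev_flow *
model_set_rps_cpu(struct model_rps_dev_flow *rflow, uint16_t next_cpu)
{
	if (next_cpu != RPS_NO_CPU) {
		/* In the kernel, the CONFIG_RFS_ACCEL block may repoint
		 * rflow at a different flow-table entry here before
		 * last_qtail is recorded. */
		rflow->last_qtail = input_queue_head[next_cpu];
	}

	/* The point of the patch: update the steering CPU whether or
	 * not a hardware filter was (re)programmed, so the flow also
	 * moves in the non-accelerated case. */
	rflow->cpu = next_cpu;
	return rflow;
}

int main(void)
{
	struct model_rps_dev_flow flow = { .cpu = 1, .last_qtail = 0 };

	input_queue_head[2] = 42;	/* pretend CPU 2 has seen 42 packets */
	model_set_rps_cpu(&flow, 2);
	printf("flow steered to CPU %u, last_qtail %u\n",
	       (unsigned int)flow.cpu, flow.last_qtail);
	return 0;
}

Built with a plain cc and run, this prints that the flow was steered to CPU 2
with last_qtail 42, i.e. the CPU assignment now happens even though no
hardware filter path was taken.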
--
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.