Message-ID: <1336164807.3752.465.camel@edumazet-glaptop>
Date: Fri, 04 May 2012 22:53:27 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Tom Herbert <therbert@...gle.com>
Cc: Deng-Cheng Zhu <dczhu@...s.com>, davem@...emloft.net,
netdev@...r.kernel.org
Subject: Re: [PATCH v2] RPS: Sparse connection optimizations - v2
On Fri, 2012-05-04 at 17:47 +0200, Eric Dumazet wrote:
> On Fri, 2012-05-04 at 08:31 -0700, Tom Herbert wrote:
> > > I think the mechanisms of rps_dev_flow_table and cpu_flow (in this
> > > patch) are different: the former works together with
> > > rps_sock_flow_table, whose CPU info is based on recvmsg calls by the
> > > application. But in tests like mine, there is no application involved.
> > >
> > While rps_sock_flow_table is currently only managed by recvmsg, it is
> > still the general mechanism that maps flows to CPUs for steering.
> > Nothing prevents you from populating and managing entries in other
> > ways.
>
> It could be done from a netfilter module, activated in the FORWARD
> chain, for example.
>
>
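For instance, an (untested) sketch of such a hook: rps_record_sock_flow()
and rps_sock_flow_table are real kernel symbols, but the module itself is
hypothetical, and it assumes skb->rxhash is already valid (e.g. computed
by the NIC or by get_rps_cpu()).

	/* Hypothetical module: record the current CPU for forwarded
	 * flows, so that RPS steers later packets of the same flow
	 * back to this CPU.
	 */
	static unsigned int rps_record_forward(unsigned int hooknum,
					       struct sk_buff *skb,
					       const struct net_device *in,
					       const struct net_device *out,
					       int (*okfn)(struct sk_buff *))
	{
		struct rps_sock_flow_table *sft;

		rcu_read_lock();
		sft = rcu_dereference(rps_sock_flow_table);
		if (sft && skb->rxhash)
			rps_record_sock_flow(sft, skb->rxhash);
		rcu_read_unlock();

		return NF_ACCEPT;
	}

	static struct nf_hook_ops rps_record_ops __read_mostly = {
		.hook     = rps_record_forward,
		.pf       = NFPROTO_IPV4,
		.hooknum  = NF_INET_FORWARD,
		.priority = NF_IP_PRI_FIRST,
	};
	/* nf_register_hook(&rps_record_ops) at module init,
	 * nf_unregister_hook() at module exit. */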
A good indicator of the network load of a CPU would be to track
&per_cpu(softnet_data, cpu)->input_pkt_queue.qlen in an EWMA
(exponentially weighted moving average).
We could then dynamically adjust the number of active CPUs in the RPS
map according to the load of the machine.
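A minimal sketch of such a sampler (the softnet_load_ewma[] array and
the 1/8 weight are made up for illustration; it would run from a
periodic timer):

	/* Fixed-point EWMA, scaled by 8: avg <- avg - avg/8 + qlen,
	 * so the true average backlog is softnet_load_ewma[cpu] >> 3.
	 */
	static unsigned long softnet_load_ewma[NR_CPUS];

	static void softnet_load_sample(void)
	{
		int cpu;

		for_each_online_cpu(cpu) {
			unsigned int qlen =
				per_cpu(softnet_data, cpu).input_pkt_queue.qlen;

			softnet_load_ewma[cpu] -= softnet_load_ewma[cpu] >> 3;
			softnet_load_ewma[cpu] += qlen;
		}
	}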
On low load, the CPU handling the NIC interrupt could also bypass RPS
entirely, avoiding IPIs to other CPUs and keeping overhead low. The
current selection in get_rps_cpu():

	tcpu = map->cpus[((u64) skb->rxhash * map->len) >> 32];

would become:

	if (map->curlen) {
		tcpu = map->cpus[((u64) skb->rxhash * map->curlen) >> 32];
		if (cpu_online(tcpu))
			return tcpu;
	}
	return -1;
Every second or so (to limit out-of-order delivery), curlen could be
incremented or decremented within [0 .. map->len] as the load increases
or decreases.
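A sketch of that periodic adjustment (map->curlen is the new field
proposed above; the load_above_high_mark()/load_below_low_mark()
helpers, comparing the EWMA against tunable thresholds, are
hypothetical):

	/* Grow or shrink the active part of the RPS map by at most one
	 * CPU per interval, so flows are not reshuffled too often.
	 * A plain integer store is atomic enough for the lockless
	 * readers in get_rps_cpu().
	 */
	static void rps_map_adjust(struct rps_map *map)
	{
		if (load_above_high_mark() && map->curlen < map->len)
			map->curlen++;
		else if (load_below_low_mark() && map->curlen > 0)
			map->curlen--;
	}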