Message-ID: <1336379153.3752.2273.camel@edumazet-glaptop>
Date: Mon, 07 May 2012 10:25:53 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Deng-Cheng Zhu <dczhu@...s.com>
Cc: Tom Herbert <therbert@...gle.com>, davem@...emloft.net,
netdev@...r.kernel.org
Subject: Re: [PATCH v2] RPS: Sparse connection optimizations - v2
On Mon, 2012-05-07 at 16:01 +0800, Deng-Cheng Zhu wrote:
> Did you really read my patch and understand my comments? When I was
> talking about using rps_sparse_flow (initially cpu_flow), neither
> rps_sock_flow_table nor rps_dev_flow_table was activated (number of
> entries: 0).
I read your patch, and I am concerned about performance issues when
handling typical workloads, say between 0.1 and 20 Mpps on current
hardware. (At 10 Mpps, the per-packet budget is roughly 100 ns, on the
order of a single cache miss.)
The argument "oh, it's only selected when
CONFIG_RPS_SPARSE_FLOW_OPTIMIZATION is set" is wrong.
CONFIG_NR_RPS_MAP_LOOPS is wrong.
Your HZ timeout is yet another dark side of your patch.
Your (flow->dev == skb->dev) test is wrong.
Your flow->ts = now; is wrong (it dirties memory for each packet), as
the sketch below illustrates.
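
To illustrate that last point: a timestamp written unconditionally for
every packet keeps dirtying the cache line holding the flow entry,
bouncing it between CPUs. A minimal sketch of the usual mitigation
(the names below are illustrative, not taken from your patch):

struct flow_entry {
	unsigned long ts;	/* last-seen time, in jiffies */
	/* ... other fields ... */
};

static inline void flow_touch(struct flow_entry *flow, unsigned long now)
{
	/*
	 * Write only when the value actually changes: at high packet
	 * rates many packets arrive within one jiffy, so this keeps
	 * the cache line clean (shared) in the common case.
	 */
	if (flow->ts != now)
		flow->ts = now;
}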
Really, I don't like your patch.
You are kindly asked to find another way to solve your problem, a
generic mechanism that can help others, not only you.
We do think activating RFS is the way to go. It's the standard layer we
added below RPS; it's configurable and it scales. It can be extended at
will with configurable plugins.
For example, with single-queue NICs, it makes sense to select the CPU
based on the output device only, not on the rxhash by itself (a modulo
or something), to reduce false sharing and qdisc/device lock contention
on the TX path.
If your machine has 4 CPUs and 4 NICs, you can instruct the RFS table
to prefer the CPU tied to the NIC that the packet will use for output.
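
A sketch of such a plugin, under the assumption of one NIC per CPU
(pick_cpu_for_tx is an invented name, not a kernel API):

static unsigned int pick_cpu_for_tx(int out_ifindex, unsigned int nr_cpus)
{
	/*
	 * Map each output device to a fixed CPU (here simply the
	 * interface index modulo the CPU count), so that all flows
	 * transmitted on a given NIC are processed on one CPU and
	 * the qdisc/device locks are never contended across CPUs.
	 */
	return (unsigned int)out_ifindex % nr_cpus;
}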