Message-ID: <h2m65634d661004121013uf2c86b81ndded3bb138dee7a9@mail.gmail.com>
Date: Mon, 12 Apr 2010 10:13:03 -0700
From: Tom Herbert <therbert@...gle.com>
To: Changli Gao <xiaosuo@...il.com>
Cc: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH] rps: add flow director support
On Mon, Apr 12, 2010 at 7:27 AM, Changli Gao <xiaosuo@...il.com> wrote:
> On Mon, Apr 12, 2010 at 9:34 PM, Tom Herbert <therbert@...gle.com> wrote:
>> On Sun, Apr 11, 2010 at 2:42 PM, Changli Gao <xiaosuo@...il.com> wrote:
>>> add rps flow director support
>>>
>>> With the rps flow director, users can do weighted packet dispatching
>>> among CPUs. For example, to weight CPU0:CPU1 as 1:3 for eth0's rx-0:
>>>
>> "Flow director" is a misnomer here in that it has no per flow
>> awareness, that is what RFS provides. Please use a different name.
>
> "Flow" here means a bundle of flows, not a flow in the original sense.
> How about "rps_buckets" and "rps_bucket_x"?
>
Ideally, this should replace rps_cpus if it's a better interface;
as it stands, the two would be conflicting interfaces.
>>
>>> localhost linux # echo 4 > /sys/class/net/eth0/queues/rx-0/rps_flows
>>> localhost linux # echo 0 > /sys/class/net/eth0/queues/rx-0/rps_flow_0
>>> localhost linux # echo 1 > /sys/class/net/eth0/queues/rx-0/rps_flow_1
>>> localhost linux # echo 1 > /sys/class/net/eth0/queues/rx-0/rps_flow_2
>>> localhost linux # echo 1 > /sys/class/net/eth0/queues/rx-0/rps_flow_3
>>>
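For reference, here's a tiny userspace simulation of what such a bucket
table does; the names (bucket_to_cpu, select_cpu) and the modulo
reduction are mine, not the patch's:

#include <stdio.h>

#define NR_BUCKETS 4

/* Bucket-to-CPU table matching the example above: bucket 0 -> CPU 0,
 * buckets 1-3 -> CPU 1, i.e. a 1:3 weighting. */
static const int bucket_to_cpu[NR_BUCKETS] = { 0, 1, 1, 1 };

/* Reduce the flow hash to a bucket index and look up the CPU. */
static int select_cpu(unsigned int flow_hash)
{
	return bucket_to_cpu[flow_hash % NR_BUCKETS];
}

int main(void)
{
	int hits[2] = { 0, 0 };
	unsigned int h;

	/* Feed in well-spread hashes and count packets per CPU. */
	for (h = 0; h < 1000; h++)
		hits[select_cpu(h * 2654435761u)]++;

	printf("CPU0: %d  CPU1: %d\n", hits[0], hits[1]);
	return 0;
}

With uniformly distributed hashes this lands roughly 250/750, i.e. the
1:3 split from the example.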
>> It might be better to put this in its own directory
>
> I thought about that before, but since they control the same data in
> the kernel as rps_cpus does, I put them in the same directory.
>
>> and also do it per
>> CPU instead of per hash entry. This should result in far fewer
>> entries, and I'm not sure how you would deal with holes in the hash
>> table for unspecified entries. It would also be nice not to have to
>> specify the number of entries. Maybe something like:
>>
>> localhost linux # echo 1 > /sys/class/net/eth0/queues/rx-0/rps_cpu_map/0
>> localhost linux # echo 3 > /sys/class/net/eth0/queues/rx-0/rps_cpu_map/1
>>
>> To specify CPU 0 with weight 1, CPU 1 with weight 3.
>>
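For concreteness, a rough sketch of how such weights might expand into a
fixed-size bucket map; the expansion scheme and every name here
(expand_weights, NR_BUCKETS) are just one possibility, not existing RPS
code:

#include <stdio.h>

#define NR_BUCKETS 8

/* Assign each bucket to the CPU whose cumulative weight range covers
 * the bucket's position in weight space. */
static void expand_weights(const int *cpus, const int *weights, int n,
			   int *map)
{
	int total = 0, b, i;

	for (i = 0; i < n; i++)
		total += weights[i];

	for (b = 0; b < NR_BUCKETS; b++) {
		int pos = b * total / NR_BUCKETS, acc = 0;

		for (i = 0; i < n; i++) {
			acc += weights[i];
			if (pos < acc) {
				map[b] = cpus[i];
				break;
			}
		}
	}
}

int main(void)
{
	int cpus[] = { 0, 1 }, weights[] = { 1, 3 };
	int map[NR_BUCKETS], b;

	expand_weights(cpus, weights, 2, map);
	for (b = 0; b < NR_BUCKETS; b++)
		printf("bucket %d -> CPU %d\n", b, map[b]);
	return 0;
}

With weights 1 and 3 this gives CPU 0 two of the eight buckets and
CPU 1 the other six, preserving the 1:3 ratio.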
>
> Your way is simpler and more straightforward. My idea has its own advantages:
> 1. the precision of the dispatch ratio can be controlled through rps_flows
> (more buckets allow finer-grained weights).
> 2. dynamic weighted packet dispatching can be done by migrating some flows
> from some CPUs to others. During this operation, only the migrated flows
> are affected, and out-of-order (OOO) delivery occurs only in those flows.
It's probably a little more work, but the CPU->weight mappings could
be implemented to cause minimal disruption in the rps_map. Also, if
OOO is an issue, then the mitigation technique in RFS could be applied
(I believe this will work best when the hash table is larger).
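Roughly, I'm thinking of something like this (purely illustrative; the
two-pass scheme and all names, such as remap, are mine): buckets whose
current CPU still has quota under the new weights stay put, and only
the excess buckets migrate.

#include <stdio.h>

#define NR_BUCKETS 8
#define NR_CPUS    2

/* Recompute bucket->CPU assignments for new weights while keeping as
 * many buckets as possible on their current CPU. Assumes the per-CPU
 * quotas divide NR_BUCKETS evenly. */
static void remap(int *map, const int *weights)
{
	int quota[NR_CPUS], total = 0, b, c;

	for (c = 0; c < NR_CPUS; c++)
		total += weights[c];
	for (c = 0; c < NR_CPUS; c++)
		quota[c] = NR_BUCKETS * weights[c] / total;

	/* Pass 1: a bucket keeps its CPU while that CPU has quota. */
	for (b = 0; b < NR_BUCKETS; b++) {
		if (quota[map[b]] > 0)
			quota[map[b]]--;
		else
			map[b] = -1;	/* must migrate */
	}

	/* Pass 2: move only the marked buckets; flows hashing into any
	 * other bucket never see a CPU change, so no reordering there. */
	for (b = 0, c = 0; b < NR_BUCKETS; b++) {
		if (map[b] != -1)
			continue;
		while (quota[c] == 0)
			c++;
		map[b] = c;
		quota[c]--;
	}
}

int main(void)
{
	/* Start from the map a 1:3 weighting would produce... */
	int map[NR_BUCKETS] = { 0, 0, 1, 1, 1, 1, 1, 1 };
	int old[NR_BUCKETS], weights[NR_CPUS] = { 1, 1 }, b, moved = 0;

	for (b = 0; b < NR_BUCKETS; b++)
		old[b] = map[b];

	remap(map, weights);	/* ...then rebalance to 1:1 */

	for (b = 0; b < NR_BUCKETS; b++)
		if (old[b] != map[b])
			moved++;
	printf("buckets migrated: %d of %d\n", moved, NR_BUCKETS);
	return 0;
}

Going from 1:3 to 1:1 here migrates only two of the eight buckets, so
minimal disruption doesn't have to favor the per-bucket interface.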
Tom
>
> --
> Regards,
> Changli Gao(xiaosuo@...il.com)
>