Message-ID: <AANLkTiltF2APEnqHyHTtHA7JMNp3HX3aCu9UL3nUJ5_u@mail.gmail.com>
Date: Sat, 5 Jun 2010 21:26:05 +0800
From: Changli Gao <xiaosuo@...il.com>
To: hadi@...erus.ca
Cc: Eric Dumazet <eric.dumazet@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Tom Herbert <therbert@...gle.com>,
Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [RFC] act_cpu: redirect skb receiving to a special CPU.
On Sat, Jun 5, 2010 at 9:07 PM, jamal <hadi@...erus.ca> wrote:
> Changli,
>
> I like the idea..
>
> My preference would be to not change ingress qdisc to have queues.
The ingress qdisc won't have any queues, only a class tree. The
ingress_queue path will be something like this:
while (1) {
        /* run the filters attached at this level */
        result = tc_classify(..., &res);
        /* look up the class the filters selected */
        cl = ingress_find(res.classid, ...);
        /* a leaf class: we are done */
        if (!cl->level)
                break;
        ...
}
Then we can classify skbs in a tree-like manner.
> The cpuid should be sufficient to map to a remote cpu queue, no?
It should be sufficient, but it isn't efficient. With the map option, we
can use cls_flow to map traffic to a classid, and then use the act_cpu
map option to turn that classid into a cpuid; a rough sketch follows.
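Roughly, the map option boils down to something like the sketch below
(the struct and field names here are only illustrative, they are not
necessarily what the patch will end up using):

#include <linux/types.h>
#include <linux/pkt_sched.h>    /* TC_H_MIN() */
#include <net/sch_generic.h>    /* struct tcf_result */

/* Hypothetical per-action parameters, for illustration only. */
struct tcf_cpu_params {
        bool    map;            /* if set, derive the cpu from res->classid */
        u32     offset;         /* classid minor = cpuid + offset */
        u32     cpuid;          /* fixed target CPU when map is off */
};

/* Pick the target CPU for an skb that cls_flow has already mapped
 * to a classid. */
static u32 tcf_cpu_target(const struct tcf_cpu_params *p,
                          const struct tcf_result *res)
{
        if (!p->map)
                return p->cpuid;
        /* class IDs start at 1, CPU IDs at 0, hence the offset */
        return TC_H_MIN(res->classid) - p->offset;
}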
> Now, if you could represent each cpu as a netdevice, then we wouldnt
> need any change;-> And we could have multiple types of ways to redirect
> to cpus instead of just doing IPIs - example, ive always thought of
> sending over something like HT (I think it would be a lot cheaper).
I won't implement a new netdevice; I'll reuse softnet instead. I'll even
reuse the enqueue_to_backlog() introduced by RPS and, of course, use IPIs
just as RPS does (a rough sketch of that path is below). Is there another
way to trigger an IRQ on the remote CPU?
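For illustration only, the redirect could end up looking roughly like the
code below. enqueue_to_backlog() is currently static in net/core/dev.c,
so it would have to be exported (or wrapped) first, and the hook follows
the act_mirred clone-and-steal pattern just to keep the sketch safe and
simple; the details are not settled yet.

#include <linux/cpumask.h>      /* cpu_online(), nr_cpu_ids */
#include <linux/pkt_cls.h>      /* TC_ACT_* */
#include <linux/pkt_sched.h>    /* TC_H_MIN() */
#include <linux/skbuff.h>
#include <linux/smp.h>          /* smp_processor_id() */
#include <net/act_api.h>        /* struct tc_action */

/* Assumed to be made non-static (or wrapped) in net/core/dev.c. */
extern int enqueue_to_backlog(struct sk_buff *skb, int cpu,
                              unsigned int *qtail);

/* Illustrative act_cpu execute hook.  Like act_mirred, it works on a
 * clone and returns TC_ACT_STOLEN so the caller drops the original;
 * the clone sits in the target CPU's softnet backlog until RPS kicks
 * that CPU with an IPI. */
static int tcf_cpu(struct sk_buff *skb, struct tc_action *a,
                   struct tcf_result *res)
{
        /* classid -> cpuid map from the sketch above, offset 1 assumed */
        u32 cpu = TC_H_MIN(res->classid) - 1;
        struct sk_buff *clone;
        unsigned int qtail;

        if (cpu >= nr_cpu_ids || !cpu_online(cpu) ||
            cpu == smp_processor_id())
                return TC_ACT_OK;       /* just receive on this CPU */

        clone = skb_clone(skb, GFP_ATOMIC);
        if (!clone)
                return TC_ACT_OK;

        enqueue_to_backlog(clone, cpu, &qtail); /* frees the clone on drop */
        return TC_ACT_STOLEN;
}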
>
> I didnt queit understand the map OFFSET part. is this part of rfs?
>
No. Since class IDs start from 1 but CPU IDs start from 0, I need to
subtract/add an offset from/to the class ID to map class IDs to CPU IDs:
e.g. with an offset of 1, class 1:1 maps to CPU 0, 1:2 to CPU 1, and so on.
--
Regards,
Changli Gao(xiaosuo@...il.com)