Message-ID: <20110425110210.GA24801@hmsreliant.think-freely.org>
Date: Mon, 25 Apr 2011 07:02:10 -0400
From: Neil Horman <nhorman@...driver.com>
To: zhou rui <zhourui.cn@...il.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Tom Herbert <therbert@...gle.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: RPS will assign different smp_processor_id for the same packet?
On Sun, Apr 24, 2011 at 05:36:51PM +0800, zhou rui wrote:
> On Sun, Apr 24, 2011 at 4:00 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> > Le dimanche 24 avril 2011 à 10:00 +0800, zhou rui a écrit :
> >> On Sun, Apr 24, 2011 at 3:56 AM, Tom Herbert <therbert@...gle.com> wrote:
> >> > On Sat, Apr 23, 2011 at 8:31 AM, zhou rui <zhourui.cn@...il.com> wrote:
> >> >> one more question is:
> >> >>
> >> >> in the function "int netif_receive_skb(struct sk_buff *skb)"
> >> >>
> >> >> cpu = get_rps_cpu(skb->dev, skb, &rflow);
> >> >> if (cpu >= 0) {
> >> >>         ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
> >> >>         ....
> >> >>
> >> >> could the cpu be different from the current processor id (smp_processor_id)?
> >> >> let's say: get_rps_cpu -> cpu0, smp_processor_id -> cpu1
> >> >> when this happens, does it mean that cpu1 is handling the softirq but
> >> >> has to divert the packet to cpu0 (via a new softirq)?
> >> >>
> >> >> so for one packet it involves 2 softirqs?
> >> >>
> >> >> is it possible to run get_rps_cpu in the interrupt, then let the target
> >> >> cpu do only one softirq to handle the packet?
> >> >>
> >> > Yes, this is what a non-NAPI driver would do.
> >> >
> >> > Tom
> >> >
> >> non-NAPI will run get_rps_cpu in irq context, but why does NAPI run
> >> get_rps_cpu in softirq? (if I understand correctly, netif_receive_skb
> >> executes in softirq context?)
> >
> > It's hard to understand what your problem or question is.
> >
> > All heavy receive-side networking work is handled in softirq context.
> >
> > NAPI allows reducing the number of hardware IRQs in stress/load
> > situations by fetching several frames at once.
> >
> >
> >
> >
>
> my understanding:
>
> non-NAPI scenario:
>
> netif_rx (in irq: get_rps_cpu, enqueue_to_backlog to deliver the packet
> to a cpu queue) ------> net_rx_action (in softirq: dequeue and process
> the packet)
>
>
> NAPI:
>
> what does RPS do? (in irq) ------> net_rx_action (softirq) ----->
> netif_receive_skb (get_rps_cpu, enqueue packet)
>
> so my question is:
> for NAPI, get_rps_cpu will be done in softirq?
>
> if the above is true, will this happen?
> packet_for_cpu_1 --> cpu0 (netif_receive_skb, in softirq) ---> delivered
> to cpu1 (softirq)
As Eric noted, NAPI-enabled drivers using RPS will have a flow like the
one he described:
irq (on the receiving cpu)
    napi_schedule (driver)
driver poll (napi softirq, receiving cpu)
    netif_receive_skb
        enqueue_to_backlog
            ipi (to the target cpu)
                napi_schedule (backlog dev, target cpu)
backlog dev poll (napi softirq, target cpu)
    netif_receive_skb
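
To make the steering step concrete, here is a toy userspace C model of
that decision.  It is not kernel code: flow_hash_model(),
get_rps_cpu_model() and enqueue_backlog_model() are made-up stand-ins
for the real rxhash computation, get_rps_cpu() and enqueue_to_backlog()
in net/core/dev.c.

/* Toy userspace model of RPS steering; stand-ins, not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define NR_CPUS_MODEL 4

struct flow {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
};

/* Stand-in for the flow hash; the kernel uses a jhash over the tuple. */
static uint32_t flow_hash_model(const struct flow *f)
{
	return f->saddr ^ f->daddr ^
	       (((uint32_t)f->sport << 16) | f->dport);
}

/* Stand-in for get_rps_cpu(): map the flow hash onto a cpu. */
static int get_rps_cpu_model(const struct flow *f)
{
	return flow_hash_model(f) % NR_CPUS_MODEL;
}

/* Stand-in for enqueue_to_backlog(): show when an ipi is needed. */
static void enqueue_backlog_model(const struct flow *f, int this_cpu)
{
	int target = get_rps_cpu_model(f);

	if (target == this_cpu)
		printf("cpu%d: flow stays local, one softirq\n", this_cpu);
	else
		printf("cpu%d: enqueue to cpu%d backlog, ipi, second softirq\n",
		       this_cpu, target);
}

int main(void)
{
	struct flow f = { 0x0a000001, 0x0a000002, 12345, 80 };

	/* pretend cpu1 is running netif_receive_skb in its napi softirq */
	enqueue_backlog_model(&f, 1);
	return 0;
}

The real get_rps_cpu() indexes the device's rps_map (configured via
/sys/class/net/<dev>/queues/rx-<n>/rps_cpus) with the flow hash rather
than using a plain modulo, but the shape of the decision is the same.
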
Drivers using netif_rx do the same thing; the only difference is that
they schedule the local backlog device for a napi poll rather than their
own device.  This is done because the ipi to the remote cpu is issued
from within the napi softirq, so we need to kick the local cpu to run a
napi poll cycle after such a driver receives a frame.
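
In the same toy style, the difference between the two entry points can
be sketched as below; netif_rx_model() and napi_poll_model() are
hypothetical helpers reusing enqueue_backlog_model() from the model
above, not the real kernel entry points:

/* Continuation of the toy model above; still not kernel code. */

/* Non-NAPI driver path: runs in hard-irq context. */
static void netif_rx_model(const struct flow *f, int this_cpu)
{
	/*
	 * Steer in the irq, then schedule the *local* backlog napi:
	 * the ipi to a remote cpu is only issued from within the napi
	 * softirq, so the local cpu must be kicked to run a poll cycle.
	 */
	enqueue_backlog_model(f, this_cpu);
	printf("cpu%d: napi_schedule(local backlog)\n", this_cpu);
}

/* NAPI driver path: called from the driver's poll, already in softirq. */
static void napi_poll_model(const struct flow *f, int this_cpu)
{
	/* the napi softirq is already running here, so it can issue
	 * the ipi itself; no extra local kick is needed */
	enqueue_backlog_model(f, this_cpu);
}

Calling netif_rx_model(&f, 1) instead of the bare enqueue in main()
shows the extra local napi_schedule that netif_rx drivers pay for.
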
Neil