Message-ID: <r2q412e6f7f1004160710j6d575f36t8e39a283328cf2d7@mail.gmail.com>
Date: Fri, 16 Apr 2010 22:10:28 +0800
From: Changli Gao <xiaosuo@...il.com>
To: hadi@...erus.ca
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Rick Jones <rick.jones2@...com>,
David Miller <davem@...emloft.net>, therbert@...gle.com,
netdev@...r.kernel.org, robert@...julf.net, andi@...stfloor.org
Subject: Re: rps perfomance WAS(Re: rps: question
On Fri, Apr 16, 2010 at 9:49 PM, jamal <hadi@...erus.ca> wrote:
> On Fri, 2010-04-16 at 21:34 +0800, Changli Gao wrote:
>
>
> my observation is:
> s->total is the sum of all packets received by cpu (some directly from
> ethernet)
It is meaningless currently. If rps is enabled, it may be twice the
number of packets received, because one packet may be counted twice:
once in enqueue_to_backlog(), and again in __netif_receive_skb(). I
posted a patch to fix this problem:
http://patchwork.ozlabs.org/patch/50217/
If you don't apply my patch, you'd better refer to /proc/net/dev for
the total packet count.
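For reference, a quick userspace sketch (my own illustration, not
kernel code) that sums the per-interface rx packet counts from
/proc/net/dev, which should give you the per-packet total independent
of rps:

/* sum the "rx packets" column of /proc/net/dev */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512];
	unsigned long long rx_bytes, rx_packets, total = 0;
	FILE *fp = fopen("/proc/net/dev", "r");

	if (!fp) {
		perror("fopen");
		return 1;
	}
	/* skip the two header lines */
	fgets(line, sizeof(line), fp);
	fgets(line, sizeof(line), fp);
	while (fgets(line, sizeof(line), fp)) {
		/* fields after ':' are: bytes packets errs drop ... */
		char *p = strchr(line, ':');

		if (p && sscanf(p + 1, "%llu %llu", &rx_bytes, &rx_packets) == 2)
			total += rx_packets;
	}
	fclose(fp);
	printf("total rx packets: %llu\n", total);
	return 0;
}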
> s->received_rps was what the count receiver cpu saw incoming if they
> were sent by another cpu.
Maybe its name confused you.
/* Called from hardirq (IPI) context */
static void trigger_softirq(void *data)
{
struct softnet_data *queue = data;
__napi_schedule(&queue->backlog);
__get_cpu_var(netdev_rx_stat).received_rps++;
}
The function above is called from the IPI hardirq handler. It counts
the number of IPIs received; it is actually ipi_rps that you need.
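If it helps, here is a similar sketch to dump total and received_rps
per cpu, assuming the current net-next softnet_seq_show layout where
received_rps is printed as the last of the ten columns (adjust if your
tree differs):

/* print total and received_rps per cpu from /proc/net/softnet_stat;
 * assumes ten %08x fields per line:
 * total dropped time_squeeze 0 0 0 0 0 cpu_collision received_rps
 */
#include <stdio.h>

int main(void)
{
	unsigned int total, dropped, squeeze, pad[5], collision, received_rps;
	int cpu = 0;
	FILE *fp = fopen("/proc/net/softnet_stat", "r");

	if (!fp) {
		perror("fopen");
		return 1;
	}
	while (fscanf(fp, "%x %x %x %x %x %x %x %x %x %x",
		      &total, &dropped, &squeeze,
		      &pad[0], &pad[1], &pad[2], &pad[3], &pad[4],
		      &collision, &received_rps) == 10)
		printf("cpu%d: total=%u received_rps(IPIs received)=%u\n",
		       cpu++, total, received_rps);
	fclose(fp);
	return 0;
}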
> s-> ipi_rps is the times we tried to enq to remote cpu but found it to
> be empty and had to send an IPI.
> ipi_rps can be < received_rps if we receive > 1 packet without
> generating an IPI. What did i miss?
>
--
Regards,
Changli Gao(xiaosuo@...il.com)