Message-ID: <9d793ea2f58c16ec1ceca0124eed4d67.squirrel@www.codeaurora.org>
Date: Thu, 15 Jan 2015 22:29:26 -0000
From: subashab@...eaurora.org
To: "Eric Dumazet" <eric.dumazet@...il.com>
Cc: "Prasad Sodagudi" <psodagud@...eaurora.org>,
netdev@...r.kernel.org, "Tom Herbert" <therbert@...gle.com>
Subject: Re: [PATCH net] net: rps: fix cpu unplug
Thanks for the patch. I shall try it out and provide feedback soon.
But we think the race condition issue is different. The crash we observed
was in the process_queue handling.

On a CPU hotplug event, the NAPI poll_list is moved over from the offline
CPU to the CPU on which dev_cpu_callback() was called. This happens in
dev_cpu_callback(), in the context of the hotplug framework's notifier
chain. In the same notifier context, dev_cpu_callback() also dequeues the
offline CPU's input_pkt_queue and process_queue and sends those packets up
the network stack, and this is where I think the race/problem is.
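The relevant earlier part of dev_cpu_callback() looks roughly like the
sketch below (trimmed and paraphrased from net/core/dev.c of this kernel,
not a verbatim quote); the key point is that the offline CPU's backlog
napi_struct lands on the current CPU's poll_list and NET_RX_SOFTIRQ is
raised:

	/* Sketch (paraphrased): oldsd is the offline CPU's softnet_data,
	 * sd the current CPU's, oldcpu the hot-unplugged CPU id. The
	 * offline CPU's poll_list, which still contains its backlog
	 * napi_struct, is spliced onto this CPU's poll_list, so
	 * net_rx_action() here will later run process_backlog() against
	 * the offline CPU's softnet_data.
	 */
	local_irq_disable();
	sd = &per_cpu(softnet_data, smp_processor_id());
	oldsd = &per_cpu(softnet_data, oldcpu);

	/* Append NAPI poll list from offline CPU. */
	if (!list_empty(&oldsd->poll_list)) {
		list_splice_init(&oldsd->poll_list, &sd->poll_list);
		raise_softirq_irqoff(NET_RX_SOFTIRQ);
	}
	local_irq_enable();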
Context 1: The online CPU starts processing the poll_list from
net_rx_action(), since a softirq was raised in dev_cpu_callback();
process_backlog() drains the process_queue (a sketch of this path follows
the snippet below).

Context 2: The hotplug notifier dev_cpu_callback() drains the same queues
and calls netif_rx_internal().

From dev_cpu_callback():
	/* Process offline CPU's input_pkt_queue */
	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
		netif_rx_internal(skb);
		input_queue_head_incr(oldsd);
	}
	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
		netif_rx_internal(skb);
		input_queue_head_incr(oldsd);
	}
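To make the overlap concrete, the other side of the race,
process_backlog() running from net_rx_action() on the online CPU, drains
the very same queue with the lockless helper; a simplified sketch
(paraphrased from net/core/dev.c, locking details trimmed, not a verbatim
quote):

	/* Sketch: process_backlog() as invoked via the spliced backlog
	 * napi_struct. container_of() recovers the *offline* CPU's
	 * softnet_data here, so this loop and the dev_cpu_callback()
	 * loops above can both be pulling skbs off oldsd->process_queue
	 * with __skb_dequeue(), i.e. with no lock between them.
	 */
	static int process_backlog(struct napi_struct *napi, int quota)
	{
		struct softnet_data *sd = container_of(napi, struct softnet_data, backlog);
		struct sk_buff *skb;
		int work = 0;

		while ((skb = __skb_dequeue(&sd->process_queue))) {
			__netif_receive_skb(skb);
			input_queue_head_incr(sd);
			if (++work >= quota)
				return work;
		}
		/* ... then refill process_queue from input_pkt_queue under rps_lock() ... */
		return work;
	}

Normally process_queue is only ever touched by its owning CPU, which is
why the lockless __skb_dequeue() is considered safe there in the common
case.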
Is this dequeuing (the snippet above from dev_cpu_callback()) actually
necessary, since the poll_list should already contain the backlog
napi_struct of the old CPU? In that case, when process_backlog() actually
runs, it should drain these two queues of the old CPU.
Let me know your thoughts.
Thanks
KS
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
> From: Eric Dumazet <edumazet@...gle.com>
>
> softnet_data.input_pkt_queue is protected by a spinlock that
> we must hold when transferring packets from victim queue to an active
> one. This is because other cpus could still be trying to enqueue packets
> into victim queue.
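(For anyone following along: the difference between the two dequeue
helpers is just the queue's own lock; skb_dequeue() is roughly the
lockless __skb_dequeue() wrapped in spin_lock_irqsave() on list->lock, as
sketched below from net/core/skbuff.c, not a verbatim quote.)

	/* Sketch of skb_dequeue(): the locked variant is safe against other
	 * CPUs concurrently enqueueing to the same sk_buff_head because it
	 * takes the queue's spinlock with IRQs disabled; __skb_dequeue()
	 * leaves that serialization to the caller.
	 */
	struct sk_buff *skb_dequeue(struct sk_buff_head *list)
	{
		unsigned long flags;
		struct sk_buff *skb;

		spin_lock_irqsave(&list->lock, flags);
		skb = __skb_dequeue(list);
		spin_unlock_irqrestore(&list->lock, flags);
		return skb;
	}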
>
> Based on initial patch from Prasad Sodagudi & Subash Abhinov
> Kasiviswanathan.
>
> This version is better because we do not slow down packet processing,
> only make migration safer.
>
> Reported-by: Prasad Sodagudi <psodagud@...eaurora.org>
> Reported-by: Subash Abhinov Kasiviswanathan <subashab@...eaurora.org>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Tom Herbert <therbert@...gle.com>
> ---
>
> Could you test this fix instead of yours ? Thanks !
>
> net/core/dev.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 1e325adc43678084418ef9e1abb1fca8059ff599..76f72762b325cfa927a793af180189c51e9eaffd 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -7089,7 +7089,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
>  		netif_rx_internal(skb);
>  		input_queue_head_incr(oldsd);
>  	}
> -	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
> +	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
>  		netif_rx_internal(skb);
>  		input_queue_head_incr(oldsd);
>  	}
>
>