Message-id: <230561650.749911455108197509.JavaMail.weblogic@epmlwas05a>
Date: Wed, 10 Feb 2016 12:43:21 +0000 (GMT)
From: Vaneet Narang <v.narang@...sung.com>
To: Daniel Borkmann <daniel@...earbox.net>,
Maninder Singh <maninder1.s@...sung.com>,
Vaneet Narang <v.narang@...sung.com>
Cc: "davem@...emloft.net" <davem@...emloft.net>,
"willemb@...gle.com" <willemb@...gle.com>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"eyal.birger@...il.com" <eyal.birger@...il.com>,
"tklauser@...tanz.ch" <tklauser@...tanz.ch>,
"fruggeri@...stanetworks.com" <fruggeri@...stanetworks.com>,
"dwmw2@...radead.org" <dwmw2@...radead.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
PANKAJ MISHRA <pankaj.m@...sung.com>,
Geon-ho Kim <gh007.kim@...sung.com>,
Hak-Bong Lee <hakbong5.lee@...sung.com>,
"ajeet.y@...sung.com" <ajeet.y@...sung.com>,
AKHILESH KUMAR <akhilesh.k@...sung.com>,
AMIT NAGAL <amit.nagal@...sung.com>
Subject: Re: [PATCH] af_packet: Raw socket destruction warning fix
Hi,
>What driver are you using (is that in-tree)? Can you reproduce the same issue
>with a latest -net kernel, for example (or, a 'reasonably' recent one like 4.3 or
>4.4)? There has been quite a bit of changes in err queue handling (which also
>accounts rmem) as well. How reliably can you trigger the issue? Does it trigger
>with a completely different in-tree network driver as well with your tests? Would
>be useful to track/debug sk_rmem_alloc increases/decreases to see from which path
>new rmem is being charged in the time between packet_release() and packet_sock_destruct()
>for that socket ...
>
It looks to us like a race condition between packet_rcv and packet_close. We tried to reproduce
the issue by adding a delay in skb_set_owner_r, and with that the issue reproduces quite frequently.
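For reference, this is roughly where the delay was inserted; a minimal sketch assuming the
skb_set_owner_r() of include/net/sock.h from our kernel, with an arbitrary 10 ms delay purely
for illustration (mdelay() needs linux/delay.h):

static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
{
        skb_orphan(skb);
        skb->sk = sk;
        skb->destructor = sock_rfree;
        atomic_add(skb->truesize, &sk->sk_rmem_alloc);
        mdelay(10);     /* our addition: stretches the window between
                         * charging sk_rmem_alloc here and the caller
                         * (packet_rcv) queueing the skb, so the purge
                         * in packet_close can sneak in between */
        sk_mem_charge(sk, skb->truesize);
}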
We added some traces, and on analysing them we realised the following possible race condition:
packet_rcv (CPU 0)                              packet_close (CPU 1)
------------------                              --------------------
skb_set_owner_r(skb, sk);
                                                skb_queue_purge(&sk->sk_receive_queue);
spin_lock(&sk->sk_receive_queue.lock);
__skb_queue_tail(&sk->sk_receive_queue, skb);
spin_unlock(&sk->sk_receive_queue.lock);
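To make the interleaving concrete, the relevant part of packet_rcv() in net/packet/af_packet.c
looks roughly like this on our kernel (paraphrased, unrelated lines elided); note that the rmem
charge happens before the receive queue lock is taken:

        skb_set_owner_r(skb, sk);       /* charges sk_rmem_alloc */
        /* ... */
        spin_lock(&sk->sk_receive_queue.lock);
        po->stats.stats1.tp_packets++;
        __skb_queue_tail(&sk->sk_receive_queue, skb);
        spin_unlock(&sk->sk_receive_queue.lock);
        sk->sk_data_ready(sk);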
Since the packet had not yet been added to the receive queue at that point, the receive queue
purge has no effect: it cannot free an sk_buff that is only queued afterwards. To fix this, we
made sure skb_set_owner_r(skb, sk) and __skb_queue_tail(&sk->sk_receive_queue, skb) are called
under the receive queue lock, and we moved the receive queue purge from packet_release to
packet_sock_destruct.
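A minimal sketch of both changes (illustrative only; the actual patch is at the link at the
end of this mail):

/* 1. In packet_rcv(): charge rmem and queue the skb in one critical
 *    section, so the purge in the close path can no longer run between
 *    the two steps. */
        spin_lock(&sk->sk_receive_queue.lock);
        po->stats.stats1.tp_packets++;
        skb_set_owner_r(skb, sk);       /* moved under the queue lock */
        __skb_queue_tail(&sk->sk_receive_queue, skb);
        spin_unlock(&sk->sk_receive_queue.lock);
        sk->sk_data_ready(sk);

/* 2. Purge the receive queue in packet_sock_destruct() instead of
 *    packet_release(). By the time the destructor runs, the last
 *    reference to the socket is gone, so no packet_rcv() can still be
 *    queueing skbs against it. */
static void packet_sock_destruct(struct sock *sk)
{
        skb_queue_purge(&sk->sk_error_queue);
        skb_queue_purge(&sk->sk_receive_queue); /* moved here from packet_release() */

        WARN_ON(atomic_read(&sk->sk_rmem_alloc));
        WARN_ON(atomic_read(&sk->sk_wmem_alloc));
        /* ... */
}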
We added some traces in skb_set_owner_r and packet_sock_destruct; this is what we got:
CPU 0
sk = db6d17c0 sk->sk_flags = 0x320 Size = 1984
Backtrace:
(dump_backtrace+0x0/0x128) from (show_stack+0x20/0x28)
(show_stack+0x0/0x28) from (dump_stack+0x24/0x28)
(dump_stack+0x0/0x28) from (packet_rcv+0x480/0x488)
(packet_rcv+0x0/0x488) from (__netif_receive_skb_core+0x53c/0x674)
(__netif_receive_skb_core+0x0/0x674) from (__netif_receive_skb+0x20/0x74)
(__netif_receive_skb+0x0/0x74) from (netif_receive_skb+0x2c/0xbc)
(netif_receive_skb+0x0/0xbc) from (napi_gro_receive+0x90/0xc0)
......
(net_rx_action+0x0/0x300) from(__do_softirq+0x160/0x340)
(__do_softirq+0x0/0x340) from (do_softirq+0xc4/0xe0)
(do_softirq+0x0/0xe0) from (irq_exit+0xc4/0xf8)
(irq_exit+0x0/0xf8) from (handle_IRQ+0x88/0x10c)
(handle_IRQ+0x0/0x10c) from (gic_handle_irq+0x64/0xac)
CPU 1
sk = db6d17c0 sk->sk_rmem_alloc = 1984 sk->sk_flags = 0x141
Receive Queue Empty = "Yes" Error Queue Empty = "Yes"
Backtrace:
(packet_sock_destruct+0x0/0x1f4) from (__sk_free+0x28/0x18c)
(__sk_free+0x0/0x18c) from (sk_free+0x40/0x48)
(sk_free+0x0/0x48) from (packet_release+0x29c/0x310)
(packet_release+0x0/0x310) from (sock_release+0x30/0xb8)
(sock_release+0x0/0xb8) from (sock_close+0x1c/0x28)
(sock_close+0x0/0x28) from (__fput+0x9c/0x2b4)
(__fput+0x0/0x2b4) from (____fput+0x18/0x20)
(____fput+0x0/0x20) from (task_work_run+0xc0/0xfc)
(task_work_run+0x0/0xfc) from (do_work_pending+0x108/0x114)
(do_work_pending+0x0/0x114) from (work_pending+0xc/0x20)
From this it appears that packet_rcv was called while packet_release had not yet completed,
since sk_flags = 0x320 has bit 0 (SOCK_DEAD) clear, whereas it is set in the 0x141 seen on
CPU 1; and that packet_sock_destruct was called after sk_rmem_alloc had been increased but
before the packet was added to the receive queue. The pending sk_rmem_alloc (1984) is exactly
the size of the last packet received on the socket.
Kindly comment on the fix shared at the following link:
http://www.spinics.net/lists/kernel/msg2184815.html
Thanks & Regards,
Vaneet Narang