Message-ID: <Yfuk11on6XiaB6Di@myrica>
Date: Thu, 3 Feb 2022 09:48:07 +0000
From: Jean-Philippe Brucker <jean-philippe@...aro.org>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: linuxarm@...neuler.org, ilias.apalodimas@...aro.org,
salil.mehta@...wei.com, netdev@...r.kernel.org,
moyufeng@...wei.com, alexanderduyck@...com, brouer@...hat.com,
kuba@...nel.org
Subject: Re: [PATCH net-next v2 4/4] net: hns3: support skb's frag page
recycling based on page pool
On Sat, Jan 29, 2022 at 04:44:34PM +0800, Yunsheng Lin wrote:
> >> My initial thinking is to track if the reference counting or pp_frag_count of
> >> the page is manipulated correctly.
> >
> > It looks like pp_frag_count is dropped too many times: after (1),
> > pp_frag_count only has 1 ref, so (2) drops it to 0 and (3) results in
> > underflow. I turned page_pool_atomic_sub_frag_count_return() into
> > "atomic_long_sub_return(nr, &page->pp_frag_count)" to make sure (the
> > atomic_long_read() bit normally hides this). Wasn't entirely sure if this
> > is expected behavior, though.
>
> Are you sure the above steps 1~3 are happening for the same page?
Yes they happen on the same page. What I did was save the backtrace of
each call to page_pool_atomic_sub_frag_count_return() and, when an
underflow error happens on a page, print out the history of that page
only.
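For reference, the modified helper looked roughly like this (a simplified
sketch, not the exact debug patch; the backtrace logging hangs off the same
spot, and the WARN_ON is just where I catch the underflow):

static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
							   long nr)
{
	/* Debug variant: always do the atomic sub so the count is never
	 * short-circuited by the atomic_long_read() fast path, and complain
	 * as soon as pp_frag_count goes negative.
	 */
	long ret = atomic_long_sub_return(nr, &page->pp_frag_count);

	WARN_ON(ret < 0);
	return ret;
}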
My report was not right, though: I forgot to save the backtrace for
pp_frag_count==0. There are actually two refs on the page. It goes like
this:
(1) T-1535, drop BIAS_MAX - 2, pp_frag_count now 2
page_pool_alloc_frag+0x128/0x240
hns3_alloc_and_map_buffer+0x30/0x170
hns3_nic_alloc_rx_buffers+0x9c/0x170
hns3_clean_rx_ring+0x864/0x960
hns3_nic_common_poll+0xa0/0x218
__napi_poll+0x38/0x188
net_rx_action+0xe8/0x248
__do_softirq+0x120/0x284
(2) T-4, drop 1, pp_frag_count now 1
page_pool_put_page+0x98/0x338
page_pool_return_skb_page+0x48/0x60
skb_release_data+0x170/0x190
skb_release_all+0x28/0x38
kfree_skb_reason+0x30/0x90
packet_rcv+0x58/0x430
__netif_receive_skb_list_core+0x1f4/0x218
netif_receive_skb_list_internal+0x18c/0x2a8
(3) T-1, drop 1, pp_frag_count now 0
page_pool_put_page+0x98/0x338
page_pool_return_skb_page+0x48/0x60
skb_release_data+0x170/0x190
skb_release_all+0x28/0x38
__kfree_skb+0x18/0x30
__sk_defer_free_flush+0x44/0x58
tcp_recvmsg+0x94/0x1b8
inet_recvmsg+0x50/0x128
(4) T, drop 1, pp_frag_count now -1 (underflow)
page_pool_put_page+0x2d0/0x338
hns3_clean_rx_ring+0x74c/0x960
hns3_nic_common_poll+0xa0/0x218
__napi_poll+0x38/0x188
net_rx_action+0xe8/0x248
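(To spell out why steps 2-4 each drop exactly one reference: every
page_pool_put_page() on a PP_FLAG_PAGE_FRAG pool first releases the caller's
single frag reference, and only the caller that brings pp_frag_count to zero
goes on to unmap and recycle/free the page. Roughly, from memory of
net/core/page_pool.c, so details may be off:)

void page_pool_put_page(struct page_pool *pool, struct page *page,
			unsigned int dma_sync_size, bool allow_direct)
{
	/* Not the last user for the page frag case: just drop our single
	 * frag reference and leave the page to the remaining users.
	 */
	if (pool->p.flags & PP_FLAG_PAGE_FRAG &&
	    page_pool_atomic_sub_frag_count_return(page, 1))
		return;

	/* Last user: unmap the page and recycle it or give it back. */
	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
	if (page && !page_pool_recycle_in_ring(pool, page))
		page_pool_return_page(pool, page);
}

In the trace above, the put in step 4 is the one that goes negative, i.e.
the frag it releases was already released in step 2 or 3.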
> If it is the same page, there must be something wrong here.
>
> Normally there are 1024 BDs for an rx ring:
>
> BD_0  BD_1  BD_2  BD_3  BD_4 .... BD_1020  BD_1021  BD_1022  BD_1023
>  ^                                                               ^
> head                                                            tail
>
> Suppose head is manipulated by the driver, and tail is manipulated by
> the hw.
>
> The driver allocates a buffer for the BD pointed to by head. As frag page
> recycling is introduced in this patch, BD_0's and BD_1's buffers may point
> to the same page (4K page size, and each BD only needs a 2K buffer).
> The hw DMAs the data to the buffer pointed to by tail when a packet is
> received.
>
> So step 1 normally happens for the BD pointed to by head,
> and steps 2 & 3 normally happen for the BD pointed to by tail.
> And normally there are at least (1024 - RCB_NOF_ALLOC_RX_BUFF_ONCE) BDs
> between head and tail, so it is unlikely that head's and tail's BD buffers
> point to the same page.
I think a new page is allocated at step 1, no? The driver calls
page_pool_alloc_frag() when refilling the rx ring, and since the current
pool->frag_page (P1) is still used by BD_0 and BD_1, page_pool_drain_frag()
drops (BIAS_MAX - 2) references and page_pool_alloc_frag() replaces
frag_page with a new page, P2. Later, once head points to BD_1, the driver
can drop the remaining 2 references to P1 in steps 2 and 3, and P1 can be
unmapped and freed/recycled.
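The path I mean looks roughly like this (heavily simplified from
net/core/page_pool.c, again from memory; I'm ignoring pool->p.order, cache
alignment and the "no users left, reuse the page" branch since they don't
matter for this scenario):

struct page *page_pool_alloc_frag(struct page_pool *pool,
				  unsigned int *offset,
				  unsigned int size, gfp_t gfp)
{
	struct page *page = pool->frag_page;

	if (page && pool->frag_offset + size > PAGE_SIZE) {
		/* P1 is full but BD_0 and BD_1 still hold one frag each,
		 * so drop the unused part of the bias, i.e.
		 * BIAS_MAX - pool->frag_users (BIAS_MAX - 2, step 1 above),
		 * and leave P1 to its remaining users.
		 */
		page_pool_atomic_sub_frag_count_return(page,
						       BIAS_MAX - pool->frag_users);
		page = NULL;
	}

	if (!page) {
		/* Switch to a fresh frag page P2 carrying the full bias. */
		page = page_pool_alloc_pages(pool, gfp);
		if (!page)
			return NULL;
		page_pool_set_frag_count(page, BIAS_MAX);
		pool->frag_page = page;
		pool->frag_offset = 0;
		pool->frag_users = 0;
	}

	*offset = pool->frag_offset;
	pool->frag_offset += size;
	pool->frag_users++;
	return page;
}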
What I don't get is which of steps 2, 3 and 4 is the wrong one. It could be
2 or 3, because the device is evidently still doing DMA to the page after
it's released, but it could also be that the driver doesn't properly clear
the BD, in which case step 4 is wrong. I'll try to find out which fragment
gets dropped twice.
Thanks,
Jean