Message-ID: <7f8b32a91f5849c99609f78520b23535@realtek.com>
Date: Thu, 7 Sep 2023 07:16:50 +0000
From: Hayes Wang <hayeswang@...ltek.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: "davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	nic_swsd <nic_swsd@...ltek.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>
Subject: RE: [PATCH net v2] r8152: avoid the driver drops a lot of packets
> From: Jakub Kicinski <kuba@...nel.org>
> Sent: Thursday, September 7, 2023 8:29 AM
[...]
> Good to see that you can repro the problem.
I didn't reproduce the problem. I just found some information about it.
> Before we tweak the heuristics let's make sure rx_bottom() behaves
> correctly. Could you make sure that
> - we don't perform _any_ rx processing when budget is 0
> (see the NAPI documentation under Documentation/networking)
The work_done would be 0, and napi_complete_done() wouldn't be called.
However, skb_queue_len(&tp->rx_queue) may still increase. I think that
is not acceptable, right?
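If I understand the NAPI rule correctly, a minimal sketch of the budget == 0 early return might look like the following. This is only my guess at the shape of the change, reusing rx_bottom()'s existing signature from the driver; the rest of the function body is elided:

```c
static int rx_bottom(struct r8152 *tp, int budget)
{
	/* Per Documentation/networking/napi.rst: when budget is 0 the
	 * poll callback must not do any RX processing at all -- not
	 * even deferring packets onto tp->rx_queue. */
	if (!budget)
		return 0;

	/* ... normal RX processing ... */
}
```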
> - finish the current aggregate even if budget run out, return
> work_done = budget in that case.
> With this change the rx_queue thing should be gone completely.
Excuse me, I don't understand this part. I know that when there are more
packets than the budget, at most `budget` packets can be handled; that is,
the driver returns work_done = budget. However, the extra packets are
queued to rx_queue. I don't understand what you mean by "the rx_queue
thing should be gone completely". I think the current driver already
returns work_done = budget and queues the remaining packets. I'm not
sure what you want me to change.
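Is the following roughly what you mean? A sketch of draining the whole current USB aggregate and then capping the reported work, so no packets are left over for rx_queue. The two helpers here are hypothetical names standing in for the driver's existing per-packet loop:

```c
	/* Finish every packet in the aggregate we have already started,
	 * even if that exceeds budget, instead of parking the remainder
	 * on tp->rx_queue. */
	while (agg_has_packets(agg)) {		/* hypothetical helper */
		process_one_packet(tp, agg);	/* hypothetical helper */
		work_done++;
	}

	/* Never report more than budget to NAPI. */
	return min(work_done, budget);
```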
> - instead of copying the head use napi_get_frags() + napi_gro_frags()
> it gives you an skb, you just attach the page to it as a frag and
> hand it back to GRO. This makes sure you never pull data into head
> rather than just headers.
I will study them. Thanks.
Should I include the above changes in this patch?
I think I should submit separate patches for them.
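From a first look at those APIs, I guess the receive path would change to something like the sketch below. The page/offset/length variables are placeholders for whatever the driver's aggregation code already tracks, and `agg_buf_sz` stands in for the true size accounted to the fragment:

```c
	struct sk_buff *skb;

	skb = napi_get_frags(napi);
	if (!skb)
		return;	/* allocation failed; recycle the page */

	/* Attach the received page as a frag instead of copying data
	 * into the skb head; GRO pulls only the headers it needs. */
	skb_add_rx_frag(skb, 0, page, offset, pkt_len, agg_buf_sz);

	napi_gro_frags(napi);
```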
> Please share the performance results with those changes.
I couldn't reproduce the problem, so I can't provide performance results
showing the difference.
Best Regards,
Hayes