Message-Id: <20251217091643.307-1-angus.chen@jaguarmicro.com>
Date: Wed, 17 Dec 2025 17:16:43 +0800
From: Angus Chen <angus.chen@...uarmicro.com>
To: jasowang@...hat.com,
mst@...hat.com,
xuanzhuo@...ux.alibaba.com,
eperezma@...hat.com
Cc: linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev,
Angus Chen <angus.chen@...uarmicro.com>
Subject: [PATCH v2] vdpa_sim_net: Fix the error handling logic

The vdpasim_net_work() function handles both TX completion and RX
processing. At the start of each loop iteration a descriptor is taken
from the TX VQ; it provides the buffer from which the packet is read on
the TX path. This TX descriptor must be completed back to the TX VQ by
the end of the iteration, as sketched below.
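
Roughly, each iteration performs the following sequence (a simplified
sketch paraphrasing the current code, not an exact quote; error
handling and statistics are omitted):

	/* take a TX descriptor and copy the packet out of it */
	err = vringh_getdesc_iotlb(&txq->vring, &txq->out_iov, NULL,
				   &txq->head, GFP_ATOMIC);
	read = vringh_iov_pull_iotlb(&txq->vring, &txq->out_iov,
				     net->buffer, PAGE_SIZE);

	/* take an RX descriptor and copy the packet into it */
	err = vringh_getdesc_iotlb(&rxq->vring, NULL, &rxq->in_iov,
				   &rxq->head, GFP_ATOMIC);
	write = vringh_iov_push_iotlb(&rxq->vring, &rxq->in_iov,
				      net->buffer, read);

	/* hand both buffers back to the driver */
	vdpasim_net_complete(txq, 0);
	vdpasim_net_complete(rxq, write);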

However, the error handling logic was flawed:

1. RX descriptor unavailable (err <= 0): the TX VQ completion
   (vdpasim_net_complete(txq, 0)) was performed, but the unconditional
   break that followed prevented the remaining pending descriptors from
   being processed in the same work function call.

2. Write failure (write <= 0): if the data could not be written into
   the RX VQ descriptor (vringh_iov_push_iotlb() failed), the flow
   broke out of the loop before the TX VQ descriptor was completed.
   This leaked TX VQ descriptors, because the driver was never notified
   that the TX buffer was free to be used again (see the pre-patch
   snippet below).
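
Pre-patch, the two flawed paths looked roughly like this (reconstructed
from the hunk below; the trailing comments are annotations marking the
problems described above):

	err = vringh_getdesc_iotlb(&rxq->vring, NULL, &rxq->in_iov,
				   &rxq->head, GFP_ATOMIC);
	if (err <= 0) {
		++rx_overruns;
		vdpasim_net_complete(txq, 0);
		break;			/* (1) stops the whole work run */
	}

	write = vringh_iov_push_iotlb(&rxq->vring, &rxq->in_iov,
				      net->buffer, read);
	if (write <= 0) {
		++rx_errors;
		break;			/* (2) TX descriptor never completed */
	}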

This commit refactors the control flow to ensure that:

a) The TX VQ descriptor is completed back to the TX VQ on every error
   and success path of the current iteration.

b) The unconditional break on an unavailable RX descriptor is removed,
   so the function keeps processing the remaining pending descriptors
   until the loop exits naturally.

c) The statistics are updated accordingly: an unavailable RX descriptor
   (err == 0) is counted as an overrun without breaking, while a
   negative return value is counted as an error.
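
For clarity, this is essentially how the per-descriptor error handling
reads after applying the hunk below:

	vdpasim_net_complete(txq, 0);	/* TX buffer is always returned */
	if (!receive_filter(vdpasim, read)) {
		++rx_drops;
		continue;
	}

	err = vringh_getdesc_iotlb(&rxq->vring, NULL, &rxq->in_iov,
				   &rxq->head, GFP_ATOMIC);
	if (err == 0) {
		/* rx has no available ring, no need to break */
		++rx_overruns;
	} else if (err < 0) {
		++rx_errors;
		break;
	} else {
		write = vringh_iov_push_iotlb(&rxq->vring, &rxq->in_iov,
					      net->buffer, read);
		if (write <= 0) {
			++rx_errors;
			break;
		}

		++rx_pkts;
		rx_bytes += write;
		vdpasim_net_complete(rxq, write);
	}
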
Fixes: 0899774cb360 ("vdpa_sim_net: vendor satistics")
Suggested-by: Jason Wang <jasowang@...hat.com>
Signed-off-by: Angus Chen <angus.chen@...uarmicro.com>
---
v2:
- Add Suggested-by tag.
- Shorten the subject line.
- Consolidate vdpasim_net_complete calls.
---
drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 31 ++++++++++++++--------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
index 6caf09a1907b..8c7b943f148d 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
@@ -231,33 +231,34 @@ static void vdpasim_net_work(struct vdpasim *vdpasim)
 
 		tx_bytes += read;
 
+		vdpasim_net_complete(txq, 0);
 		if (!receive_filter(vdpasim, read)) {
 			++rx_drops;
-			vdpasim_net_complete(txq, 0);
 			continue;
 		}
 
 		err = vringh_getdesc_iotlb(&rxq->vring, NULL, &rxq->in_iov,
 					   &rxq->head, GFP_ATOMIC);
-		if (err <= 0) {
+		if (err == 0) {
+			/* rx has no available ring, no need to break */
 			++rx_overruns;
-			vdpasim_net_complete(txq, 0);
-			break;
-		}
-
-		write = vringh_iov_push_iotlb(&rxq->vring, &rxq->in_iov,
-					      net->buffer, read);
-		if (write <= 0) {
+		} else if (err < 0) {
 			++rx_errors;
 			break;
+		} else {
+
+			write = vringh_iov_push_iotlb(&rxq->vring, &rxq->in_iov,
+						      net->buffer, read);
+			if (write <= 0) {
+				++rx_errors;
+				break;
+			}
+
+			++rx_pkts;
+			rx_bytes += write;
+			vdpasim_net_complete(rxq, write);
 		}
-		++rx_pkts;
-		rx_bytes += write;
-
-		vdpasim_net_complete(txq, 0);
-		vdpasim_net_complete(rxq, write);
-
 
 		if (tx_pkts > 4) {
 			vdpasim_schedule_work(vdpasim);
 			goto out;
--
2.34.1