Message-ID: <f6434e4a-c37c-41dc-91b4-0cc2d33730ba@ti.com>
Date: Fri, 2 May 2025 14:37:49 +0530
From: "Malladi, Meghana" <m-malladi@...com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, <dan.carpenter@...aro.org>,
<john.fastabend@...il.com>, <daniel@...earbox.net>, <ast@...nel.org>,
<pabeni@...hat.com>, <kuba@...nel.org>, <edumazet@...gle.com>,
<davem@...emloft.net>, <andrew+netdev@...n.ch>
CC: <bpf@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<netdev@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
	<srk@...com>, Vignesh Raghavendra <vigneshr@...com>,
	Roger Quadros <rogerq@...nel.org>, <danishanwar@...com>
Subject: Re: [PATCH net 4/4] net: ti: icssg-prueth: Fix kernel panic during
concurrent Tx queue access

Hi Jesper,

On 5/2/2025 12:44 PM, Jesper Dangaard Brouer wrote:
>
>
> On 28/04/2025 14.04, Meghana Malladi wrote:
>> Add __netif_tx_lock() to ensure that only one packet is being
>> transmitted at a time, to avoid race conditions on the netif_txq
>> struct and prevent packet data corruption. Failing to do so causes
>> a kernel panic with the following error:
>>
>> [ 2184.746764] ------------[ cut here ]------------
>> [ 2184.751412] kernel BUG at lib/dynamic_queue_limits.c:99!
>> [ 2184.756728] Internal error: Oops - BUG: 00000000f2000800 [#1]
>> PREEMPT SMP
>>
>> logs: https://gist.github.com/MeghanaMalladiTI/9c7aa5fc3b7fb03f87c74aad487956e9
>>
>> The lock is acquired before calling emac_xmit_xdp_frame() and released
>> after the call returns. This ensures that the TX queue is protected
>> from concurrent access during the transmission of XDP frames.
>>
>> Fixes: 62aa3246f462 ("net: ti: icssg-prueth: Add XDP support")
>> Signed-off-by: Meghana Malladi <m-malladi@...com>
>> ---
>> drivers/net/ethernet/ti/icssg/icssg_common.c | 7 ++++++-
>> drivers/net/ethernet/ti/icssg/icssg_prueth.c | 7 ++++++-
>> 2 files changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/ti/icssg/icssg_common.c b/drivers/net/ethernet/ti/icssg/icssg_common.c
>> index a120ff6fec8f..e509b6ff81e7 100644
>> --- a/drivers/net/ethernet/ti/icssg/icssg_common.c
>> +++ b/drivers/net/ethernet/ti/icssg/icssg_common.c
>> @@ -660,6 +660,8 @@ static u32 emac_run_xdp(struct prueth_emac *emac, struct xdp_buff *xdp,
>>  			struct page *page, u32 *len)
>>  {
>>  	struct net_device *ndev = emac->ndev;
>> +	struct netdev_queue *netif_txq;
>> +	int cpu = smp_processor_id();
>>  	struct bpf_prog *xdp_prog;
>>  	struct xdp_frame *xdpf;
>>  	u32 pkt_len = *len;
>> @@ -679,8 +681,11 @@ static u32 emac_run_xdp(struct prueth_emac *emac, struct xdp_buff *xdp,
>>  		goto drop;
>>  	}
>>
>> -	q_idx = smp_processor_id() % emac->tx_ch_num;
>> +	q_idx = cpu % emac->tx_ch_num;
>> +	netif_txq = netdev_get_tx_queue(ndev, q_idx);
>> +	__netif_tx_lock(netif_txq, cpu);
>>  	result = emac_xmit_xdp_frame(emac, xdpf, page, q_idx);
>> +	__netif_tx_unlock(netif_txq);
>>  	if (result == ICSSG_XDP_CONSUMED) {
>>  		ndev->stats.tx_dropped++;
>>  		goto drop;
>> diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
>> index ee35fecf61e7..b31060e7f698 100644
>> --- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
>> +++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
>> @@ -1075,20 +1075,25 @@ static int emac_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frame
>>  {
>>  	struct prueth_emac *emac = netdev_priv(dev);
>>  	struct net_device *ndev = emac->ndev;
>> +	struct netdev_queue *netif_txq;
>> +	int cpu = smp_processor_id();
>>  	struct xdp_frame *xdpf;
>>  	unsigned int q_idx;
>>  	int nxmit = 0;
>>  	u32 err;
>>  	int i;
>>
>> -	q_idx = smp_processor_id() % emac->tx_ch_num;
>> +	q_idx = cpu % emac->tx_ch_num;
>> +	netif_txq = netdev_get_tx_queue(ndev, q_idx);
>>
>>  	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
>>  		return -EINVAL;
>>
>>  	for (i = 0; i < n; i++) {
>>  		xdpf = frames[i];
>> +		__netif_tx_lock(netif_txq, cpu);
>>  		err = emac_xmit_xdp_frame(emac, xdpf, NULL, q_idx);
>> +		__netif_tx_unlock(netif_txq);
>
> Why are you taking and releasing this lock in a loop?
>
> XDP gain performance by sending a batch of 'n' packets.
> This approach looks like a performance killer.
>
Yes, I agree with you. This wasn't the intended change. Thank you for
pointing this out. The lock and unlock should happen outside the loop.
Will fix this in v2.
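
For reference, one possible shape of the fixed emac_xdp_xmit() loop, with
the lock/unlock pair hoisted out so the whole batch of 'n' frames is sent
under a single lock acquisition (just a sketch of the idea, not the
actual v2 patch):

	q_idx = cpu % emac->tx_ch_num;
	netif_txq = netdev_get_tx_queue(ndev, q_idx);

	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
		return -EINVAL;

	/* Take the TX queue lock once for the whole batch instead of
	 * once per frame, preserving XDP's batching performance.
	 */
	__netif_tx_lock(netif_txq, cpu);
	for (i = 0; i < n; i++) {
		xdpf = frames[i];
		err = emac_xmit_xdp_frame(emac, xdpf, NULL, q_idx);
		if (err != ICSSG_XDP_TX) {
			ndev->stats.tx_dropped++;
			break;
		}
		nxmit++;
	}
	__netif_tx_unlock(netif_txq);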
>
>>  		if (err != ICSSG_XDP_TX) {
>>  			ndev->stats.tx_dropped++;
>>  			break;
>
>
--
Thanks,
Meghana Malladi