Message-ID: <ba1b48dc-b544-4c4b-be8a-d39b104cda21@ti.com>
Date: Thu, 30 Oct 2025 10:13:51 +0530
From: Meghana Malladi <m-malladi@...com>
To: Paolo Abeni <pabeni@...hat.com>, <horms@...nel.org>,
<namcao@...utronix.de>, <vadim.fedorenko@...ux.dev>,
<jacob.e.keller@...el.com>, <christian.koenig@....com>,
<sumit.semwal@...aro.org>, <sdf@...ichev.me>, <john.fastabend@...il.com>,
<hawk@...nel.org>, <daniel@...earbox.net>, <ast@...nel.org>,
<kuba@...nel.org>, <edumazet@...gle.com>, <davem@...emloft.net>,
<andrew+netdev@...n.ch>
CC: <linaro-mm-sig@...ts.linaro.org>, <dri-devel@...ts.freedesktop.org>,
<linux-media@...r.kernel.org>, <bpf@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>, <srk@...com>, Vignesh Raghavendra
<vigneshr@...com>, Roger Quadros <rogerq@...nel.org>, <danishanwar@...com>
Subject: Re: [EXTERNAL] Re: [PATCH net-next v4 2/6] net: ti: icssg-prueth: Add
XSK pool helpers
Hi Paolo,
On 10/28/25 16:27, Paolo Abeni wrote:
>
> On 10/23/25 11:39 AM, Meghana Malladi wrote:
>> @@ -1200,6 +1218,109 @@ static int emac_xdp_setup(struct prueth_emac *emac, struct netdev_bpf *bpf)
>> return 0;
>> }
>>
>> +static int prueth_xsk_pool_enable(struct prueth_emac *emac,
>> + struct xsk_buff_pool *pool, u16 queue_id)
>> +{
>> + struct prueth_rx_chn *rx_chn = &emac->rx_chns;
>> + u32 frame_size;
>> + int ret;
>> +
>> + if (queue_id >= PRUETH_MAX_RX_FLOWS ||
>> + queue_id >= emac->tx_ch_num) {
>> + netdev_err(emac->ndev, "Invalid XSK queue ID %d\n", queue_id);
>> + return -EINVAL;
>> + }
>> +
>> + frame_size = xsk_pool_get_rx_frame_size(pool);
>> + if (frame_size < PRUETH_MAX_PKT_SIZE)
>> + return -EOPNOTSUPP;
>> +
>> + ret = xsk_pool_dma_map(pool, rx_chn->dma_dev, PRUETH_RX_DMA_ATTR);
>> + if (ret) {
>> + netdev_err(emac->ndev, "Failed to map XSK pool: %d\n", ret);
>> + return ret;
>> + }
>> +
>> + if (netif_running(emac->ndev)) {
>> + /* stop packets from wire for graceful teardown */
>> + ret = icssg_set_port_state(emac, ICSSG_EMAC_PORT_DISABLE);
>> + if (ret)
>> + return ret;
>> + prueth_destroy_rxq(emac);
>> + }
>> +
>> + emac->xsk_qid = queue_id;
>> + prueth_set_xsk_pool(emac, queue_id);
>> +
>> + if (netif_running(emac->ndev)) {
>> + ret = prueth_create_rxq(emac);
>
> It looks like this falls short of Jakub's request on v2:
>
> https://lore.kernel.org/netdev/20250903174847.5d8d1c9f@kernel.org/
>
> about not freeing the rx queue for reconfig.
>
I tried to honor Jakub's comment and avoid freeing the rx memory wherever
possible, but:

"In the case of the icssg driver, freeing the rx memory is necessary because
the rx descriptor memory is owned by the CPPI DMA controller and can be
mapped to only one memory model (page pool pages or xsk buffers) at a time.
In order to remap it, the memory needs to be freed and reallocated."
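
To illustrate (a simplified sketch only, not the driver code; the
rx_chn->xsk_pool member and the helper name below are made up for the
example): each rx descriptor is backed by exactly one buffer source, so
switching between page pool pages and xsk buffers means redoing this
allocation for every descriptor in the ring, i.e. freeing and reallocating
the descriptors.

#include <net/page_pool/helpers.h>
#include <net/xdp_sock_drv.h>
#include "icssg_prueth.h"

/* Sketch: each descriptor's backing buffer comes from one source only */
static dma_addr_t prueth_rx_buf_alloc_sketch(struct prueth_rx_chn *rx_chn)
{
	struct xdp_buff *xdp;
	struct page *page;

	if (rx_chn->xsk_pool) {
		/* AF_XDP zero-copy: buffer comes from the UMEM pool */
		xdp = xsk_buff_alloc(rx_chn->xsk_pool);
		return xdp ? xsk_buff_xdp_get_dma(xdp) : 0;
	}

	/* default path: buffer comes from the page pool */
	page = page_pool_dev_alloc_pages(rx_chn->pg_pool);
	return page ? page_pool_get_dma_addr(page) : 0;
}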
> I think you should:
> - stop the H/W from processing incoming packets,
> - spool all the pending packets
> - attach/detach the xsk_pool
> - refill the ring
> - re-enable the H/W
>
The current implementation follows the same sequence:
1. Does a channel teardown -> stops incoming traffic
2. Frees the rx descriptors from the free queue and completion queue ->
spools all pending packets/descriptors
3. Attaches/detaches the xsk pool
4. Allocates rx descriptors and fills the free queue after mapping them to
the correct memory buffers -> refills the ring
5. Restarts NAPI -> re-enables the H/W to receive traffic
I am still working on skipping steps 2 and 4, but that is a long shot: I
need to make sure all corner cases are covered. If that approach turns out
to be doable without causing any regressions, I may post it as a follow-up
patch later.
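
For reference, a rough sketch of how steps 1-5 above map onto the
pool-enable path (simplified, not the exact patch code; error unwinding and
the xsk pool DMA mapping are omitted, and the exact port state used to
re-enable the wire may differ):

/* relies on the driver's icssg_prueth.h declarations */
static int prueth_xsk_reconfig_sketch(struct prueth_emac *emac, u16 queue_id)
{
	int ret = 0;

	if (netif_running(emac->ndev)) {
		/* 1: stop packets from the wire for a graceful teardown */
		ret = icssg_set_port_state(emac, ICSSG_EMAC_PORT_DISABLE);
		if (ret)
			return ret;
		/* 2: drain the free queue and completion queue descriptors */
		prueth_destroy_rxq(emac);
	}

	/* 3: attach the xsk pool (the disable path detaches it here) */
	emac->xsk_qid = queue_id;
	prueth_set_xsk_pool(emac, queue_id);

	if (netif_running(emac->ndev)) {
		/* 4: reallocate descriptors against the new memory model
		 * and refill the free queue
		 */
		ret = prueth_create_rxq(emac);
		if (ret)
			return ret;
		/* 5: re-enable the port so traffic flows again */
		ret = icssg_set_port_state(emac, ICSSG_EMAC_PORT_FORWARD);
	}

	return ret;
}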
> /P
>