Message-ID: <MN0PR12MB5953A9FEC556D07494DB8E37B711A@MN0PR12MB5953.namprd12.prod.outlook.com>
Date: Sat, 12 Aug 2023 15:27:19 +0000
From: "Pandey, Radhey Shyam" <radhey.shyam.pandey@....com>
To: Jakub Kicinski <kuba@...nel.org>
CC: "vkoul@...nel.org" <vkoul@...nel.org>,
"robh+dt@...nel.org" <robh+dt@...nel.org>,
"krzysztof.kozlowski+dt@...aro.org"
<krzysztof.kozlowski+dt@...aro.org>,
"conor+dt@...nel.org" <conor+dt@...nel.org>,
"Simek, Michal" <michal.simek@....com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"pabeni@...hat.com" <pabeni@...hat.com>,
"linux@...linux.org.uk" <linux@...linux.org.uk>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"git (AMD-Xilinx)" <git@....com>
Subject: RE: [PATCH net-next v5 10/10] net: axienet: Introduce dmaengine
support
> -----Original Message-----
> From: Jakub Kicinski <kuba@...nel.org>
> Sent: Wednesday, August 9, 2023 4:19 AM
> To: Pandey, Radhey Shyam <radhey.shyam.pandey@....com>
> Cc: vkoul@...nel.org; robh+dt@...nel.org;
> krzysztof.kozlowski+dt@...aro.org; conor+dt@...nel.org; Simek, Michal
> <michal.simek@....com>; davem@...emloft.net; edumazet@...gle.com;
> pabeni@...hat.com; linux@...linux.org.uk; dmaengine@...r.kernel.org;
> devicetree@...r.kernel.org; linux-arm-kernel@...ts.infradead.org; linux-
> kernel@...r.kernel.org; netdev@...r.kernel.org; git (AMD-Xilinx)
> <git@....com>
> Subject: Re: [PATCH net-next v5 10/10] net: axienet: Introduce dmaengine
> support
>
> On Mon, 7 Aug 2023 11:21:49 +0530 Radhey Shyam Pandey wrote:
> > +struct axi_skbuff {
> > + struct scatterlist sgl[MAX_SKB_FRAGS + 1];
> > + struct dma_async_tx_descriptor *desc;
> > + dma_addr_t dma_address;
> > + struct sk_buff *skb;
> > + struct list_head lh;
> > + int sg_len;
> > +} __packed;
>
> Why __packed?
It was added with size optimization in mind, but I see it may have a
performance overhead. Will remove it in the next version.
>
> > +static netdev_tx_t
> > +axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
> > +{
> > + struct dma_async_tx_descriptor *dma_tx_desc = NULL;
> > + struct axienet_local *lp = netdev_priv(ndev);
> > + u32 app[DMA_NUM_APP_WORDS] = {0};
> > + struct axi_skbuff *axi_skb;
> > + u32 csum_start_off;
> > + u32 csum_index_off;
> > + int sg_len;
> > + int ret;
> > +
> > + sg_len = skb_shinfo(skb)->nr_frags + 1;
> > + axi_skb = kmem_cache_zalloc(lp->skb_cache, GFP_KERNEL);
> > + if (!axi_skb)
> > + return NETDEV_TX_BUSY;
>
> Drop on error, you're not stopping the queue correctly, just drop, return OK
> and avoid bugs.
As I understand it, NETDEV_TX_OK means the driver took care of the packet.
So, in line with the non-dmaengine xmit (axienet_start_xmit_legacy), should
we stop the queue and return TX_BUSY?
>
> > +static inline int axienet_init_dmaengine(struct net_device *ndev)
>
> Why inline? Please remove the spurious inline annotations.
Ok, will fix it in the next version.
>
> > +{
> > + struct axienet_local *lp = netdev_priv(ndev);
> > + int i, ret;
> > +
> > + lp->tx_chan = dma_request_chan(lp->dev, "tx_chan0");
> > + if (IS_ERR(lp->tx_chan)) {
> > + ret = PTR_ERR(lp->tx_chan);
> > + return dev_err_probe(lp->dev, ret, "No Ethernet DMA (TX) channel found\n");
> > + }
> > +
> > + lp->rx_chan = dma_request_chan(lp->dev, "rx_chan0");
> > + if (IS_ERR(lp->rx_chan)) {
> > + ret = PTR_ERR(lp->rx_chan);
> > + dev_err_probe(lp->dev, ret, "No Ethernet DMA (RX) channel found\n");
> > + goto err_dma_request_rx;
>
> name labels after the target, not the source. Makes it a ton easier to
> maintain sanity when changing the code later.
Ok, will rename labels after the target, i.e. goto err_dma_free_tx, and fix
the others as well.
>
> > + }
> > +
> > + lp->skb_cache = kmem_cache_create("ethernet", sizeof(struct axi_skbuff),
> > + 0, 0, NULL);
>
> Why create a cache ?
> Isn't it cleaner to create a fake ring buffer of sgl? Most packets will not have
> MAX_SKB_FRAGS of memory. On a ring buffer you can use only as many sg
> entries as the packet requires. Also no need to alloc/free.
The kmem_cache is used with the intent of using the slab cache interface
and reusing objects in the kernel. The slab cache maintains a cache of
objects: when we free an object, instead of deallocating it, it is given
back to the cache, and the next time we need a new object the slab cache
hands us one from that cache.
If we maintain a custom circular ring buffer (struct circ_buf), we have
to create two such ring buffers, one for TX and one for RX. For
multichannel this multiplies by the number of queues, and we also have
to ensure proper occupancy checks and head/tail pointer updates.
With a kmem_cache pool we offload the queue maintenance to the framework,
with the benefit of optimized alloc/dealloc. Let me know if this looks
functionally fine and can be retained for this baseline dmaengine
support version?
>
> > + if (!lp->skb_cache) {
> > + ret = -ENOMEM;
>
> double space, please fix everywhere
Will fix it in the next version.
>
> > + goto err_kmem;
>
> > @@ -2140,6 +2432,31 @@ static int axienet_probe(struct platform_device *pdev)
> > }
> > netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
> > netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
> > + } else {
> > + struct xilinx_vdma_config cfg;
> > + struct dma_chan *tx_chan;
> > +
> > + lp->eth_irq = platform_get_irq_optional(pdev, 0);
>
> This can't fail?
I will add a check for lp->eth_irq != -ENXIO and its error handling.
>
> > + tx_chan = dma_request_chan(lp->dev, "tx_chan0");
> > +
> > + if (IS_ERR(tx_chan)) {
>
> no empty lines between call and its error check, please fix thru out
Ok, will fix and cross-check all other references.
>
> > + ret = PTR_ERR(tx_chan);
> > + dev_err_probe(lp->dev, ret, "No Ethernet DMA (TX) channel found\n");
> > + goto cleanup_clk;
> > + }
> --
> pw-bot: cr