Message-ID: <YeGFCYTaQ0GNLhyx@smile.fi.intel.com>
Date: Fri, 14 Jan 2022 16:13:29 +0200
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To: Ricardo Martinez <ricardo.martinez@...ux.intel.com>
Cc: netdev@...r.kernel.org, linux-wireless@...r.kernel.org,
kuba@...nel.org, davem@...emloft.net, johannes@...solutions.net,
ryazanov.s.a@...il.com, loic.poulain@...aro.org,
m.chetan.kumar@...el.com, chandrashekar.devegowda@...el.com,
linuxwwan@...el.com, chiranjeevi.rapolu@...ux.intel.com,
haijun.liu@...iatek.com, amir.hanania@...el.com,
dinesh.sharma@...el.com, eliot.lee@...el.com,
ilpo.johannes.jarvinen@...el.com, moises.veleta@...el.com,
pierre-louis.bossart@...el.com, muralidharan.sethuraman@...el.com,
Soumya.Prakash.Mishra@...el.com, sreehari.kancharla@...el.com
Subject: Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA
interface
On Thu, Jan 13, 2022 at 06:06:16PM -0700, Ricardo Martinez wrote:
> From: Haijun Liu <haijun.liu@...iatek.com>
>
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
>
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
>
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
>
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Triggering processing via HW register write
>
> CLDMA RX flow:
> 1. CLDMA HW sends a RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring
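
For readers following along, the TX flow above maps to roughly this
(a minimal sketch only, not the driver code; the struct layout, the
t7xx_cldma_tgpd_set_data_ptr() helper and the doorbell register name
are assumptions loosely modeled on this patch):

        /* Sketch of the TX flow: map the buffer, fill the GPD,
         * ring the doorbell. Names marked "assumed" are not from
         * the patch.
         */
        static int cldma_tx_one(struct cldma_queue *queue, struct sk_buff *skb)
        {
                struct cldma_request *req = queue->tx_next; /* 1. Port Layer write */
                dma_addr_t db_addr;

                /* 2. Get the DB (data buffer) address: map the skb for DMA */
                db_addr = dma_map_single(queue->dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(queue->dev, db_addr))
                        return -ENOMEM;

                /* 3. Configure the GPD: buffer pointer, length, hand to HW */
                t7xx_cldma_tgpd_set_data_ptr(req->gpd, db_addr); /* assumed */
                req->gpd->data_buff_len = cpu_to_le16(skb->len); /* assumed */
                req->gpd->gpd_flags |= GPD_FLAGS_HWO;

                /* 4. Trigger processing via a HW register (doorbell) write */
                iowrite32(BIT(queue->index), queue->doorbell_reg); /* assumed */

                return 0;
        }
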
...
> + * Copyright (c) 2021, Intel Corporation.
2021-2022 (in all files)?
...
> +#include <linux/dev_printk.h>
> +#include <linux/device.h>
I believe the former is guaranteed to be included in the latter.
The rest of the headers in this file look good to me.
...
> + if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {
> + dev_err(md_ctrl->dev, "DMA mapping failed\n");
Can we first free resources and then print messages?
Printing a message is a slow path, so freeing resources first is a good
(micro-)optimization.
> + dev_kfree_skb_any(req->skb);
> + req->skb = NULL;
> + req->mapped_buff = 0;
> + return -ENOMEM;
> + }
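I.e. something like this (the same statements, just reordered; untested):

        if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {
                /* Free first, print on the way out */
                dev_kfree_skb_any(req->skb);
                req->skb = NULL;
                req->mapped_buff = 0;
                dev_err(md_ctrl->dev, "DMA mapping failed\n");
                return -ENOMEM;
        }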
...
> +static int t7xx_cldma_rx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
> +{
> + struct cldma_request *req, *first_req = NULL;
> + struct cldma_rgpd *prev_gpd, *gpd = NULL;
> + int i;
> +
> + for (i = 0; i < ring->length; i++) {
> + req = t7xx_alloc_rx_request(md_ctrl, ring->pkt_size);
> + if (!req) {
> + t7xx_cldma_ring_free(md_ctrl, ring, DMA_FROM_DEVICE);
> + return -ENOMEM;
> + }
> +
> + gpd = req->gpd;
> + t7xx_cldma_rgpd_set_data_ptr(gpd, req->mapped_buff);
> + gpd->data_allow_len = cpu_to_le16(ring->pkt_size);
> + gpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
> +
> + if (i)
> + t7xx_cldma_rgpd_set_next_ptr(prev_gpd, req->gpd_addr);
> + else
> + first_req = req;
> +
> + INIT_LIST_HEAD(&req->entry);
> + list_add_tail(&req->entry, &ring->gpd_ring);
> + prev_gpd = gpd;
> + }
> + if (first_req)
Under which circumstances is it undefined? Only when ring->length == 0, right?
> + t7xx_cldma_rgpd_set_next_ptr(gpd, first_req->gpd_addr);
Looking into this, perhaps it makes sense to refactor along these lines
(see the sketch below):
1. Define a list head on the stack
2. Iterate over ring->length, adding the elements to that list
3. Iterate over the list, setting the DMA links
   (the t7xx_cldma_rgpd_set_next_ptr() calls)
4. Splice that list onto the main one.
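
A rough sketch of that idea (untested, reusing the names from the patch;
note the empty-ring case then falls out naturally, with no first_req
NULL check needed):

        static int t7xx_cldma_rx_ring_init(struct cldma_ctrl *md_ctrl,
                                           struct cldma_ring *ring)
        {
                struct cldma_request *req, *next;
                LIST_HEAD(tmp); /* 1. List head on the stack */
                int i;

                /* 2. Allocate the requests onto the temporary list */
                for (i = 0; i < ring->length; i++) {
                        req = t7xx_alloc_rx_request(md_ctrl, ring->pkt_size);
                        if (!req) {
                                /* ...free the requests queued on &tmp... */
                                return -ENOMEM;
                        }

                        t7xx_cldma_rgpd_set_data_ptr(req->gpd, req->mapped_buff);
                        req->gpd->data_allow_len = cpu_to_le16(ring->pkt_size);
                        req->gpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
                        list_add_tail(&req->entry, &tmp);
                }

                /* 3. Set the DMA links: each GPD points to the next one,
                 * the last wraps around to the first.
                 */
                list_for_each_entry(req, &tmp, entry) {
                        next = list_next_entry(req, entry);
                        if (list_entry_is_head(next, &tmp, entry))
                                next = list_first_entry(&tmp,
                                                        struct cldma_request,
                                                        entry);
                        t7xx_cldma_rgpd_set_next_ptr(req->gpd, next->gpd_addr);
                }

                /* 4. Splice this list onto the main one */
                list_splice_tail(&tmp, &ring->gpd_ring);

                return 0;
        }
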
> + return 0;
> +}
...
> +static int t7xx_cldma_tx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
> +{
> + struct cldma_request *req, *first_req = NULL;
> + struct cldma_tgpd *tgpd, *prev_gpd;
> + int i;
> +
> + for (i = 0; i < ring->length; i++) {
> + req = t7xx_alloc_tx_request(md_ctrl);
> + if (!req) {
> + t7xx_cldma_ring_free(md_ctrl, ring, DMA_TO_DEVICE);
> + return -ENOMEM;
> + }
> +
> + tgpd = req->gpd;
> + tgpd->gpd_flags = GPD_FLAGS_IOC;
> +
> + if (!first_req)
> + first_req = req;
> + else
> + t7xx_cldma_tgpd_set_next_ptr(prev_gpd, req->gpd_addr);
> +
> + INIT_LIST_HEAD(&req->entry);
> + list_add_tail(&req->entry, &ring->gpd_ring);
> + prev_gpd = tgpd;
> + }
> +
> + if (first_req)
> + t7xx_cldma_tgpd_set_next_ptr(tgpd, first_req->gpd_addr);
> +
> + return 0;
Ditto.
> +}
--
With Best Regards,
Andy Shevchenko