Message-ID: <CAHNKnsQO27Rv4jdGuSLaQFNxcac1Y-yTubCmVwqeeeBdRoKeVQ@mail.gmail.com>
Date: Tue, 26 Apr 2022 02:52:10 +0300
From: Sergey Ryazanov <ryazanov.s.a@...il.com>
To: Ricardo Martinez <ricardo.martinez@...ux.intel.com>
Cc: netdev@...r.kernel.org, linux-wireless@...r.kernel.org,
Jakub Kicinski <kuba@...nel.org>,
David Miller <davem@...emloft.net>,
Johannes Berg <johannes@...solutions.net>,
Loic Poulain <loic.poulain@...aro.org>,
M Chetan Kumar <m.chetan.kumar@...el.com>,
chandrashekar.devegowda@...el.com,
Intel Corporation <linuxwwan@...el.com>,
chiranjeevi.rapolu@...ux.intel.com,
Haijun Liu (刘海军)
<haijun.liu@...iatek.com>, amir.hanania@...el.com,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
dinesh.sharma@...el.com, eliot.lee@...el.com,
ilpo.johannes.jarvinen@...el.com, moises.veleta@...el.com,
pierre-louis.bossart@...el.com, muralidharan.sethuraman@...el.com,
Soumya.Prakash.Mishra@...el.com, sreehari.kancharla@...el.com,
madhusmita.sahu@...el.com
Subject: Re: [PATCH net-next v6 02/13] net: wwan: t7xx: Add control DMA interface
On Fri, Apr 8, 2022 at 1:37 AM Ricardo Martinez
<ricardo.martinez@...ux.intel.com> wrote:
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
>
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
>
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
>
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Trigger processing via HW register write
>
> CLDMA RX flow:
> 1. CLDMA HW sends an RX "done" to the host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring
>
> Note: This patch does not enable compilation since it has dependencies
> such as t7xx_pcie_mac_clear_int()/t7xx_pcie_mac_set_int() and
> struct t7xx_pci_dev which are added by the core patch.
>
> Signed-off-by: Haijun Liu <haijun.liu@...iatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@...el.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@...ux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@...ux.intel.com>
>
> From a WWAN framework perspective:
> Reviewed-by: Loic Poulain <loic.poulain@...aro.org>
>
> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
[skipped]
> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
[skipped]
> +static int t7xx_cldma_gpd_rx_from_q(struct cldma_queue *queue, int budget, bool *over_budget)
> +{
> +	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
> +	unsigned int hwo_polling_count = 0;
> +	struct t7xx_cldma_hw *hw_info;
> +	bool rx_not_done = true;
> +	unsigned long flags;
> +	int count = 0;
> +
> +	hw_info = &md_ctrl->hw_info;
> +
> +	do {
> +		struct cldma_request *req;
> +		struct cldma_rgpd *rgpd;
> +		struct sk_buff *skb;
> +		int ret;
> +
> +		req = queue->tr_done;
> +		if (!req)
> +			return -ENODATA;
> +
> +		rgpd = req->gpd;
> +		if ((rgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) {
> +			dma_addr_t gpd_addr;
> +
> +			if (!pci_device_is_present(to_pci_dev(md_ctrl->dev))) {
> +				dev_err(md_ctrl->dev, "PCIe Link disconnected\n");
> +				return -ENODEV;
> +			}
> +
> +			gpd_addr = ioread64(hw_info->ap_pdn_base + REG_CLDMA_DL_CURRENT_ADDRL_0 +
> +					    queue->index * sizeof(u64));
> +			if (req->gpd_addr == gpd_addr || hwo_polling_count++ >= 100)
> +				return 0;
> +
> +			udelay(1);
> +			continue;
> +		}
> +
> +		hwo_polling_count = 0;
> +		skb = req->skb;
> +
> +		if (req->mapped_buff) {
> +			dma_unmap_single(md_ctrl->dev, req->mapped_buff,
> +					 t7xx_skb_data_area_size(skb), DMA_FROM_DEVICE);
> +			req->mapped_buff = 0;
> +		}
> +
> +		skb->len = 0;
> +		skb_reset_tail_pointer(skb);
> +		skb_put(skb, le16_to_cpu(rgpd->data_buff_len));
> +
> +		ret = md_ctrl->recv_skb(queue, skb);
> +		if (ret < 0)
> +			return ret;

The execution flow must not be broken here even in the case of an error.
The .recv_skb() callback consumes (frees) the skb even when it returns an
error, but the skb field in req (struct cldma_request) still holds the skb
pointer if the function returns at this point. That stale pointer will
later cause a use-after-free or a double-free.

I have not dug too deeply into the CLDMA layer, so I cannot suggest a
concrete solution, but the error handling path needs to be rechecked.
> +		req->skb = NULL;
> +		t7xx_cldma_rgpd_set_data_ptr(rgpd, 0);
> +
> +		spin_lock_irqsave(&queue->ring_lock, flags);
> +		queue->tr_done = list_next_entry_circular(req, &queue->tr_ring->gpd_ring, entry);
> +		spin_unlock_irqrestore(&queue->ring_lock, flags);
> +		req = queue->rx_refill;
> +
> +		ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size);
> +		if (ret)
> +			return ret;
> +
> +		rgpd = req->gpd;
> +		t7xx_cldma_rgpd_set_data_ptr(rgpd, req->mapped_buff);
> +		rgpd->data_buff_len = 0;
> +		rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
> +
> +		spin_lock_irqsave(&queue->ring_lock, flags);
> +		queue->rx_refill = list_next_entry_circular(req, &queue->tr_ring->gpd_ring, entry);
> +		spin_unlock_irqrestore(&queue->ring_lock, flags);
> +
> +		rx_not_done = ++count < budget || !need_resched();
> +	} while (rx_not_done);
> +
> +	*over_budget = true;
> +	return 0;
> +}
--
Sergey