Message-ID: <20250915152331.0000246a@huawei.com>
Date: Mon, 15 Sep 2025 15:23:31 +0100
From: Jonathan Cameron <jonathan.cameron@...wei.com>
To: Nathan Lynch via B4 Relay <devnull+nathan.lynch.amd.com@...nel.org>
CC: <nathan.lynch@....com>, Vinod Koul <vkoul@...nel.org>,
	Wei Huang <wei.huang2@....com>,
	Mario Limonciello <mario.limonciello@....com>,
	"Bjorn Helgaas" <bhelgaas@...gle.com>, <linux-pci@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, <dmaengine@...r.kernel.org>
Subject: Re: [PATCH RFC 09/13] dmaengine: sdxi: Add core device management
code
On Fri, 05 Sep 2025 13:48:32 -0500
Nathan Lynch via B4 Relay <devnull+nathan.lynch.amd.com@...nel.org> wrote:
> From: Nathan Lynch <nathan.lynch@....com>
>
> Add code that manages device initialization and exit and provides
> entry points for the PCI driver code to come.
I'd prefer a patch series that started with the PCI device and built up
the functionality from the earlier patches and this one on top of it.
Doing that would allow each patch to be fully tested and reviewed on its
own. However, it's not my driver or subsystem, so it's up to others
whether they care!
One request for more info inline.
>
> Co-developed-by: Wei Huang <wei.huang2@....com>
> Signed-off-by: Wei Huang <wei.huang2@....com>
> Signed-off-by: Nathan Lynch <nathan.lynch@....com>
> ---
> +
> +/* Refer to "Activation of the SDXI Function by Software". */
> +static int sdxi_fn_activate(struct sdxi_dev *sdxi)
> +{
> +	const struct sdxi_dev_ops *ops = sdxi->dev_ops;
> +	u64 cxt_l2;
> +	u64 cap0;
> +	u64 cap1;
> +	u64 ctl2;
Combine these u64 declarations on one line.
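i.e. something like

	u64 cxt_l2, cap0, cap1, ctl2;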
> +	int err;
> +
> +	/*
> +	 * Clear any existing configuration from MMIO_CTL0 and ensure
> +	 * the function is in GSV_STOP state.
> +	 */
> +	sdxi_write64(sdxi, SDXI_MMIO_CTL0, 0);
> +	err = sdxi_dev_stop(sdxi);
> +	if (err)
> +		return err;
> +
> +	/*
> +	 * 1.a. Discover limits and implemented features via MMIO_CAP0
> +	 * and MMIO_CAP1.
> +	 */
> +	cap0 = sdxi_read64(sdxi, SDXI_MMIO_CAP0);
> +
> +void sdxi_device_exit(struct sdxi_dev *sdxi)
> +{
> +	sdxi_working_cxt_exit(sdxi->dma_cxt);
> +
> +	/* Walk sdxi->cxt_array freeing any allocated rows. */
> +	for (size_t i = 0; i < L2_TABLE_ENTRIES; ++i) {
> +		if (!sdxi->cxt_array[i])
> +			continue;
> +		/* When a context is released, its entry in the table should be NULL. */
> +		for (size_t j = 0; j < L1_TABLE_ENTRIES; ++j) {
> +			struct sdxi_cxt *cxt = sdxi->cxt_array[i][j];
> +
> +			if (!cxt)
> +				continue;
> +			if (cxt->id != 0) /* admin context shutdown is last */
> +				sdxi_working_cxt_exit(cxt);
> +			sdxi->cxt_array[i][j] = NULL;
> +		}
> +		if (i != 0) /* another special case for admin cxt */
> +			kfree(sdxi->cxt_array[i]);
> +	}
> +
> +	sdxi_working_cxt_exit(sdxi->admin_cxt);
> +	kfree(sdxi->cxt_array[0]); /* ugh */
The constraints here need to be described a little more clearly.
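Maybe something like the following (untested sketch, and assuming the
id 0 entry is always sdxi->admin_cxt and must outlive all of the
working contexts):

	/* All working contexts must go before the admin context. */
	for (size_t i = 0; i < L2_TABLE_ENTRIES; ++i) {
		if (!sdxi->cxt_array[i])
			continue;
		for (size_t j = 0; j < L1_TABLE_ENTRIES; ++j) {
			struct sdxi_cxt *cxt = sdxi->cxt_array[i][j];

			if (cxt && cxt != sdxi->admin_cxt)
				sdxi_working_cxt_exit(cxt);
			sdxi->cxt_array[i][j] = NULL;
		}
	}

	/* Admin context last, then free every row unconditionally. */
	sdxi_working_cxt_exit(sdxi->admin_cxt);
	for (size_t i = 0; i < L2_TABLE_ENTRIES; ++i) {
		kfree(sdxi->cxt_array[i]);
		sdxi->cxt_array[i] = NULL;
	}

That would put the ordering requirement in one obvious place instead of
scattering it across the special cases.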
> +
> +	sdxi_stop(sdxi);
> +	sdxi_error_exit(sdxi);
> +	if (sdxi->dev_ops && sdxi->dev_ops->irq_exit)
> +		sdxi->dev_ops->irq_exit(sdxi);
> +}