Message-ID: <20211207071939.GA70121@thinkpad>
Date: Tue, 7 Dec 2021 12:49:39 +0530
From: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
To: Jakub Kicinski <kuba@...nel.org>,
"David S . Miller" <davem@...emloft.net>
Cc: loic.poulain@...aro.org, hemantk@...eaurora.org,
bbhatt@...eaurora.org, linux-arm-msm@...r.kernel.org,
netdev@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
mhi@...ts.linux.dev
Subject: Re: [PATCH v2] bus: mhi: core: Add an API for auto queueing buffers
for DL channel
On Tue, Dec 07, 2021 at 12:43:39PM +0530, Manivannan Sadhasivam wrote:
> Add a new API, "mhi_prepare_for_transfer_autoqueue", for use by client
> drivers such as QRTR to request the MHI core to automatically queue
> buffers for the DL channel, in addition to starting both UL and DL
> channels.
>
> So far, the "auto_queue" flag specified by the controller drivers in the
> channel definition has served this purpose, but it will be removed at
> some point in the future.
>
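
For context, a rough sketch of what a client driver's probe path looks
like with the new API; "foo_mhi_probe" and its surroundings are made-up
placeholders, and the real conversion for QRTR is in the diff below:

  static int foo_mhi_probe(struct mhi_device *mhi_dev,
                           const struct mhi_device_id *id)
  {
          int rc;

          /*
           * Start the UL/DL channels; the MHI core also allocates and
           * queues the DL buffers, so the driver never queues its own
           * buffers for the inbound direction.
           */
          rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
          if (rc)
                  return rc;

          /* ... register with the upper layer, set driver data, etc. ... */

          return 0;
  }
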
> Cc: netdev@...r.kernel.org
> Cc: Jakub Kicinski <kuba@...nel.org>
> Cc: David S. Miller <davem@...emloft.net>
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Co-developed-by: Loic Poulain <loic.poulain@...aro.org>
> Signed-off-by: Loic Poulain <loic.poulain@...aro.org>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
> ---
>
> Changes in v2:
>
> * Rebased on top of 5.16-rc1
> * Fixed an issue reported by kernel test bot
> * CCed netdev folks and Greg

Dave, Jakub, this patch should go through the MHI tree. Since it touches the
QRTR driver, can you please give an ACK?

Thanks,
Mani

> * Slight change to the commit subject to reflect the "core" sub-directory
>
> drivers/bus/mhi/core/internal.h | 6 +++++-
> drivers/bus/mhi/core/main.c | 21 +++++++++++++++++----
> include/linux/mhi.h | 21 ++++++++++++++++-----
> net/qrtr/mhi.c | 2 +-
> 4 files changed, 39 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> index 9d72b1d1e986..e2e10474a9d9 100644
> --- a/drivers/bus/mhi/core/internal.h
> +++ b/drivers/bus/mhi/core/internal.h
> @@ -682,8 +682,12 @@ void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
> void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
> struct image_info *img_info);
> void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);
> +
> +/* Automatically allocate and queue inbound buffers */
> +#define MHI_CH_INBOUND_ALLOC_BUFS BIT(0)
> int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
> - struct mhi_chan *mhi_chan);
> + struct mhi_chan *mhi_chan, unsigned int flags);
> +
> int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
> struct mhi_chan *mhi_chan);
> void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
> index 930aba666b67..ffde617f93a3 100644
> --- a/drivers/bus/mhi/core/main.c
> +++ b/drivers/bus/mhi/core/main.c
> @@ -1430,7 +1430,7 @@ static void mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
> }
>
> int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
> - struct mhi_chan *mhi_chan)
> + struct mhi_chan *mhi_chan, unsigned int flags)
> {
> int ret = 0;
> struct device *dev = &mhi_chan->mhi_dev->dev;
> @@ -1455,6 +1455,9 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
> if (ret)
> goto error_pm_state;
>
> + if (mhi_chan->dir == DMA_FROM_DEVICE)
> + mhi_chan->pre_alloc = !!(flags & MHI_CH_INBOUND_ALLOC_BUFS);
> +
> /* Pre-allocate buffer for xfer ring */
> if (mhi_chan->pre_alloc) {
> int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
> @@ -1610,8 +1613,7 @@ void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
> read_unlock_bh(&mhi_cntrl->pm_lock);
> }
>
> -/* Move channel to start state */
> -int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
> +static int __mhi_prepare_for_transfer(struct mhi_device *mhi_dev, unsigned int flags)
> {
> int ret, dir;
> struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
> @@ -1622,7 +1624,7 @@ int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
> if (!mhi_chan)
> continue;
>
> - ret = mhi_prepare_channel(mhi_cntrl, mhi_chan);
> + ret = mhi_prepare_channel(mhi_cntrl, mhi_chan, flags);
> if (ret)
> goto error_open_chan;
> }
> @@ -1640,8 +1642,19 @@ int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
>
> return ret;
> }
> +
> +int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
> +{
> + return __mhi_prepare_for_transfer(mhi_dev, 0);
> +}
> EXPORT_SYMBOL_GPL(mhi_prepare_for_transfer);
>
> +int mhi_prepare_for_transfer_autoqueue(struct mhi_device *mhi_dev)
> +{
> + return __mhi_prepare_for_transfer(mhi_dev, MHI_CH_INBOUND_ALLOC_BUFS);
> +}
> +EXPORT_SYMBOL_GPL(mhi_prepare_for_transfer_autoqueue);
> +
> void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
> {
> struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
> diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> index 723985879035..271db1d6da85 100644
> --- a/include/linux/mhi.h
> +++ b/include/linux/mhi.h
> @@ -717,15 +717,26 @@ void mhi_device_put(struct mhi_device *mhi_dev);
>
> /**
> * mhi_prepare_for_transfer - Setup UL and DL channels for data transfer.
> - * Allocate and initialize the channel context and
> - * also issue the START channel command to both
> - * channels. Channels can be started only if both
> - * host and device execution environments match and
> - * channels are in a DISABLED state.
> * @mhi_dev: Device associated with the channels
> + *
> + * Allocate and initialize the channel context and also issue the START channel
> + * command to both channels. Channels can be started only if both host and
> + * device execution environments match and channels are in a DISABLED state.
> */
> int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
>
> +/**
> + * mhi_prepare_for_transfer_autoqueue - Setup UL and DL channels with auto queue
> + * buffers for DL traffic
> + * @mhi_dev: Device associated with the channels
> + *
> + * Allocate and initialize the channel context and also issue the START channel
> + * command to both channels. Channels can be started only if both host and
> + * device execution environments match and channels are in a DISABLED state.
> + * The MHI core will automatically allocate and queue buffers for the DL traffic.
> + */
> +int mhi_prepare_for_transfer_autoqueue(struct mhi_device *mhi_dev);
> +
> /**
> * mhi_unprepare_from_transfer - Reset UL and DL channels for data transfer.
> * Issue the RESET channel command and let the
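
The kernel-doc for mhi_prepare_for_transfer_autoqueue above notes that the
MHI core allocates and queues the DL buffers itself; on the client side
that means the dl_callback only consumes the data and does not queue
replacement buffers (the core recycles the pre-allocated buffers once the
callback returns). A minimal sketch, where "foo_mhi_dl_callback" and
"foo_receive" are made-up placeholders:

  static void foo_mhi_dl_callback(struct mhi_device *mhi_dev,
                                  struct mhi_result *mhi_res)
  {
          if (mhi_res->transaction_status)
                  return;

          /* Hand the payload to the upper layer; no re-queueing needed */
          foo_receive(mhi_dev, mhi_res->buf_addr, mhi_res->bytes_xferd);
  }
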
> diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
> index fa611678af05..18196e1c8c2f 100644
> --- a/net/qrtr/mhi.c
> +++ b/net/qrtr/mhi.c
> @@ -79,7 +79,7 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
> int rc;
>
> /* start channels */
> - rc = mhi_prepare_for_transfer(mhi_dev);
> + rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
> if (rc)
> return rc;
>
> --
> 2.25.1
>