Message-ID: <aefee6d1-da2c-d081-6bda-b9bd49e8c12f@codeaurora.org>
Date: Mon, 26 Oct 2020 18:18:57 -0700
From: Hemant Kumar <hemantk@...eaurora.org>
To: Loic Poulain <loic.poulain@...aro.org>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
Jeffrey Hugo <jhugo@...eaurora.org>,
Bhaumik Bhatt <bbhatt@...eaurora.org>, netdev@...r.kernel.org
Subject: Re: [PATCH v9 4/4] bus: mhi: Add userspace client interface driver
Hi Loic,
On 10/26/20 10:34 AM, Loic Poulain wrote:
> Hi Hemant,
>
> That looks better IMHO, just small comments on locking.
>
[..]
> +static ssize_t mhi_uci_write(struct file *file,
> +			     const char __user *buf,
> +			     size_t count,
> +			     loff_t *offp)
> +{
> +	struct uci_dev *udev = file->private_data;
> +	struct mhi_device *mhi_dev = udev->mhi_dev;
> +	struct device *dev = &mhi_dev->dev;
> +	struct uci_chan *uchan = udev->uchan;
> +	size_t bytes_xfered = 0;
> +	int ret, nr_avail = 0;
> +
> +	/* if ul channel is not supported return error */
> +	if (!buf || !count || !mhi_dev->ul_chan)
> +		return -EINVAL;
> +
> +	dev_dbg(dev, "%s: to xfer: %zu bytes\n", __func__, count);
> +
> +	mutex_lock(&uchan->write_lock);
>
>
> Maybe mutex_lock_interruptible is more appropriate here (same in read fops).
I agree; I will return -EINTR if mutex_lock_interruptible() returns < 0.
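For reference, something along these lines (just a sketch of the change I have
in mind, not the final patch):

	/* bail out with -EINTR if a signal interrupts the lock acquisition */
	if (mutex_lock_interruptible(&uchan->write_lock))
		return -EINTR;

and the same pattern for read_lock in the read fops.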
>
[..]
> +static ssize_t mhi_uci_read(struct file *file,
> +			    char __user *buf,
> +			    size_t count,
> +			    loff_t *ppos)
> +{
> +	struct uci_dev *udev = file->private_data;
> +	struct mhi_device *mhi_dev = udev->mhi_dev;
> +	struct uci_chan *uchan = udev->uchan;
> +	struct device *dev = &mhi_dev->dev;
> +	struct uci_buf *ubuf;
> +	size_t rx_buf_size;
> +	char *ptr;
> +	size_t to_copy;
> +	int ret = 0;
> +
> +	/* if dl channel is not supported return error */
> +	if (!buf || !mhi_dev->dl_chan)
> +		return -EINVAL;
> +
> +	mutex_lock(&uchan->read_lock);
> +	spin_lock_bh(&uchan->dl_pending_lock);
> +	/* No data available to read, wait */
> +	if (!uchan->cur_buf && list_empty(&uchan->dl_pending)) {
> +		dev_dbg(dev, "No data available to read, waiting\n");
> +
> +		spin_unlock_bh(&uchan->dl_pending_lock);
> +		ret = wait_event_interruptible(uchan->dl_wq,
> +					       (!udev->enabled ||
> +						!list_empty(&uchan->dl_pending)));
>
>
> If you need to protect dl_pending list against concurent access, you
> need to do it in wait_event as well. I would suggest to lookg at
> `wait_event_interruptible_lock_irq` function, that allows to pass a
> locked spinlock as parameter. That would be safer and prevent this
> lock/unlock dance.
The difference when using this API is that we take the lock with
spin_lock_bh(), while the wait helper internally uses
spin_unlock_irq()/spin_lock_irq(). I get "BUG: scheduling while atomic"
when I use it that way. After changing spin_lock_bh() to spin_lock_irq(),
the kernel BUG went away.
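For reference, the variant I tried looks roughly like this (a sketch only,
assuming dl_pending_lock is taken with spin_lock_irq() so that it matches
the spin_unlock_irq()/spin_lock_irq() pair used inside the helper):

	spin_lock_irq(&uchan->dl_pending_lock);
	/* helper drops and re-takes the lock around schedule() with _irq */
	ret = wait_event_interruptible_lock_irq(uchan->dl_wq,
						!udev->enabled ||
						!list_empty(&uchan->dl_pending),
						uchan->dl_pending_lock);
	spin_unlock_irq(&uchan->dl_pending_lock);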
Thanks,
Hemant
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project