Message-ID: <c30c9c68-bfe1-0cc5-c511-218f7d1da92d@quicinc.com>
Date: Fri, 29 Sep 2023 09:22:19 -0600
From: Jeffrey Hugo <quic_jhugo@...cinc.com>
To: Qiang Yu <quic_qianyu@...cinc.com>, <mani@...nel.org>
CC: <mhi@...ts.linux.dev>, <linux-arm-msm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <quic_cang@...cinc.com>,
<quic_mrana@...cinc.com>
Subject: Re: [PATCH v2 1/2] bus: mhi: host: Add spinlock to protect WP access
when queueing TREs
On 9/24/2023 9:10 PM, Qiang Yu wrote:
>
> On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
>> On 9/13/2023 2:47 AM, Qiang Yu wrote:
>>> From: Bhaumik Bhatt <bbhatt@...eaurora.org>
>>>
>>> Protect WP accesses such that multiple threads queueing buffers for
>>> incoming data do not race and access the same WP twice. Ensure read and
>>> write locks for the channel are not taken in succession by dropping the
>>> read lock from parse_xfer_event() so that the callback given to the
>>> client can queue buffers and acquire the write lock in the process.
>>> Any queueing of buffers should be done without the channel read lock
>>> held, as holding it can result in nested locking and a soft lockup.
>>>
>>> Signed-off-by: Bhaumik Bhatt <bbhatt@...eaurora.org>
>>> Signed-off-by: Qiang Yu <quic_qianyu@...cinc.com>
>>> ---
>>> drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>> 1 file changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>> index dcf627b..13c4b89 100644
>>> --- a/drivers/bus/mhi/host/main.c
>>> +++ b/drivers/bus/mhi/host/main.c
>>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>>>                          mhi_del_ring_element(mhi_cntrl, tre_ring);
>>>                          local_rp = tre_ring->rp;
>>> +                        read_unlock_bh(&mhi_chan->lock);
>>
>> This doesn't work due to the write_lock_irqsave(&mhi_chan->lock,
>> flags); on line 591.
> write_lock_irqsave(&mhi_chan->lock, flags) is only used in the ev_code >=
> MHI_EV_CC_OOB case. We only read_lock/read_unlock mhi_chan->lock while
> ev_code < MHI_EV_CC_OOB.
Sorry. OOB != EOB
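
For reference, the lock selection at the top of parse_xfer_event() is
(paraphrased from the current main.c):

        if (ev_code >= MHI_EV_CC_OOB)
                write_lock_irqsave(&mhi_chan->lock, flags);
        else
                read_lock_bh(&mhi_chan->lock);

so the EOT/EOB path this patch changes only ever holds the read lock.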
>>
>> I really don't like that we are unlocking the mhi_chan while still
>> using it. It opens up a window where the mhi_chan state can be
>> updated between here and the client using the callback to queue a buf.
>>
>> Perhaps we need a new lock that just protects the wp, and needs to be
>> only grabbed while mhi_chan->lock is held?
>
> Since we already use the mhi_chan lock to protect the channel, and the
> concern here is that a client may queue a buffer to a disabled or
> stopped channel, can we check the channel state after taking
> mhi_chan->lock, as is done at line 595?
>
> We can add the check after taking the write lock in mhi_gen_tre() and
> after re-taking the read lock here.
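
If I follow, the check you propose would look something like this after
taking the lock in mhi_gen_tre() (just a sketch; the error code and exact
placement are my assumptions):

        write_lock_bh(&mhi_chan->lock);
        if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
                write_unlock_bh(&mhi_chan->lock);
                return -ENODEV;
        }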
I'm not sure that is sufficient. After you unlock to notify the client,
MHI is going to manipulate the packet count and runtime_pm without the
lock held (lines 648-652). It seems like that introduces additional races
which won't be covered by the check you propose.
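
To illustrate the earlier suggestion: a dedicated lock that protects only
the WP manipulation, taken while mhi_chan->lock is already held. This is
only a sketch, and wp_lock is a hypothetical new field in struct mhi_chan:

        /* in mhi_gen_tre(), with mhi_chan->lock already held by the caller */
        spin_lock_bh(&mhi_chan->wp_lock);       /* hypothetical new lock */
        mhi_tre = tre_ring->wp;                 /* read WP and fill the TRE */
        mhi_add_ring_element(mhi_cntrl, tre_ring);
        mhi_add_ring_element(mhi_cntrl, buf_ring);
        spin_unlock_bh(&mhi_chan->wp_lock);

That would serialize queueing threads on the WP without dropping the
channel lock around the client callback.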
>
>>
>>>                          /* notify client */
>>>                          mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>>> @@ -667,6 +668,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>>>                                          kfree(buf_info->cb_buf);
>>>                                  }
>>>                          }
>>> +                        read_lock_bh(&mhi_chan->lock);
>>>                  }
>>>                  break;
>>>          } /* CC_EOT */
>>> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>          int eot, eob, chain, bei;
>>>          int ret;
>>>
>>> +        /* Protect accesses for reading and incrementing WP */
>>> +        write_lock_bh(&mhi_chan->lock);
>>> +
>>>          buf_ring = &mhi_chan->buf_ring;
>>>          tre_ring = &mhi_chan->tre_ring;
>>> @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>          if (!info->pre_mapped) {
>>>                  ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
>>> -                if (ret)
>>> +                if (ret) {
>>> +                        write_unlock_bh(&mhi_chan->lock);
>>>                          return ret;
>>> +                }
>>>          }
>>>
>>>          eob = !!(flags & MHI_EOB);
>>> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>          mhi_add_ring_element(mhi_cntrl, tre_ring);
>>>          mhi_add_ring_element(mhi_cntrl, buf_ring);
>>>
>>> +        write_unlock_bh(&mhi_chan->lock);
>>> +
>>>          return 0;
>>>  }
>>