Message-ID: <cb5cfffe-dbba-4da2-ac1f-de4a9a31f057@oss.qualcomm.com>
Date: Tue, 2 Dec 2025 10:54:12 -0800
From: Mayank Rana <mayank.rana@....qualcomm.com>
To: Krishna Chaitanya Chundru <krishna.chundru@....qualcomm.com>,
Manivannan Sadhasivam <mani@...nel.org>,
Jeff Hugo <jeff.hugo@....qualcomm.com>,
Carl Vanderlip <carl.vanderlip@....qualcomm.com>,
Oded Gabbay <ogabbay@...nel.org>, Jeff Johnson <jjohnson@...nel.org>,
Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller"
<davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: mhi@...ts.linux.dev, linux-arm-msm@...r.kernel.org,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linux-wireless@...r.kernel.org, ath11k@...ts.infradead.org,
ath12k@...ts.infradead.org, netdev@...r.kernel.org,
quic_vbadigan@...cinc.com, vivek.pernamitta@....qualcomm.com
Subject: Re: [PATCH 4/4] bus: mhi: Fix broken runtime PM design
On 12/1/2025 9:26 PM, Krishna Chaitanya Chundru wrote:
>
>
> On 12/2/2025 12:03 AM, Mayank Rana wrote:
>> Hi Krishna
>>
>> On 12/1/2025 4:43 AM, Krishna Chaitanya Chundru wrote:
>>> The current MHI runtime PM design is flawed, as the MHI core attempts to
>>> manage power references internally via mhi_queue() and related paths.
>>> This is problematic because the controller drivers have no knowledge of
>>> the client PM status due to the broken PM topology, so when they runtime
>>> suspend the controller, the client drivers can no longer function.
>>>
>>> To address this, in the new design the client drivers now report their
>>> own runtime PM status, and the PM framework makes sure that the parent
>>> (the controller) and other components up the chain remain active. This
>>> leverages the standard parent-child PM relationship.
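(For illustration, a minimal sketch of what that parent-child propagation
buys a client driver; my_client_send() is hypothetical, only the
pm_runtime_*() and mhi_queue_skb() calls are real API:)

#include <linux/mhi.h>
#include <linux/pm_runtime.h>
#include <linux/skbuff.h>

static int my_client_send(struct mhi_device *mhi_dev, struct sk_buff *skb)
{
        int ret;

        /*
         * Resuming the mhi_dev child also resumes and pins its parent
         * (the controller device) through the PM core, so the client
         * no longer has to know the controller's PM state.
         */
        ret = pm_runtime_resume_and_get(&mhi_dev->dev);
        if (ret < 0)
                return ret;

        ret = mhi_queue_skb(mhi_dev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT);

        pm_runtime_mark_last_busy(&mhi_dev->dev);
        pm_runtime_put_autosuspend(&mhi_dev->dev);

        return ret;
}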
>>>
>>> Since MHI creates a mhi_dev device without an associated driver, we
>>> explicitly enable runtime PM on it and mark it with
>>> pm_runtime_no_callbacks() to indicate to the PM core that no callbacks
>>> exist for this device. This is only needed for the MHI controller, since
>>> the controller driver uses the bus device just like a PCI device.
>>>
>>> Also, update the MHI core to explicitly manage runtime PM references in
>>> __mhi_device_get_sync() and mhi_device_put() to ensure the controller
>>> does not enter suspend while a client device is active.
>> Why is this needed here?
>> Shouldn't the MHI client driver take care of allowing suspend?
>> Do you think we should remove the mhi_device_get_sync() and
>> mhi_device_put() API interfaces as well, and let the controller/client
>> drivers take care of calling get/sync directly?
> These APIs not only do runtime get & put, but also wake_get &
> wake_put, which make sure the endpoint doesn't enter the M1 state.
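Right - so condensed, the new pairing looks roughly like this (derived
from the pm.c hunks below; locking, state checks and error handling
elided):

int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
{
        /* pin the controller through the mhi_dev child */
        pm_runtime_get_sync(&mhi_cntrl->mhi_dev->dev);

        /* assert dev_wake so the endpoint exits/avoids M1/M2 */
        mhi_cntrl->wake_get(mhi_cntrl, true);

        /* ... wait for M0 before returning ... */
        return 0;
}

void mhi_device_put(struct mhi_device *mhi_dev)
{
        struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;

        /* drop the dev_wake vote ... */
        mhi_cntrl->wake_put(mhi_cntrl, false);

        /* ... then let the controller runtime suspend again */
        pm_runtime_put(&mhi_cntrl->mhi_dev->dev);
}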
That said, here we are using runtime PM APIs against two different
devices in this core MHI driver:
1. mhi_cntrl->cntrl_dev
2. mhi_cntrl->mhi_dev->dev
Those are separate devices, and we are mixing their usage here.
Is that correct, or am I seeing it differently?
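For reference, the two struct mhi_controller members in question
(comments paraphrased from include/linux/mhi.h):

struct mhi_controller {
        struct device *cntrl_dev;   /* physical bus/transport device, e.g. the PCI device */
        struct mhi_device *mhi_dev; /* MHI device instance created by the core for the controller */
        /* ... */
};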
Regards,
Mayank
>>
>> How are you handling the M0 and M3 suspend cases?
>> Do we need to tie runtime PM usage to M0 (pm_runtime_get) and M3
>> (pm_runtime_put)?
> M3 is controlled by the controller driver; they usually handle it as
> part of their runtime suspend, and M0 as part of runtime resume.
> Only once the MHI driver does pm_runtime_put() can the controller
> keep MHI in M3.
> So we can't tie the MHI states to pm_runtime_get/put.
OK, sounds good.
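So the mapping would be roughly (a hypothetical controller driver,
modeled on what pci_generic.c does; everything except mhi_pm_suspend()
and mhi_pm_resume() is illustrative):

#include <linux/device.h>
#include <linux/mhi.h>
#include <linux/pm_runtime.h>

static int my_cntrl_runtime_suspend(struct device *dev)
{
        struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);

        /*
         * Reached only after the mhi_dev child has dropped its last
         * runtime PM reference, so it is safe to move MHI from M0 to
         * M3 here.
         */
        return mhi_pm_suspend(mhi_cntrl);
}

static int my_cntrl_runtime_resume(struct device *dev)
{
        struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);

        /* bring MHI back from M3 to M0 before clients resume I/O */
        return mhi_pm_resume(mhi_cntrl);
}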
>>
>> Regards,
>> Mayank
>>
>>> Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@....qualcomm.com>
>>> ---
>>> drivers/bus/mhi/host/internal.h | 6 +++---
>>> drivers/bus/mhi/host/main.c | 28 ++++------------------------
>>> drivers/bus/mhi/host/pm.c | 18 ++++++++----------
>>> 3 files changed, 15 insertions(+), 37 deletions(-)
>>>
>>> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
>>> index 61e03298e898e6dd02d2a977cddc4c87b21e3a6c..d6a3168bb3ecc34eab1036c0e74f8d70cf422fed 100644
>>> --- a/drivers/bus/mhi/host/internal.h
>>> +++ b/drivers/bus/mhi/host/internal.h
>>> @@ -355,9 +355,9 @@ static inline bool mhi_is_active(struct mhi_controller *mhi_cntrl)
>>> static inline void mhi_trigger_resume(struct mhi_controller *mhi_cntrl)
>>> {
>>> pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
>>> - pm_runtime_get(mhi_cntrl->cntrl_dev);
>>> - pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
>>> - pm_runtime_put(mhi_cntrl->cntrl_dev);
>>> + pm_runtime_get(&mhi_cntrl->mhi_dev->dev);
>>> + pm_runtime_mark_last_busy(&mhi_cntrl->mhi_dev->dev);
>>> + pm_runtime_put(&mhi_cntrl->mhi_dev->dev);
>>> }
>>> /* Register access methods */
>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>> index 7ac1162a0a81ae11245a2bbd9bf6fd6c0f86fbc1..85a9a5a62a6d3f92b0e9dc35b13fd867db89dd95 100644
>>> --- a/drivers/bus/mhi/host/main.c
>>> +++ b/drivers/bus/mhi/host/main.c
>>> @@ -427,6 +427,8 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
>>> if (ret)
>>> put_device(&mhi_dev->dev);
>>> }
>>> + pm_runtime_no_callbacks(&mhi_cntrl->mhi_dev->dev);
>>> + devm_pm_runtime_set_active_enabled(&mhi_cntrl->mhi_dev->dev);
>>> }
>>> irqreturn_t mhi_irq_handler(int irq_number, void *dev)
>>> @@ -658,12 +660,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>>> /* notify client */
>>> mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>>> - if (mhi_chan->dir == DMA_TO_DEVICE) {
>>> + if (mhi_chan->dir == DMA_TO_DEVICE)
>>> atomic_dec(&mhi_cntrl->pending_pkts);
>>> - /* Release the reference got from mhi_queue() */
>>> - pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
>>> - pm_runtime_put(mhi_cntrl->cntrl_dev);
>>> - }
>>> /*
>>> * Recycle the buffer if buffer is pre-allocated,
>>> @@ -1152,12 +1150,6 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
>>> read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
>>> - /* Packet is queued, take a usage ref to exit M3 if necessary
>>> - * for host->device buffer, balanced put is done on buffer completion
>>> - * for device->host buffer, balanced put is after ringing the DB
>>> - */
>>> - pm_runtime_get(mhi_cntrl->cntrl_dev);
>>> -
>>> /* Assert dev_wake (to exit/prevent M1/M2)*/
>>> mhi_cntrl->wake_toggle(mhi_cntrl);
>>> @@ -1167,11 +1159,6 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
>>> if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
>>> mhi_ring_chan_db(mhi_cntrl, mhi_chan);
>>> - if (dir == DMA_FROM_DEVICE) {
>>> - pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
>>> - pm_runtime_put(mhi_cntrl->cntrl_dev);
>>> - }
>>> -
>>> read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
>>> return ret;
>>> @@ -1377,7 +1364,6 @@ static int mhi_update_channel_state(struct mhi_controller *mhi_cntrl,
>>> ret = mhi_device_get_sync(mhi_cntrl->mhi_dev);
>>> if (ret)
>>> return ret;
>>> - pm_runtime_get(mhi_cntrl->cntrl_dev);
>>> reinit_completion(&mhi_chan->completion);
>>> ret = mhi_send_cmd(mhi_cntrl, mhi_chan, cmd);
>>> @@ -1408,8 +1394,6 @@ static int mhi_update_channel_state(struct mhi_controller *mhi_cntrl,
>>> trace_mhi_channel_command_end(mhi_cntrl, mhi_chan, to_state,
>>> TPS("Updated"));
>>> exit_channel_update:
>>> - pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
>>> - pm_runtime_put(mhi_cntrl->cntrl_dev);
>>> mhi_device_put(mhi_cntrl->mhi_dev);
>>> return ret;
>>> @@ -1592,12 +1576,8 @@ static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
>>> while (tre_ring->rp != tre_ring->wp) {
>>> struct mhi_buf_info *buf_info = buf_ring->rp;
>>> - if (mhi_chan->dir == DMA_TO_DEVICE) {
>>> + if (mhi_chan->dir == DMA_TO_DEVICE)
>>> atomic_dec(&mhi_cntrl->pending_pkts);
>>> - /* Release the reference got from mhi_queue() */
>>> - pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
>>> - pm_runtime_put(mhi_cntrl->cntrl_dev);
>>> - }
>>> if (!buf_info->pre_mapped)
>>> mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
>>> diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
>>> index b4ef115189b505c3450ff0949ad2d09f3ed53386..fd690e8af693109ed8c55248db0ea153f9e69423 100644
>>> --- a/drivers/bus/mhi/host/pm.c
>>> +++ b/drivers/bus/mhi/host/pm.c
>>> @@ -429,6 +429,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
>>> if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>>> ret = -EIO;
>>> + read_unlock_bh(&mhi_cntrl->pm_lock);
>>> goto error_mission_mode;
>>> }
>>> @@ -459,11 +460,9 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
>>> */
>>> mhi_create_devices(mhi_cntrl);
>>> - read_lock_bh(&mhi_cntrl->pm_lock);
>>> error_mission_mode:
>>> - mhi_cntrl->wake_put(mhi_cntrl, false);
>>> - read_unlock_bh(&mhi_cntrl->pm_lock);
>>> + mhi_device_put(mhi_cntrl->mhi_dev);
>>> return ret;
>>> }
>>> @@ -1038,9 +1037,11 @@ int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
>>> read_unlock_bh(&mhi_cntrl->pm_lock);
>>> return -EIO;
>>> }
>>> + read_unlock_bh(&mhi_cntrl->pm_lock);
>>> +
>>> + pm_runtime_get_sync(&mhi_cntrl->mhi_dev->dev);
>>> + read_lock_bh(&mhi_cntrl->pm_lock);
>>> mhi_cntrl->wake_get(mhi_cntrl, true);
>>> - if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
>>> - mhi_trigger_resume(mhi_cntrl);
>>> read_unlock_bh(&mhi_cntrl->pm_lock);
>>> ret = wait_event_timeout(mhi_cntrl->state_event,
>>> @@ -1049,9 +1050,7 @@ int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
>>> msecs_to_jiffies(mhi_cntrl->timeout_ms));
>>> if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>>> - read_lock_bh(&mhi_cntrl->pm_lock);
>>> - mhi_cntrl->wake_put(mhi_cntrl, false);
>>> - read_unlock_bh(&mhi_cntrl->pm_lock);
>>> + mhi_device_put(mhi_cntrl->mhi_dev);
>>> return -EIO;
>>> }
>>> @@ -1339,11 +1338,10 @@ void mhi_device_put(struct mhi_device *mhi_dev)
>>> mhi_dev->dev_wake--;
>>> read_lock_bh(&mhi_cntrl->pm_lock);
>>> - if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
>>> - mhi_trigger_resume(mhi_cntrl);
>>> mhi_cntrl->wake_put(mhi_cntrl, false);
>>> read_unlock_bh(&mhi_cntrl->pm_lock);
>>> + pm_runtime_put(&mhi_cntrl->mhi_dev->dev);
>>> }
>>> EXPORT_SYMBOL_GPL(mhi_device_put);
>>>
>>
>