Message-ID: <20240816144403.17756-1-akhilrajeev@nvidia.com>
Date: Fri, 16 Aug 2024 20:14:03 +0530
From: Akhil R <akhilrajeev@...dia.com>
To: <andriy.shevchenko@...el.com>
CC: <andi.shyti@...nel.org>, <apopple@...dia.com>, <digetx@...il.com>,
<jonathanh@...dia.com>, <ldewangan@...dia.com>, <leitao@...ian.org>,
<linux-i2c@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-tegra@...r.kernel.org>, <paulmck@...nel.org>, <rmikey@...a.com>,
<thierry.reding@...il.com>, <akhilrajeev@...dia.com>
Subject: Re: [PATCH] [i2c-tegra] Do not mark ACPI devices as irq safe
>> I think there are two different goals here. The near-term goal is just to
>> fix the driver so that it can use pm_runtime_irq_safe() in a saner way,
>> avoiding taking mutexes inside spinlocks.
>>
>> Getting rid of the IRQ-safe PM seems to me to be more of a desirable
>> long-term goal, and unfortunately I cannot afford to do it now.
>>
>> Laxman, what is your view on this topic?
>
> Yes, please, comment on this. We would like to get rid of the hack named "IRQ
> safe PM runtime".
>
Any thoughts on how we would handle atomic transfers without
pm_runtime_irq_safe()? Would the patch below be a reasonable way to do it?
I haven't tested it, though.
@@ -1373,10 +1373,15 @@ static int tegra_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
 	struct tegra_i2c_dev *i2c_dev = i2c_get_adapdata(adap);
 	int i, ret;
 
-	ret = pm_runtime_get_sync(i2c_dev->dev);
+	if (i2c_dev->atomic_mode)
+		ret = tegra_i2c_runtime_resume(i2c_dev->dev);
+	else
+		ret = pm_runtime_get_sync(i2c_dev->dev);
+
 	if (ret < 0) {
 		dev_err(i2c_dev->dev, "runtime resume failed %d\n", ret);
-		pm_runtime_put_noidle(i2c_dev->dev);
+		if (!i2c_dev->atomic_mode)
+			pm_runtime_put_noidle(i2c_dev->dev);
 		return ret;
 	}
 
@@ -1404,7 +1409,10 @@ static int tegra_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
 		break;
 	}
 
-	pm_runtime_put(i2c_dev->dev);
+	if (i2c_dev->atomic_mode)
+		tegra_i2c_runtime_suspend(i2c_dev->dev);
+	else
+		pm_runtime_put(i2c_dev->dev);
 
 	return ret ?: i;
 }
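
For context, here is a minimal sketch of how I would expect atomic_mode to be
driven in the first place, assuming the driver keeps the flag in struct
tegra_i2c_dev and routes atomic transfers through the master_xfer_atomic
callback of its i2c_algorithm. This is only illustrative, not necessarily
identical to the in-tree code:

/*
 * Sketch only: the atomic entry point sets the flag before delegating to
 * the regular transfer routine, so tegra_i2c_xfer() can take the direct
 * tegra_i2c_runtime_resume()/suspend() calls instead of pm_runtime_*().
 */
static int tegra_i2c_xfer_atomic(struct i2c_adapter *adap,
				 struct i2c_msg msgs[], int num)
{
	struct tegra_i2c_dev *i2c_dev = i2c_get_adapdata(adap);
	int ret;

	i2c_dev->atomic_mode = true;
	ret = tegra_i2c_xfer(adap, msgs, num);
	i2c_dev->atomic_mode = false;

	return ret;
}

static const struct i2c_algorithm tegra_i2c_algo = {
	.master_xfer		= tegra_i2c_xfer,
	.master_xfer_atomic	= tegra_i2c_xfer_atomic,
	.functionality		= tegra_i2c_func,
};

With that in place, only the atomic path bypasses runtime PM, while normal
transfers keep the usual pm_runtime_get_sync()/pm_runtime_put() pairing.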