Message-ID: <20200522131031.GL2163848@ulmo>
Date: Fri, 22 May 2020 15:10:31 +0200
From: Thierry Reding <thierry.reding@...il.com>
To: Dan Carpenter <dan.carpenter@...cle.com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>, dinghao.liu@....edu.cn,
devel@...verdev.osuosl.org,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Len Brown <len.brown@...el.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Linux PM <linux-pm@...r.kernel.org>, Kangjie Lu <kjlu@....edu>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jonathan Hunter <jonathanh@...dia.com>,
Pavel Machek <pavel@....cz>,
linux-tegra <linux-tegra@...r.kernel.org>,
Dmitry Osipenko <digetx@...il.com>,
Mauro Carvalho Chehab <mchehab@...nel.org>,
linux-media@...r.kernel.org
Subject: Re: Re: [PATCH] media: staging: tegra-vde: fix runtime pm imbalance on error

On Thu, May 21, 2020 at 08:39:02PM +0300, Dan Carpenter wrote:
> On Thu, May 21, 2020 at 05:22:05PM +0200, Rafael J. Wysocki wrote:
> > On Thu, May 21, 2020 at 11:15 AM Dan Carpenter <dan.carpenter@...cle.com> wrote:
> > >
> > > On Thu, May 21, 2020 at 11:42:55AM +0800, dinghao.liu@....edu.cn wrote:
> > > > Hi, Dan,
> > > >
> > > > I agree the best solution is to fix __pm_runtime_resume(). But there are also
> > > > many cases that assume pm_runtime_get_sync() will change the PM usage
> > > > counter on error. According to my static analysis results, the number of these
> > > > "right" cases is larger. Adjusting __pm_runtime_resume() directly would introduce
> > > > new bugs, so I think we should resolve the "bug" cases individually.
> > > >
> > >
> > > That's why I was saying that we may need to introduce a new replacement
> > > function for pm_runtime_get_sync() that works as expected.
> > >
> > > There is no reason why we have to live with the old behavior.
> >
> > What exactly do you mean by "the old behavior"?
>
> I'm suggesting we leave pm_runtime_get_sync() alone but add a new
> function called pm_runtime_get_sync_resume() which does something
> like this:
>
> static inline int pm_runtime_get_sync_resume(struct device *dev)
> {
> 	int ret;
>
> 	ret = __pm_runtime_resume(dev, RPM_GET_PUT);
> 	if (ret < 0) {
> 		pm_runtime_put(dev);
> 		return ret;
> 	}
> 	return 0;
> }
>
> I'm not sure whether pm_runtime_put() is the correct thing to do there.
> The other thing is that this always returns zero on success; I don't
> know whether drivers ever care to differentiate between returns of one
> and zero.
>
> Then if any of the callers expect that behavior, we update them to use
> the new function.
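
For context, I suppose a call site using that new helper would end up
looking roughly like this (hypothetical sketch, not from a real driver):

	err = pm_runtime_get_sync_resume(dev);
	if (err < 0)
		return err; /* the helper has already dropped the reference */

	/* ... access the now-resumed hardware ... */

	pm_runtime_put(dev);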

Does that really have many benefits, though? I understand that this
would perhaps be easier to use because it is more in line with how other
functions operate. On the other hand, in some cases you may want to call
a different version of pm_runtime_put() on failure, as discussed in
other threads.
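
For instance, depending on the call site, the failure path may want
pm_runtime_put_noidle() rather than the plain pm_runtime_put() that the
helper above hard-codes. Roughly (just a sketch):

	err = pm_runtime_get_sync(dev);
	if (err < 0) {
		/*
		 * Only drop the reference; don't queue an idle request
		 * for a device that just failed to resume. Other call
		 * sites may prefer pm_runtime_put() or even
		 * pm_runtime_put_sync().
		 */
		pm_runtime_put_noidle(dev);
		return err;
	}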

Even ignoring that issue, any existing callsites that are leaking the
reference would have to be updated to call the new function, which would
be pretty much the same amount of work as updating the callsites to fix
the leak, right?

So if instead we just fix up the leaks, we might have a case of an API
that doesn't work the way some of us (myself included) expected, but at
least it would be consistent. If we add another variant, things become
fragmented and therefore even more complicated to use and review.
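
To illustrate, a fixed-up call site using the existing API isn't much
more code than switching to a new helper would be (hypothetical example,
the function and names are made up):

static int foo_power_on(struct foo *foo)
{
	int err;

	err = pm_runtime_get_sync(foo->dev);
	if (err < 0) {
		/* balance the reference taken by pm_runtime_get_sync() */
		pm_runtime_put_noidle(foo->dev);
		return err;
	}

	/* ... program the hardware ... */

	pm_runtime_put(foo->dev);
	return 0;
}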

Thierry