Message-ID: <CAE-0n53=AuYcBSTKkvDmNHpLMq7j4yTeMh5j80uN5dobqvC5ag@mail.gmail.com>
Date: Wed, 23 Jun 2021 01:31:13 -0700
From: Stephen Boyd <swboyd@...omium.org>
To: Ulf Hansson <ulf.hansson@...aro.org>
Cc: "Rafael J . Wysocki" <rjw@...ysocki.net>,
Kevin Hilman <khilman@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Linux PM <linux-pm@...r.kernel.org>
Subject: Re: [PATCH] PM: domains: Shrink locking area of the gpd_list_lock
Quoting Ulf Hansson (2021-06-22 09:27:09)
> On Mon, 21 Jun 2021 at 22:10, Stephen Boyd <swboyd@...omium.org> wrote:
> > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > index b6a782c31613..18063046961c 100644
> > --- a/drivers/base/power/domain.c
> > +++ b/drivers/base/power/domain.c
> > @@ -1984,8 +1984,8 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
> >
> > mutex_lock(&gpd_list_lock);
> > list_add(&genpd->gpd_list_node, &gpd_list);
> > - genpd_debug_add(genpd);
> > mutex_unlock(&gpd_list_lock);
> > + genpd_debug_add(genpd);
> >
> > return 0;
> > }
> > @@ -2162,9 +2162,11 @@ static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
> > cp->xlate = xlate;
> > fwnode_dev_initialized(&np->fwnode, true);
> >
> > + mutex_lock(&gpd_list_lock);
>
> By looking at the existing code, the $subject patch makes the behavior
> consistent and fixes the problem by making sure the locks are always
> taken/released in the same order.
>
> However, I have been looking at this before (but never got to the
> point of sending a patch), and I actually think it would be better to
> decouple the two locks instead of further combining them.
>
> In other words, we shouldn't lock/unlock the &gpd_list_lock here in
> this function. Of course, that also means we need to fix up the code
> in of_genpd_del_provider() accordingly.
Yes, I was wondering why this list lock is used here at all. It seems
to be a substitute for calling genpd_lock()? I opted to just push the
list lock down as far as possible to fix the problem, which is that we
hold it over the calls into the OPP layer.
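
To make that concrete, here is roughly the shape of
of_genpd_add_provider_simple() before this patch (a simplified sketch,
not the literal upstream code): gpd_list_lock is held across the call
into the OPP layer, so whatever locks the OPP code takes end up nesting
inside gpd_list_lock.

	mutex_lock(&gpd_list_lock);

	if (!genpd_present(genpd))
		goto unlock;

	genpd->dev.of_node = np;

	/* OPP table parsing runs with gpd_list_lock still held */
	if (genpd->set_performance_state) {
		ret = dev_pm_opp_of_add_table(&genpd->dev);
		if (ret)
			goto unlock;
	}

	ret = genpd_add_provider(np, genpd_xlate_simple, genpd);

unlock:
	mutex_unlock(&gpd_list_lock);

Pushing the lock down means the OPP calls run without gpd_list_lock
held, so there is no ordering dependency between gpd_list_lock and
whatever the OPP layer locks internally.
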
If I've read the code correctly, there is no reason to grab the
gpd_list_lock here in genpd_add_provider(), because we grab
of_genpd_mutex and that is what protects the of_genpd_providers list
everywhere else. Is that right? Put another way, this hunk of the patch
can be dropped; that addresses your concern and leaves nothing more to
do.
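
Concretely, dropping the hunk would leave genpd_add_provider() looking
roughly like this (a sketch based on the quoted code, not necessarily
the exact upstream function), with of_genpd_mutex being the only lock
taken here, i.e. the same lock that protects the of_genpd_providers
list in of_genpd_del_provider() and genpd_get_from_provider():

static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
			      void *data)
{
	struct of_genpd_provider *cp;

	cp = kzalloc(sizeof(*cp), GFP_KERNEL);
	if (!cp)
		return -ENOMEM;

	cp->node = of_node_get(np);
	cp->data = data;
	cp->xlate = xlate;
	fwnode_dev_initialized(&np->fwnode, true);

	/* Only the providers list is touched, so no gpd_list_lock */
	mutex_lock(&of_genpd_mutex);
	list_add(&cp->link, &of_genpd_providers);
	mutex_unlock(&of_genpd_mutex);

	pr_debug("Added domain provider from %pOF\n", np);

	return 0;
}
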
>
>
> > mutex_lock(&of_genpd_mutex);
> > list_add(&cp->link, &of_genpd_providers);
> > mutex_unlock(&of_genpd_mutex);
> > + mutex_unlock(&gpd_list_lock);
> > pr_debug("Added domain provider from %pOF\n", np);
> >
> > return 0;
> > @@ -2314,8 +2314,6 @@ int of_genpd_add_provider_onecell(struct device_node *np,
> > }
> > }
> >
> > - mutex_unlock(&gpd_list_lock);
> > -
> > return ret;
> > }
> > EXPORT_SYMBOL_GPL(of_genpd_add_provider_onecell);
>
> I will continue to have a look at this and provide some more comments
> asap, but overall the change is a step in the right direction.
>
> Possibly, we may even consider applying it as is and working on the
> things I pointed out above as improvements on top. Let's see, give me
> a day or so.
>
Ok sure.