Message-ID: <159561988523.3847286.14763422711224252201@swboyd.mtv.corp.google.com>
Date: Fri, 24 Jul 2020 12:44:45 -0700
From: Stephen Boyd <swboyd@...omium.org>
To: Doug Anderson <dianders@...omium.org>
Cc: Andy Gross <agross@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Maulik Shah <mkshah@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>
Subject: Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
Quoting Doug Anderson (2020-07-24 10:42:55)
> Hi,
>
> On Wed, Jul 22, 2020 at 6:01 PM Stephen Boyd <swboyd@...omium.org> wrote:
> > diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> > index ef60e790a750..9a325bac58fe 100644
> > --- a/drivers/soc/qcom/rpmh-internal.h
> > +++ b/drivers/soc/qcom/rpmh-internal.h
> > @@ -118,6 +119,7 @@ struct rsc_drv {
> > struct tcs_group tcs[TCS_TYPE_NR];
> > DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
> > spinlock_t lock;
> > + wait_queue_head_t tcs_wait;
>
> nit: this structure has a kernel-doc comment above it describing the
> elements. Could you add yours?
Sure.
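Probably a one-liner next to the other members, something like this
(exact wording not final):

```c
 * @tcs_wait:   Wait queue used to wait for @tcs_in_use to free up a
 *              slot
```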
>
>
> > struct rpmh_ctrlr client;
> > };
> >
> > diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> > index 076fd27f3081..6c758b052c95 100644
> > --- a/drivers/soc/qcom/rpmh-rsc.c
> > +++ b/drivers/soc/qcom/rpmh-rsc.c
> > @@ -19,6 +19,7 @@
> > #include <linux/platform_device.h>
> > #include <linux/slab.h>
> > #include <linux/spinlock.h>
> > +#include <linux/wait.h>
> >
> > #include <soc/qcom/cmd-db.h>
> > #include <soc/qcom/tcs.h>
> > @@ -444,6 +445,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
> > */
> > if (!drv->tcs[ACTIVE_TCS].num_tcs)
> > enable_tcs_irq(drv, i, false);
> > + wake_up(&drv->tcs_wait);
> > spin_unlock(&drv->lock);
>
> nit: I think it's slightly better to do the wake_up() after the
> spin_unlock(), no? The first thing the other task will do is to try
> to grab the spinlock and we might as well give it a chance of
> succeeding without looping. I don't see any reason why we'd need to
> be holding the lock while calling wake_up().
Right, that's better.
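I'll move the wake_up() outside the lock so the woken task has a chance
to grab the spinlock right away, i.e. (untested):

```c
	if (!drv->tcs[ACTIVE_TCS].num_tcs)
		enable_tcs_irq(drv, i, false);
	spin_unlock(&drv->lock);
	wake_up(&drv->tcs_wait);
```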
>
>
> > if (req)
> > rpmh_tx_done(req, err);
> > @@ -562,44 +564,59 @@ static int find_free_tcs(struct tcs_group *tcs)
> > return -EBUSY;
> > }
> >
> > +static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs,
> > + const struct tcs_request *msg)
>
> nit: I know this is a short function and kernel convention doesn't
> strictly require comments in front of all functions. However, every
> other function in this file has a comment and I had a really hard time
> dealing with the rpmh-rsc code before the comments. Could you add one
> for your function, even if it's short? One thing that would be nice
> to note is that the only error it returns is -EBUSY. See below.
Sure, I'll write up some kernel-doc.
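Roughly this, modulo wording (untested):

```c
/**
 * claim_tcs_for_req() - Claim a TCS slot for a request.
 * @drv: The controller.
 * @tcs: The tcs_group used for ACTIVE_ONLY transfers.
 * @msg: The data to be sent.
 *
 * Claim a free slot in the TCS for this request. The only error this
 * function returns is -EBUSY, when no slot is free.
 *
 * Return: The id of the claimed tcs, or -EBUSY if a tcs couldn't be
 * claimed.
 */
```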
>
> > - if (ret)
> > - goto unlock;
> >
> > - ret = find_free_tcs(tcs);
> > - if (ret < 0)
> > - goto unlock;
> > - tcs_id = ret;
> > + wait_event_lock_irq(drv->tcs_wait,
> > + (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
>
> Even though claim_tcs_for_req() only returns 0 or -EBUSY today (IOW it
> never returns error codes other than -EBUSY), should we handle it? If
> we don't, claim_tcs_for_req() should be very clear that it shouldn't
> return any errors other than -EBUSY.
Do you mean you want to change it to be

	(tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY

instead of >= 0? It should return the tcs_id that was claimed, not just
0 or -EBUSY.
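For context, the full call site would then read (untested):

```c
	wait_event_lock_irq(drv->tcs_wait,
			    (tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY,
			    drv->lock);
```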