Message-ID: <CAE=gft7gPS+hhnDP+uTn3is6s9=Nspbb4PL0bZ025Tq1Zpth8Q@mail.gmail.com>
Date: Tue, 4 Feb 2020 16:31:26 -0800
From: Evan Green <evgreen@...omium.org>
To: Maulik Shah <mkshah@...eaurora.org>
Cc: Bjorn Andersson <bjorn.andersson@...aro.org>,
Andy Gross <agross@...nel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Stephen Boyd <swboyd@...omium.org>,
Doug Anderson <dianders@...omium.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH 2/3] soc: qcom: rpmh: Update rpm_msgs offset address and
add list_del
On Mon, Feb 3, 2020 at 10:14 PM Maulik Shah <mkshah@...eaurora.org> wrote:
>
> rpm_msgs are copied into contiguously allocated memory during write_batch.
> Update the request pointer to correctly point to the designated area for
> rpm_msgs.
>
> While at it, also add the missing list_del before freeing rpm_msgs.
>
> Signed-off-by: Maulik Shah <mkshah@...eaurora.org>
> ---
> drivers/soc/qcom/rpmh.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index c3d6f00..04c7805 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -65,7 +65,7 @@ struct cache_req {
> struct batch_cache_req {
> struct list_head list;
> int count;
> - struct rpmh_request rpm_msgs[];
> + struct rpmh_request *rpm_msgs;
> };
>
> static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
> @@ -327,8 +327,10 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
> unsigned long flags;
>
> spin_lock_irqsave(&ctrlr->cache_lock, flags);
> - list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
> + list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) {
> + list_del(&req->list);
> kfree(req);
> + }
> INIT_LIST_HEAD(&ctrlr->batch_cache);
Hm, I don't get it. list_for_each_entry_safe ensures you can traverse
the list while freeing entries behind you. ctrlr->batch_cache is left
with stale pointers afterwards, but it's re-initialized while the lock
is still held, so nothing can observe it in that state. From my
reading, there doesn't seem to be anything wrong with the current
code. Can you elaborate on the bug you found?
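
Just to spell out my reading of the existing code (same shape as
invalidate_batch() today, comments mine):

	struct batch_cache_req *req, *tmp;

	/* 'tmp' caches the next entry before the loop body runs, so
	 * kfree(req) without a list_del() doesn't break the walk. */
	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
		kfree(req);

	/* The list head still points at freed memory here, but we hold
	 * cache_lock, and this re-init wipes it before anyone else can
	 * look at it. */
	INIT_LIST_HEAD(&ctrlr->batch_cache);
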
> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> }
> @@ -377,10 +379,11 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
> return -ENOMEM;
>
> req = ptr;
> + rpm_msgs = ptr + sizeof(*req);
> compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
>
> req->count = count;
> - rpm_msgs = req->rpm_msgs;
> + req->rpm_msgs = rpm_msgs;
I don't really understand what this is fixing either. Can you explain?
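
Unless I'm misreading the layout, with the flexible array member
req->rpm_msgs already points at the byte right after the struct, which
is the same address the new line computes by hand. Roughly what I mean
(a sketch, not the actual rpmh code, and ignoring the compls area):

	/* Sketch with the *original* struct (flexible array member). */
	struct batch_cache_req *req;
	struct rpmh_request *a, *b;
	int count = 4;		/* arbitrary for the sketch */
	void *ptr;

	ptr = kzalloc(sizeof(*req) + count * sizeof(*a), GFP_ATOMIC);
	req = ptr;

	a = req->rpm_msgs;         /* flexible array member         */
	b = ptr + sizeof(*req);    /* manual offset from this patch */

	/* a == b (barring padding quirks), so the old code already
	 * pointed rpm_msgs at the area right after the struct. */

So I'm not seeing which case the old layout got wrong.
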