Message-ID: <CAD=FV=U8zaZCTrZvtipSmaQL8NGg+4aNpyP=KhQ5EYioKovnYA@mail.gmail.com>
Date:   Thu, 5 Mar 2020 14:20:38 -0800
From:   Doug Anderson <dianders@...omium.org>
To:     Maulik Shah <mkshah@...eaurora.org>
Cc:     Stephen Boyd <swboyd@...omium.org>,
        Matthias Kaehlcke <mka@...omium.org>,
        Evan Green <evgreen@...omium.org>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        Andy Gross <agross@...nel.org>,
        Rajendra Nayak <rnayak@...eaurora.org>,
        Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH v12 4/4] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes
 before flushing new data

Hi,

On Thu, Mar 5, 2020 at 9:07 AM Maulik Shah <mkshah@...eaurora.org> wrote:
>
> TCSes may still contain previously programmed data when rpmh_flush()
> is called. This can cause the old data to trigger along with the newly
> flushed data.
>
> Fix this by invalidating the SLEEP and WAKE TCSes before new data is
> flushed.
>
> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
> Signed-off-by: Maulik Shah <mkshah@...eaurora.org>
> ---
>  drivers/soc/qcom/rpmh.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index 1951f6a..63364ce 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -472,6 +472,11 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>                 return 0;
>         }
>
> +       /* Invalidate the TCSes first to avoid stale data */
> +       do {
> +               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
> +       } while (ret == -EAGAIN);
> +
>         /* First flush the cached batch requests */
>         ret = flush_batch(ctrlr);
>         if (ret)

I think you should make this patch 3/4 instead of 4/4, and then:

1. In this patch remove the call to rpmh_rsc_invalidate() in
rpmh_invalidate().  You've already marked things "dirty" in
invalidate_batch() so no need to actually program the hardware--it'll
happen in the flush.

2. In patch 4/4 (the flushing patch), add a call to rpmh_flush() from
rpmh_invalidate() when you're in non-OSI mode.  Presumably you'll need a
spinlock around the rpmh_flush() call?  Roughly like the sketch below.
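
Totally untested, and I'm guessing at names that aren't in mainline yet
(rpmh_ctrlr_is_osi() standing in for however your series detects OSI
mode, and ctrlr->cache_lock for the lock), so treat this as a sketch of
the idea rather than the actual code:

int rpmh_invalidate(const struct device *dev)
{
	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
	unsigned long flags;
	int ret = 0;

	/*
	 * Drop the cached batch requests and mark the controller dirty;
	 * no need to poke the hardware here anymore--the flush will do it.
	 */
	invalidate_batch(ctrlr);

	/*
	 * In non-OSI mode nobody will flush on our behalf later, so write
	 * out the (now smaller) sleep/wake sets right away.
	 * rpmh_ctrlr_is_osi() is a placeholder for whatever check the
	 * series actually uses.
	 */
	if (!rpmh_ctrlr_is_osi(ctrlr)) {
		spin_lock_irqsave(&ctrlr->cache_lock, flags);
		ret = rpmh_flush(ctrlr);
		spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
	}

	return ret;
}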


The end result of that will be that rpmh_invalidate() will properly
leave the non-batch sleep/wake sets programmed.


-Doug
