Message-ID: <CAE-0n51Ut296M2ZetuzXGpX32pS11bbWzfcbaFfqNxgSjzafJw@mail.gmail.com>
Date: Thu, 7 Sep 2023 13:11:17 -0700
From: Stephen Boyd <swboyd@...omium.org>
To: Mika Westerberg <mika.westerberg@...ux.intel.com>
Cc: Hans de Goede <hdegoede@...hat.com>,
Mark Gross <markgross@...nel.org>,
linux-kernel@...r.kernel.org, patches@...ts.linux.dev,
platform-driver-x86@...r.kernel.org,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
Prashant Malani <pmalani@...omium.org>
Subject: Re: [PATCH v2 1/3] platform/x86: intel_scu_ipc: Check status after
timeout in busy_loop()

Quoting Mika Westerberg (2023-09-06 22:35:13)
> On Wed, Sep 06, 2023 at 11:09:41AM -0700, Stephen Boyd wrote:
> > It's possible for the polling loop in busy_loop() to get scheduled away
> > for a long time.
> >
> > status = ipc_read_status(scu); // status = IPC_STATUS_BUSY
> > <long time scheduled away>
> > if (!(status & IPC_STATUS_BUSY))
> >
> > If this happens, then the status bit could change while the task is
> > scheduled away and this function would never read the status again after
> > timing out. Instead, the function will return -ETIMEDOUT when it's
> > possible that scheduling didn't work out and the status bit was cleared.
> > Bit polling code should always check the bit being polled one more time
> > after the timeout in case this happens.
> >
> > Fix this by reading the status once more after the while loop breaks.
> >
> > Cc: Prashant Malani <pmalani@...omium.org>
> > Cc: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
> > Fixes: e7b7ab3847c9 ("platform/x86: intel_scu_ipc: Sleeping is fine when polling")
> > Signed-off-by: Stephen Boyd <swboyd@...omium.org>
> > ---
> >
> > This changed sufficiently from the previous round so I didn't carry over any tags.
> >
> > drivers/platform/x86/intel_scu_ipc.c | 11 +++++++----
> > 1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
> > index 6851d10d6582..b2a2de22b8ff 100644
> > --- a/drivers/platform/x86/intel_scu_ipc.c
> > +++ b/drivers/platform/x86/intel_scu_ipc.c
> > @@ -232,18 +232,21 @@ static inline u32 ipc_data_readl(struct intel_scu_ipc_dev *scu, u32 offset)
> > static inline int busy_loop(struct intel_scu_ipc_dev *scu)
> > {
> > unsigned long end = jiffies + IPC_TIMEOUT;
> > + u32 status;
> >
> > do {
> > - u32 status;
> > -
> > status = ipc_read_status(scu);
> > if (!(status & IPC_STATUS_BUSY))
> > - return (status & IPC_STATUS_ERR) ? -EIO : 0;
> > + goto not_busy;
> >
> > usleep_range(50, 100);
> > } while (time_before(jiffies, end));
> >
> > - return -ETIMEDOUT;
> > + status = ipc_read_status(scu);
>
> Does the issue happen again if we get scheduled away here for a long
> time? ;-)

Given the smiley I'll assume you're joking, but to clarify: the issue
can't happen again here, because we've already waited at least
IPC_TIMEOUT jiffies, and possibly quite a bit more, so getting scheduled
away again is a non-issue. If the status still reads busy at this point,
it's a guaranteed timeout.
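
For reference, here is roughly what the whole function looks like with
the patch applied (a sketch; the quoted diff above is cut off, so the
tail after the final read is my reconstruction of the not_busy path
based on the hunks shown):

	static inline int busy_loop(struct intel_scu_ipc_dev *scu)
	{
		unsigned long end = jiffies + IPC_TIMEOUT;
		u32 status;

		do {
			status = ipc_read_status(scu);
			if (!(status & IPC_STATUS_BUSY))
				goto not_busy;

			usleep_range(50, 100);
		} while (time_before(jiffies, end));

		/*
		 * Read once more: the loop may have been scheduled away
		 * past 'end' right after a read that still showed BUSY.
		 */
		status = ipc_read_status(scu);
		if (status & IPC_STATUS_BUSY)
			return -ETIMEDOUT;

	not_busy:
		return (status & IPC_STATUS_ERR) ? -EIO : 0;
	}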
>
> Regardless, I'm fine with this as is but if you make any changes, I
> would prefer see readl_busy_timeout() used here instead (as was in the
> previous version).

We can't use readl_busy_timeout() (you mean readl_poll_timeout(),
right?) because it implements the timeout with timekeeping, and we don't
know whether this function is called from suspend paths after
timekeeping has been suspended, or from early boot paths where
timekeeping hasn't started yet.

We could use readl_poll_timeout_atomic(), in which case the usleep would
become a udelay. I'm not sure it's acceptable, though, to busy-wait for
50 microseconds instead of intentionally scheduling away like the
usleep_range() call does.
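
For example (a sketch, not tested): the generic read_poll_timeout_atomic()
form from <linux/iopoll.h> takes an accessor plus its arguments, so it
could wrap ipc_read_status() directly. If I read iopoll.h right, it also
does one final read after the deadline itself, so the race this patch
fixes is handled internally; the trade-off is the udelay() busy-wait
between reads:

	#include <linux/iopoll.h>

	static inline int busy_loop(struct intel_scu_ipc_dev *scu)
	{
		u32 status;
		int err;

		/*
		 * Poll every 50 us for up to IPC_TIMEOUT (converted from
		 * jiffies). No sleeping, no timekeeping dependency.
		 */
		err = read_poll_timeout_atomic(ipc_read_status, status,
					       !(status & IPC_STATUS_BUSY),
					       50, jiffies_to_usecs(IPC_TIMEOUT),
					       false, scu);
		if (err)
			return err;

		return (status & IPC_STATUS_ERR) ? -EIO : 0;
	}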