Message-ID: <CAE-0n52wAqsmm4cs6JX2W2G10VxjLzocXVmF9c_GC+52Fi4djQ@mail.gmail.com>
Date: Tue, 5 Sep 2023 17:24:29 -0500
From: Stephen Boyd <swboyd@...omium.org>
To: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Cc: Mika Westerberg <mika.westerberg@...ux.intel.com>,
Hans de Goede <hdegoede@...hat.com>,
Mark Gross <markgross@...nel.org>,
linux-kernel@...r.kernel.org, patches@...ts.linux.dev,
platform-driver-x86@...r.kernel.org,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
Prashant Malani <pmalani@...omium.org>
Subject: Re: [PATCH 1/3] platform/x86: intel_scu_ipc: Check status after
timeouts in busy_loop()
Quoting Andy Shevchenko (2023-08-31 06:53:14)
> On Wed, Aug 30, 2023 at 06:14:01PM -0700, Stephen Boyd wrote:
> > It's possible for the polling loop in busy_loop() to get scheduled away
> > for a long time.
> >
> > status = ipc_read_status(scu);
> > <long time scheduled away>
> > if (!(status & IPC_STATUS_BUSY))
> >
> > If this happens, then the status bit could change and this function
> > would never test it again after checking the jiffies against the timeout
> > limit. Polling code should check the condition one more time after the
> > timeout in case this happens.
> >
> > The read_poll_timeout() helper implements this logic, and is shorter, so
> > simply use that helper here.
>
> I don't remember offhand, but on some older Intel hardware this might have
> been called during early boot stages where ktime() is not functional yet.
>
> Is this still the case here?
I have no idea if this gets called during those early stages. What about
suspend/resume though? Timekeeping could be suspended at that point, in
which case we can't measure the timeout with ktime at all.
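For reference, the conversion this patch makes is roughly the following
(a minimal sketch, not the exact hunk; ipc_read_status() and
IPC_STATUS_BUSY are from the existing driver, while the 100us poll
interval and the IPC_TIMEOUT conversion are assumptions on my side):

#include <linux/iopoll.h>

/* Sketch only: poll until the SCU clears the busy bit. */
static inline int busy_loop(struct intel_scu_ipc_dev *scu)
{
	u32 status;

	/*
	 * read_poll_timeout() re-reads the status one last time after the
	 * deadline, so being scheduled away mid-loop cannot produce a
	 * spurious -ETIMEDOUT.
	 */
	return read_poll_timeout(ipc_read_status, status,
				 !(status & IPC_STATUS_BUSY),
				 100, jiffies_to_usecs(IPC_TIMEOUT),
				 false, scu);
}

read_poll_timeout() measures the deadline with ktime_get() internally,
which is exactly why the early-boot/suspend question matters here.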
I can rework this patch to simply recheck the busy bit after the timeout,
so we don't have to figure out whether the code is called early or from
the suspend path. Something like the sketch below.
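Rough idea, keeping the existing jiffies-based loop (a sketch from memory
of the current busy_loop(); the sleep range and the IPC_TIMEOUT name are
assumptions, and error-bit handling is omitted):

static inline int busy_loop(struct intel_scu_ipc_dev *scu)
{
	unsigned long end = jiffies + IPC_TIMEOUT;
	u32 status;

	do {
		status = ipc_read_status(scu);
		if (!(status & IPC_STATUS_BUSY))
			return 0;

		usleep_range(50, 100);
	} while (time_before(jiffies, end));

	/*
	 * Re-read once more: the loop may have been scheduled away past
	 * the deadline between reading the status and testing it.
	 */
	status = ipc_read_status(scu);
	if (!(status & IPC_STATUS_BUSY))
		return 0;

	return -ETIMEDOUT;
}

That avoids ktime entirely, so it behaves the same whether or not
timekeeping is up.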