Message-ID: <Zsb9CRoQUHEQKT_V@black.fi.intel.com>
Date: Thu, 22 Aug 2024 11:55:37 +0300
From: Raag Jadav <raag.jadav@...el.com>
To: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Cc: ukleinek@...nel.org, mika.westerberg@...ux.intel.com,
jarkko.nikula@...ux.intel.com, linux-pwm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] pwm: lpss: wait_for_update() before configuring pwm
On Tue, Aug 20, 2024 at 12:56:04PM +0300, Andy Shevchenko wrote:
> On Tue, Aug 20, 2024 at 08:50:20AM +0300, Raag Jadav wrote:
> > On Mon, Aug 19, 2024 at 11:21:51AM +0300, Andy Shevchenko wrote:
> > > On Mon, Aug 19, 2024 at 01:34:12PM +0530, Raag Jadav wrote:
> > > > Wait for SW_UPDATE bit to clear before configuring pwm channel instead of
> > >
> > > PWM
> > >
> > > > failing right away, which will reduce failure rates on early access.
> > >
> > > So, what is the problem this patch solves (or is trying to solve)?
> >
> > Fewer failures with less code, so just a minor improvement.
>
> It's not equivalent code, as I mentioned below.
> So, if it's just a "cleanup", I do not think we want it, as the code works
> now and has no penalties.
>
> > > Second, there are two important behavioural changes:
> > > - error code change (it's visible to user space);
> >
> > This function is already used in this path just a few lines below.
>
> Yes, I know, but it is used in a slightly different context.
>
> > > - an additional, quite a long by the way, timeout.
> > >
> > > The second one worries me a lot, as it might add up to 0.5 s to the boot
> > > time per PWM in question.
> >
> > On the contrary, having a working set of PWMs would be a relatively
> > rewarding experience IMHO.
>
> I'm not sure what this patch tries to fix. Was something not working before?
> Did something change on real hardware that makes this patch worth applying?
> None of these questions has been answered in the commit message.
>
> So, as long as this is considered a pure cleanup, here is a formal NAK from
> me, as this IP block is not stateless and may lead to freezes. Hence the rule
> of thumb: "do not touch the working things".
Fair enough. However, I was able to test it on Merrifield, Bay Trail and Broxton
(both PCI and platform counterparts) without such problems, if it is helpful in
any way.
Raag