Message-ID: <Y4coL74qQX80TNaT@wendy>
Date: Wed, 30 Nov 2022 09:53:51 +0000
From: Conor Dooley <conor.dooley@...rochip.com>
To: Conor Dooley <conor@...nel.org>, <u.kleine-koenig@...gutronix.de>
CC: Uwe Kleine-König
<u.kleine-koenig@...gutronix.de>,
Thierry Reding <thierry.reding@...il.com>,
Daire McNamara <daire.mcnamara@...rochip.com>,
<linux-kernel@...r.kernel.org>, <linux-pwm@...r.kernel.org>,
<linux-riscv@...ts.infradead.org>
Subject: Re: [PATCH v12 1/2] pwm: add microchip soft ip corePWM driver
Hey Uwe,
On Mon, Nov 21, 2022 at 03:29:39PM +0000, Conor Dooley wrote:
> On Thu, Nov 17, 2022 at 10:03:13PM +0000, Conor Dooley wrote:
> > On Thu, Nov 17, 2022 at 10:04:33PM +0100, Uwe Kleine-König wrote:
> > > On Thu, Nov 17, 2022 at 05:38:26PM +0000, Conor Dooley wrote:
> > > > On Thu, Nov 17, 2022 at 05:49:50PM +0100, Uwe Kleine-König wrote:
> > > > > Hello Conor,
> > > >
> > > > Hello Uwe,
> > > >
> > > > > On Thu, Nov 10, 2022 at 09:35:12AM +0000, Conor Dooley wrote:
> > > > > > [...]
> > > > > > +
> > > > > > +static void mchp_core_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm,
> > > > > > +				 bool enable, u64 period)
> > > > > > +{
> > > > > > +	struct mchp_core_pwm_chip *mchp_core_pwm = to_mchp_core_pwm(chip);
> > > > > > +	u8 channel_enable, reg_offset, shift;
> > > > > > +
> > > > > > +	/*
> > > > > > +	 * There are two adjacent 8 bit control regs, the lower reg controls
> > > > > > +	 * 0-7 and the upper reg 8-15. Check if the pwm is in the upper reg
> > > > > > +	 * and if so, offset by the bus width.
> > > > > > +	 */
> > > > > > +	reg_offset = MCHPCOREPWM_EN(pwm->hwpwm >> 3);
> > > > > > +	shift = pwm->hwpwm & 7;
> > > > > > +
> > > > > > +	channel_enable = readb_relaxed(mchp_core_pwm->base + reg_offset);
> > > > > > +	channel_enable &= ~(1 << shift);
> > > > > > +	channel_enable |= (enable << shift);
> > > > > > +
> > > > > > +	writel_relaxed(channel_enable, mchp_core_pwm->base + reg_offset);
> > > > > > +	mchp_core_pwm->channel_enabled &= ~BIT(pwm->hwpwm);
> > > > > > +	mchp_core_pwm->channel_enabled |= enable << pwm->hwpwm;
> > > > > > +
> > > > > > +	/*
> > > > > > +	 * Notify the block to update the waveform from the shadow registers.
> > > > > > +	 * The updated values will not appear on the bus until they have been
> > > > > > +	 * applied to the waveform at the beginning of the next period. We must
> > > > > > +	 * write these registers and wait for them to be applied before
> > > > > > +	 * considering the channel enabled.
> > > > > > +	 * If the delay is under 1 us, sleep for at least 1 us anyway.
> > > > > > +	 */
> > > > > > +	if (mchp_core_pwm->sync_update_mask & (1 << pwm->hwpwm)) {
> > > > > > +		u64 delay;
> > > > > > +
> > > > > > +		delay = div_u64(period, 1000u) ? : 1u;
> > > > > > +		writel_relaxed(1U, mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD);
> > > > > > +		usleep_range(delay, delay * 2);
> > > > > > +	}
> > > > >
> > > > > In some cases the delay could be prevented, e.g. when going from one
> > > > > disabled state to another. If you don't want to complicate the driver
> > > > > here, maybe point it out in a comment at least?
> > > >
> > > > Maybe this is my naivety talking, but I'd rather wait. Is there not the
> > > > chance that we re-enter pwm_apply() before the update has actually gone
> > > > through?
> > >
> > > My idea was to do something like that:
> > >
> > > int mchp_core_pwm_apply(....)
> > > {
> > > 	if (mchp_core_pwm->sync_update_mask & (1 << pwm->hwpwm)) {
> > > 		/*
> > > 		 * We're still waiting for an update, don't
> > > 		 * interfere until it's completed.
> > > 		 */
> > > 		while (readl_relaxed(mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD)) {
> > > 			cpu_relax();
> > > 			if (waited_unreasonably_long())
> > > 				return -ETIMEDOUT;
> > > 		}
> > > 	}
> > >
> > > 	update_period_and_duty(...);
> > > 	return 0;
> > > }
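(Coming back to this while bumping: the disabled-to-disabled case you mention
above could presumably be short-circuited before any waiting with an untested
check along these lines, reusing the channel_enabled bookkeeping from the
patch:

	/* Channel is off and is staying off, so there is nothing to wait for. */
	if (!state->enabled && !(mchp_core_pwm->channel_enabled & BIT(pwm->hwpwm)))
		return 0;

where state is the pwm_state passed to apply(). Happy to fold something like
that in if it is what you had in mind.)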
>
> So I was doing some fiddling, and the following works reasonably well:
> if (mchp_core_pwm->sync_update_mask & (1 << pwm->hwpwm)) {
> 	u32 delay = MCHPCOREPWM_TIMEOUT_US;
> 	u32 sync_upd;
> 	int ret;
>
> 	writel_relaxed(1u, mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD);
>
> 	ret = read_poll_timeout(readl, sync_upd, !sync_upd, delay/100, delay,
> 				false, mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD);
> 	if (ret)
> 		dev_dbg(mchp_core_pwm->chip.dev,
> 			"timed out waiting for shadow register sync\n");
> }
>
> but...
>
> > > This way you don't have to wait at all if the calls to pwm_apply() are
> > > infrequent. Of course this only works this way if you can determine
> > > whether there is a pending update.
> >
> > Ah I think I get what you mean now about waiting for completion &
> > reading the bit. I don't know off the top of my head if that bit is
> > readable. Docs say that they're R/W but I don't know if that means that
> > an AXI read works or if the value is actually readable. I'll try
> > something like this if I can.
>
> ...it does not implement what I think you suggested & comes with the
> drawback of inconsistent behaviour depending on whether the timeout is
> hit or not.
>
> Instead, waiting in apply(), as you suggested, & get_state() looks to be the
> better option, using the same sort of logic as above, say:
> static int mchp_core_pwm_wait_for_sync_update(struct mchp_core_pwm_chip *mchp_core_pwm,
> 					      unsigned int channel)
> {
> 	int ret;
>
> 	/*
> 	 * If a shadow register is used for this PWM channel, and iff there is
> 	 * a pending update to the waveform, we must wait for it to be applied
> 	 * before attempting to read its state, as reading the registers yields
> 	 * the currently implemented settings; the new ones only become readable
> 	 * once the current period has ended.
> 	 *
> 	 * Rather large delays are possible, in the seconds, so to avoid
> 	 * waiting around for **too** long, cap the wait at 100 ms.
> 	 */
> 	if (mchp_core_pwm->sync_update_mask & (1 << channel)) {
> 		u32 delay = MCHPCOREPWM_TIMEOUT_US;
> 		u32 sync_upd;
>
> 		writel_relaxed(1u, mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD);
>
> 		ret = read_poll_timeout(readl, sync_upd, !sync_upd, delay/100, delay,
> 					false, mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD);
> 		if (ret)
> 			return -ETIMEDOUT;
> 	}
>
> 	return 0;
> }
>
> I think that strikes a good balance? We return quickly & don't block
> the caller, but simultaneously try to prevent them from either trying to
> apply new settings or get the current settings until the last request
> has gone through?
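For clarity, the way I am picturing wiring that helper in is roughly the
below (untested):

	/* apply(): bail out if the previous update never landed */
	ret = mchp_core_pwm_wait_for_sync_update(mchp_core_pwm, pwm->hwpwm);
	if (ret)
		return ret;
	/* ...then program period/duty & enable as before... */

	/* get_state(): wait too, so that what we report is what the hardware is doing */
	mchp_core_pwm_wait_for_sync_update(mchp_core_pwm, pwm->hwpwm);
	/* ...then read back the registers as before... */

with the open question below being what get_state() should do when that wait
times out.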
>
> get_state() returns void though, is it valid behaviour to wait for the
> timeout there?
> I had a check in the core code and found some places where the call site
> looks like:
> 	struct pwm_state s1, s2;
> 	chip->ops->get_state(chip, pwm, &s1);
> In this case, exiting early would leave us with a completely wrong
> idea of the state, if it were to time out.
>
> Either way, it seems like we would be misleading the caller of
> get_state() - perhaps the way around that is to do the wait & then
> just carry on with get_state()?
> In that scenario, you'd get the new settings where possible and the old ones
> otherwise.
> Returning if the timeout is hit would give you the new settings where possible
> & otherwise you'd get whatever was passed to get_state().
> I'm not really sure which of those two situations would be preferred?
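To make those two options concrete, in get_state() I mean roughly either:

	/* option a: wait, but carry on & read the registers even on timeout */
	mchp_core_pwm_wait_for_sync_update(mchp_core_pwm, pwm->hwpwm);

or:

	/* option b: give up on timeout, leaving *state as the caller passed it */
	if (mchp_core_pwm_wait_for_sync_update(mchp_core_pwm, pwm->hwpwm))
		return;

Both untested, just to illustrate the difference.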
Apologies for bumping this, but I was wondering if you had any thoughts
on the above? I'm not sure which is the lesser evil here (or if I have
misunderstood something).
Thanks,
Conor.