[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <99b6d095-c60d-2bb4-4f87-d3fcb7e1b135@seco.com>
Date: Tue, 29 Jun 2021 18:26:46 -0400
From: Sean Anderson <sean.anderson@...o.com>
To: Uwe Kleine-König <u.kleine-koenig@...gutronix.de>
Cc: linux-pwm@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org,
Emil Lenngren <emil.lenngren@...il.com>,
michal.simek@...inx.com, Alvaro Gamez <alvaro.gamez@...ent.com>,
Thierry Reding <thierry.reding@...il.com>,
kernel@...gutronix.de, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v4 3/3] pwm: Add support for Xilinx AXI Timer
On 6/29/21 6:21 PM, Sean Anderson wrote:
>
>
> On 6/29/21 4:51 PM, Uwe Kleine-König wrote:
>> Hello Sean,
>>
>> On Tue, Jun 29, 2021 at 02:01:31PM -0400, Sean Anderson wrote:
>>> On 6/29/21 4:31 AM, Uwe Kleine-König wrote:
>>> > Hello Sean,
>>> >
>>> > On Mon, Jun 28, 2021 at 01:41:43PM -0400, Sean Anderson wrote:
>>> >> On 6/28/21 1:20 PM, Uwe Kleine-König wrote:
>>> >> > On Mon, Jun 28, 2021 at 12:35:19PM -0400, Sean Anderson wrote:
>>> >> >> On 6/28/21 12:24 PM, Uwe Kleine-König wrote:
>>> >> >> > On Mon, Jun 28, 2021 at 11:50:33AM -0400, Sean Anderson wrote:
>>> >> >> > > On 6/27/21 2:19 PM, Uwe Kleine-König wrote:
>>> >> >> > > > On Fri, Jun 25, 2021 at 01:46:26PM -0400, Sean Anderson wrote:
>>> >> >> > > IMO, this is the best way to prevent surprising results in the API.
>>> >> >> >
>>> >> >> > I think it's not possible in practise to refuse "near" misses and every
>>> >> >> > definition of "near" is in some case ridiculous. Also if you consider
>>> >> >> > the pwm_round_state() case you don't want to refuse any request to tell
>>> >> >> > as much as possible about your controller's capabilities. And then it's
>>> >> >> > straight forward to let apply behave in the same way to keep complexity
>>> >> >> > low.
>>> >> >> >
>>> >> >> > > The real issue here is that it is impossible to determine the correct
>>> >> >> > > way to round the PWM a priori, and in particular, without considering
>>> >> >> > > both duty_cycle and period. If a consumer requests very small
>>> >> >> > > period/duty cycle which we cannot produce, how should it be rounded?
>>> >> >> >
>>> >> >> > Yeah, because there is no obviously right one, I picked one that is as
>>> >> >> > wrong as the other possibilities but is easy to work with.
>>> >> >> >
>>> >> >> > > Should we just set TLR0=1 and TLR1=0 to give them 66% duty cycle with
>>> >> >> > > the least period? Or should we try and increase the period to better
>>> >> >> > > approximate the % duty cycle? And both of these decisions must be made
>>> >> >> > > knowing both parameters. We cannot (for example) just always round up,
>>> >> >> > > since we may produce a configuration with TLR0 == TLR1, which would
>>> >> >> > > produce 0% duty cycle instead of whatever was requested. Rounding rate
>>> >> >> > > will introduce significant complexity into the driver. Most of the time
>>> >> >> > > if a consumer requests an invalid rate, it is due to misconfiguration
>>> >> >> > > which is best solved by fixing the configuration.
>>> >> >> >
>>> >> >> > In the first step pick the biggest period not bigger than the requested
>>> >> >> > and then pick the biggest duty cycle that is not bigger than the
>>> >> >> > requested and that can be set with the just picked period. That is the
>>> >> >> > behaviour that all new drivers should do. This is somewhat arbitrary but
>>> >> >> > after quite some thought the most sensible in my eyes.
>>> >> >>
>>> >> >> And if there are no periods smaller than the requested period?
>>> >> >
>>> >> > Then return -ERANGE.
>>> >>
>>> >> Ok, so instead of
>>> >>
>>> >> if (cycles < 2 || cycles > priv->max + 2)
>>> >> return -ERANGE;
>>> >>
>>> >> you would prefer
>>> >>
>>> >> if (cycles < 2)
>>> >> return -ERANGE;
>>> >> else if (cycles > priv->max + 2)
>>> >> cycles = priv->max;
>>> >
>>> > The actual calculation is a bit harder to handle TCSR_UDT = 0 but in
>>> > principle, yes, but see below.
>>> >
>>> >> But if we do the above clamping for TLR0, then we have to recalculate
>>> >> the duty cycle for TLR1. Which I guess means doing something like
>>> >>
>>> >> ret = xilinx_timer_tlr_period(priv, &tlr0, tcsr0, state->period);
>>> >> if (ret)
>>> >> return ret;
>>> >>
>>> >> state->duty_cycle = mult_frac(state->duty_cycle,
>>> >> xilinx_timer_get_period(priv, tlr0, tcsr0),
>>> >> state->period);
>>> >>
>>> >> ret = xilinx_timer_tlr_period(priv, &tlr1, tcsr1, state->duty_cycle);
>>> >> if (ret)
>>> >> return ret;
>>> >
>>> > No, you need something like:
>>> >
>>> > /*
>>> > * The multiplication cannot overflow as both priv_max and
>>> > * NSEC_PER_SEC fit into an u32.
>>> > */
>>> > max_period = div64_ul((u64)priv->max * NSEC_PER_SEC, clkrate);
>>> >
>>> > /* cap period to the maximal possible value */
>>> > if (state->period > max_period)
>>> > period = max_period;
>>> > else
>>> > period = state->period;
>>> >
>>> > /* cap duty_cycle to the maximal possible value */
>>> > if (state->duty_cycle > max_period)
>>> > duty_cycle = max_period;
>>> > else
>>> > duty_cycle = state->duty_cycle;
>>>
>>> These caps may increase the % duty cycle.
>>
>> Correct.
>>
>> For some usecases keeping the relative duty cycle might be better, for
>> others it might not. I'm still convinced that in general my solution
>> makes sense, is computationally cheaper and easier to work with.
>
> Can you please describe one of those use cases? Every PWM user I looked
> (grepping for pwm_apply_state and pwm_config) set the duty cycle as a
> percentage of the period, and not as an absolute time. Keeping the high
> time the same while changing the duty cycle runs contrary to the
> assumptions of all of those users.
>
>>>
>>> > period_cycles = period * clkrate / NSEC_PER_SEC;
>>> >
>>> > if (period_cycles < 2)
>>> > return -ERANGE;
>>> >
>>> > duty_cycles = duty_cycle * clkrate / NSEC_PER_SEC;
>>> >
>>> > /*
>>> > * The hardware cannot emit a 100% relative duty cycle, if
>>> > * duty_cycle >= period_cycles is programmed the hardware emits
>>> > * a 0% relative duty cycle.
>>> > */
>>> > if (duty_cycle == period_cycles)
>>> > duty_cycles = period_cycles - 1;
>>> >
>>> > /*
>>> > * The hardware cannot emit a duty_cycle of one clk step, so
>>> > * emit 0 instead.
>>> > */
>>> > if (duty_cycles < 2)
>>> > duty_cycles = period_cycles;
>>>
>>> Of course, the above may result in 100% duty cycle being rounded down to
>>> 0%. I feel like that is too big of a jump to ignore. Perhaps if we
>>> cannot return -ERANGE we should at least dev_warn.
>>
>> You did it again. You picked one single case that you consider bad but
>> didn't provide a constructive way to make it better.
>
> Sure I did. I suggested that we warn. Something like
>
> if (duty_cycles == period_cycles)
> if (--duty_cycles < 2)
> dev_warn(chip->dev, "Rounding 100%% duty cycle down to 0%%; pick a longer period\n");
>
> or
>
> if (period_cycles < 2)
> return -ERANGE;
> else if (period_cycles < 10)
> dev_notice(chip->dev,
> "very short period of %u cycles; duty cycle may be rounded to 0%%\n",
> period_cycles);
>
> Because 90% of the time, if a user requests such a short period it is
> due to a typo or something similar. And if they really are doing it
> intentionally, then they should just set duty_cycle=0.
>
>> Assume there was already a pwm_round_state function (that returns the
>> state that pwm_apply_state would implement for a given request) Consider
>> a consumer that wants say a 50% relative duty together with a small
>> period. So it first might call:
>>
>> ret = pwm_round_rate(pwm, { .period = 20, .duty_cycle = 20, ... }, &rounded_state)
>>
>> to find out if .period = 20 can be implemented with the given PWM. If
>> this returns rounded state as:
>>
>> .period = 20
>> .duty_cycle = 0
>>
>> this says quite a lot about the pwm if the driver implements my policy.
>> (i.e.: The driver can do 20ns, but the biggest duty_cycle is only 0).
>> If however it returns -ERANGE this means (assuming the driver implements
>> the policy I try to convice you to be the right one) it means: The
>> hardware cannot implement 20 ns (or something smaller) and so the next
>> call probably tries 40 ns.
>>
>> With your suggested semantic -ERANGE might mean:
>>
>> - The driver doesn't support .period = 20 ns
>> (Follow up questions: What period should be tried next? 10 ns? 40
>> ns? What if this returns -ERANGE again?)
>> - The driver supports .period = 20 ns, but the biggest possible
>> duty_cycle is "too different from 20 ns to ignore".
>>
>> Then how should the search continue?
>
> round_rate does not have to use the same logic as apply_state. That is,
>
> ret = pwm_round_rate(pwm, { .period = 20, .duty_cycle = 20, ... }, &rounded_state)
>
> could be rounded to
>
> { .period = 20, .duty_cycle = 0 }
>
> but calling
>
> ret = pwm_apply_state(pwm, { .period = 20, .duty_cycle = 20, ... })
>
> could return -ERANGE. However, calling
>
> ret = pwm_apply_state(pwm, { .period = 20, .duty_cycle = 0, ... })
>
> should work just fine, as the caller clearly knows what they are getting
> into. IMO this is the best way to allow hypothetical round_rate users to
> find out the edges of the PWM while still protecting existing users.
>
> It's perfectly fine to round
>
> { .period = 150, .duty_cycle = 75 }
>
> to
>
> { .period = 100, .duty_cycle = 75 }
>
> in round_rate. But doing the same thing for apply_state would be very
> surprising to every existing PWM user.
>
> IMO the following invariant should hold
>
> apply_state(round_rate(x))
> assert(x == get_state())
this should be
apply_state(round_rate(x))
assert(round_rate(x) == get_state())
>
> but the following should not necessarily hold
>
> apply_state(x)
> assert(round_rate(x) == get_state())
>
> Of course, where it is reasonable to round down, we should do so. But
> where the result may be surprising, then the caller should specify the
> rounded state specifically. It is better to fail loudly and noisily than
> to silently accept garbage.
>
> --Sean
>
>>
>> So: As soon as there is a pwm_round_rate function this can be catched
>> and then it's important that the drivers adhere to the expected policy.
>> Implementing this is a big thing and believe me I already spend quite
>> some brain cycles into it. Once the core is extended accordingly I will
>> be happy about each driver already doing the right thing.
>>
>> Best regards
>> Uwe
>>
Powered by blists - more mailing lists