Date:   Sat, 9 Jul 2022 16:56:16 +0000
From:   <Conor.Dooley@...rochip.com>
To:     <u.kleine-koenig@...gutronix.de>, <Conor.Dooley@...rochip.com>
CC:     <thierry.reding@...il.com>, <lee.jones@...aro.org>,
        <robh+dt@...nel.org>, <krzysztof.kozlowski+dt@...aro.org>,
        <Daire.McNamara@...rochip.com>, <devicetree@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <linux-pwm@...r.kernel.org>,
        <linux-riscv@...ts.infradead.org>
Subject: Re: [PATCH v5 3/4] pwm: add microchip soft ip corePWM driver



On 09/07/2022 17:37, Uwe Kleine-König wrote:
> Hello Conor,
> 
> On Sat, Jul 09, 2022 at 04:21:46PM +0000, Conor.Dooley@...rochip.com wrote:
>> On 09/07/2022 17:02, Uwe Kleine-König wrote:
>>> On Fri, Jul 08, 2022 at 03:39:22PM +0100, Conor Dooley wrote:
>>>> Add a driver that supports the Microchip FPGA "soft" PWM IP core.
>>>>
>>>> Signed-off-by: Conor Dooley <conor.dooley@...rochip.com>
>> ---8<---
>>>> +	mchp_core_pwm_apply_duty(chip, pwm, state, prescale, period_steps);
>>>> +
>>>> +	/*
>>>> +	 * Notify the block to update the waveform from the shadow registers.
>>>> +	 * The updated values will not appear on the bus until they have been
>>>> +	 * applied to the waveform at the beginning of the next period. We must
>>>> +	 * write these registers and wait for them to be applied before calling
>>>> +	 * enable().
>>>> +	 */
>>>> +	if (mchp_core_pwm->sync_update_mask & (1 << pwm->hwpwm)) {
>>>> +		writel_relaxed(1U, mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD);
>>>> +		usleep_range(state->period, state->period * 2);
>>>> +	}
>>>> +
>>>> +	spin_unlock(&mchp_core_pwm->lock);
>>>> +
>>>> +	mchp_core_pwm_enable(chip, pwm, true);
>>>
>>> I already asked in the last round: Do you really need to write the
>>> SYNC_UPD register twice? I would expect that you don't?!
>>
>> Sorry, I thought that I had replied to this on Friday - I didn't
>> mean to ignore you.
>>
>> Yes, I do need to keep that - otherwise there are problems when
>> turning on the PWM channel for the first time.
> 
> How unintuitive and unfortunate. I wonder if there is an upside of this
> approach that I'm missing.

Simplicity of the synthesised design, maybe?
Given that one of the things the manual talks about is the LUT savings
from turning off shadowing, it would not surprise me.

> 
>> Before writing to the enable registers, we need to make sure that
>> the values have been applied since both pos and neg edge registers
>> come out of reset set to 0x0.
> 
> I always like to understand the hardware; can you explain the problems
> in more detail?

The TL;DR is that if I don't, the channel gets enabled with the 50% duty
cycle problem. From glancing at the RTL the other day, it looks like it
comes down to how the if statement that decides the output level is
ordered.

Depending on how bored I get tonight/tomorrow, I'll give you a better
answer then or during the week.
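
For reference, the sequencing being discussed boils down to roughly the
following (register and function names are taken from the patch above;
this is only an illustration of the ordering, not the actual driver code):

	/*
	 * Push the shadow registers to the waveform generator before the
	 * channel is enabled: POSEDGE/NEGEDGE come out of reset as 0x0,
	 * so enabling early outputs the wrong waveform.
	 */
	mchp_core_pwm_apply_duty(chip, pwm, state, prescale, period_steps);

	/* Request the synchronous update from the shadow registers ... */
	writel_relaxed(1U, mchp_core_pwm->base + MCHPCOREPWM_SYNC_UPD);

	/* ... wait at least one period for it to land on the waveform ... */
	usleep_range(state->period, state->period * 2);

	/* ... and only then let the channel out onto the pin. */
	mchp_core_pwm_enable(chip, pwm, true);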

> 
>>> Also the locking looks fishy here. It would be simpler (and maybe even
>>> more robust, didn't think deeply about it) to assume in
>>> mchp_core_pwm_enable() that the caller holds the lock. Then you only
>>> grab the lock once during .apply() and nothing strange can happen in the
>>> gap.
>>>
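
To make sure I follow, a rough sketch of that rearrangement might look
like the below (names are taken from the patch, the elided parts are
placeholders; this is only an illustration of the locking, not a
proposed implementation):

static void mchp_core_pwm_enable_locked(struct pwm_chip *chip,
					struct pwm_device *pwm, bool enable)
{
	struct mchp_core_pwm_chip *mchp_core_pwm = to_mchp_core_pwm(chip);
	u8 channel_enable, shift = pwm->hwpwm % 8;

	lockdep_assert_held(&mchp_core_pwm->lock);

	channel_enable = readb_relaxed(mchp_core_pwm->base +
				       MCHPCOREPWM_EN(pwm->hwpwm / 8));
	if (enable)
		channel_enable |= 1 << shift;
	else
		channel_enable &= ~(1 << shift);
	writeb_relaxed(channel_enable, mchp_core_pwm->base +
		       MCHPCOREPWM_EN(pwm->hwpwm / 8));
}

static int mchp_core_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
			       const struct pwm_state *state)
{
	struct mchp_core_pwm_chip *mchp_core_pwm = to_mchp_core_pwm(chip);

	spin_lock(&mchp_core_pwm->lock);

	/* ... program prescale/period/duty and trigger the shadow
	 * register update here ...
	 */

	mchp_core_pwm_enable_locked(chip, pwm, state->enabled);

	spin_unlock(&mchp_core_pwm->lock);

	return 0;
}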
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static void mchp_core_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
>>>> +				    struct pwm_state *state)
>>>> +{
>>>> +	struct mchp_core_pwm_chip *mchp_core_pwm = to_mchp_core_pwm(chip);
>>>> +	u8 prescale, period_steps, duty_steps;
>>>> +	u8 posedge, negedge;
>>>> +	u16 channel_enabled;
>>>> +
>>>
>>> I'd take the lock here to be sure to get a consistent return value.
>>>
>>>> +	channel_enabled = (((u16)readb_relaxed(mchp_core_pwm->base + MCHPCOREPWM_EN(1)) << 8) |
>>>> +		readb_relaxed(mchp_core_pwm->base + MCHPCOREPWM_EN(0)));
>>>
>>> micro optimisation: You're reading two register values here, but only
>>> using one. Shadowing the enable registers in mchp_core_pwm might also
>>> be an idea.
>>>
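
For what it's worth, the single read could look something like this (the
split of eight channels per EN register is inferred from the code above,
so treat it as an assumption):

	u8 channel_enable = readb_relaxed(mchp_core_pwm->base +
					  MCHPCOREPWM_EN(pwm->hwpwm / 8));

	state->enabled = channel_enable & (1 << (pwm->hwpwm % 8));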
>>>> +	if (channel_enabled & 1 << pwm->hwpwm)
>>>
>>> I'm always unsure about the associativity of & and <<, so I would have
>>> written that as
>>>
>>> 	if (channel_enabled & (1 << pwm->hwpwm))
>>>
>>> I just tested that for the umpteenth time and your code is fine, so this
>>> is only for human readers like me.
>>>
>>>> +		state->enabled = true;
>>>> +	else
>>>> +		state->enabled = false;
>>>> +
>>>> +	prescale = PREG_TO_VAL(readb_relaxed(mchp_core_pwm->base + MCHPCOREPWM_PRESCALE));
>>>> +
>>>> +	posedge = readb_relaxed(mchp_core_pwm->base + MCHPCOREPWM_POSEDGE(pwm->hwpwm));
>>>> +	negedge = readb_relaxed(mchp_core_pwm->base + MCHPCOREPWM_NEGEDGE(pwm->hwpwm));
>>>> +
>>>> +	duty_steps = abs((s16)posedge - (s16)negedge);
>>>
>>> If duty_steps == 0 the returned result is wrong. I suggest fixing that,
>>> or at least mentioning the problem in a comment.
>>
>> Ah right yeah, I didn't update this after changing the other logic. Sorry.
>>
>>>
>>>> +	state->duty_cycle = duty_steps * prescale * NSEC_PER_SEC;
>>>
>>> Can this overflow?
>>>
>>>> +	do_div(state->duty_cycle, clk_get_rate(mchp_core_pwm->clk));
>>>
>>> What is the typical return value of clk_get_rate(mchp_core_pwm->clk)?
>>
>> It's going to be less than 600 MHz.
> 
> An exact value would be interesting; then, when I spot a rounding problem,
> I could give you a test case to double-check.

Yeah, I am not sure off the top of my head what the maximum clock rates
allowed in the FPGA fabric are. 600 MHz seemed a sane limit because that's
what the core complex runs at; I mentioned it mainly to say that the rate
won't be greater than NSEC_PER_SEC.
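
To put rough numbers on the overflow question above: assuming duty_steps
stays within 8 bits and the prescaler value (register value plus one) is
at most 256, the numerator is bounded by

	255 * 256 * NSEC_PER_SEC ~= 6.5e13

which fits comfortably in the 64 bits of state->duty_cycle, provided the
multiplication itself is carried out in 64-bit arithmetic.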

>>> You need to round up here. Did you test your driver with PWM_DEBUG on?
>>> During testing, please do the following for a few fixed periods:
>>>
>>> 	for duty_cycle in [0 .. period]:
>>> 		pwm_apply(mypwm, {.period = period, .duty_cycle = duty_cycle, .enabled = true, ...})
>>>
>>> 	for duty_cycle in [period .. 0]:
>>> 		pwm_apply(mypwm, {.period = period, .duty_cycle = duty_cycle, .enabled = true, ...})
>>>
>>> and check there is no output claiming a miscalculation.
>>
>> I ran the stuff you gave me last time, doing something similar w/ a
>> shell loop. Got no reported miscalculations.
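
For reference, the sweep can be driven from user space along these lines
(the sysfs paths, the pwmchip0/pwm0 numbering and the test period are
assumptions for whatever board is under test, and the channel is assumed
to be exported already):

#include <stdio.h>

/* Write a single value to a pwm sysfs attribute. */
static void pwm_write(const char *attr, unsigned long val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/pwm/pwmchip0/pwm0/%s", attr);
	f = fopen(path, "w");
	if (!f)
		return;
	fprintf(f, "%lu\n", val);
	fclose(f);
}

int main(void)
{
	const unsigned long period = 100000;	/* ns, arbitrary test value */
	unsigned long duty;

	pwm_write("period", period);
	pwm_write("enable", 1);

	/* Sweep up and back down; with PWM_DEBUG enabled, any reported
	 * miscalculation lands in dmesg.
	 */
	for (duty = 0; duty <= period; duty += 100)
		pwm_write("duty_cycle", duty);
	for (duty = period; duty > 0; duty -= 100)
		pwm_write("duty_cycle", duty);
	pwm_write("duty_cycle", 0);

	return 0;
}
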
> 
> I'm surprised; I would have expected my test recipe to find such an
> issue. Could you follow my argument about the rounding direction?
> There's always the possibility that I'm wrong, too.

I'll take another look & get back to you.
Thanks for the review.
Conor.
