Message-ID: <686434fb.050a0220.efc3e.909b@mx.google.com>
Date: Tue, 1 Jul 2025 21:20:24 +0200
From: Christian Marangi <ansuelsmth@...il.com>
To: Uwe Kleine-König <ukleinek@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-pwm@...r.kernel.org,
	Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	Benjamin Larsson <benjamin.larsson@...exis.eu>,
	AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>,
	Lorenzo Bianconi <lorenzo@...nel.org>
Subject: Re: [PATCH v19] pwm: airoha: Add support for EN7581 SoC

On Tue, Jul 01, 2025 at 09:40:03AM +0200, Uwe Kleine-König wrote:
> Hello Christian,
> 
> On Mon, Jun 30, 2025 at 01:44:39PM +0200, Christian Marangi wrote:
> > +struct airoha_pwm_bucket {
> > +	int used;
> > +	u32 period_ticks;
> > +	u32 duty_ticks;
> > +};
> > +
> > +struct airoha_pwm {
> > +	struct regmap *regmap;
> > +	/* Global mutex to protect bucket used counter */
> > +	struct mutex mutex;
> 
> I think you don't need that mutex. There is a chip lock already used by
> the core and that is held during .get_state() and .apply() serializing
> these calls (and more).
> 
> > +	DECLARE_BITMAP(initialized, AIROHA_PWM_MAX_CHANNELS);
> > +
> > +	struct airoha_pwm_bucket buckets[AIROHA_PWM_NUM_BUCKETS];
> > +
> > +	/* Cache bucket used by each pwm channel */
> > +	u8 channel_bucket[AIROHA_PWM_MAX_CHANNELS];
> > +};
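
Just as an illustration (assuming the core's serialization really is sufficient, as described above, and not as a definitive change), dropping the private mutex would reduce the struct to:

	struct airoha_pwm {
		struct regmap *regmap;

		DECLARE_BITMAP(initialized, AIROHA_PWM_MAX_CHANNELS);

		struct airoha_pwm_bucket buckets[AIROHA_PWM_NUM_BUCKETS];

		/* Cache bucket used by each pwm channel */
		u8 channel_bucket[AIROHA_PWM_MAX_CHANNELS];
	};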
> > [...]
> > +static int airoha_pwm_apply_bucket_config(struct airoha_pwm *pc, int bucket,
> > +					  u32 duty_ticks, u32 period_ticks)
> > +{
> > +	u32 mask, shift, val;
> > +	u64 offset;
> > +	int ret;
> > +
> > +	offset = bucket;
> > +	shift = do_div(offset, AIROHA_PWM_BUCKET_PER_CYCLE_CFG);
> 
> Do you really need offset as a 64 bit variable? At least on 32 bit archs
> 
> 	offset = bucket / AIROHA_PWM_BUCKET_PER_CYCLE_CFG;
> 	shift = AIROHA_PWM_REG_CYCLE_CFG_SHIFT(bucket % AIROHA_PWM_BUCKET_PER_CYCLE_CFG);
> 
> should be cheaper. Also can bucket better be an unsigned value (to make
> it obvious that no strange things happen with the division)?
>

No, for this a u32 is OK.
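
To make the comparison above concrete, here is a small standalone sketch (userspace, not driver code; the divisor is a placeholder for AIROHA_PWM_BUCKET_PER_CYCLE_CFG) showing that for an unsigned 32-bit bucket index, plain '/' and '%' yield the same quotient/remainder pair that do_div() produces on a u64:

	#include <assert.h>
	#include <stdint.h>

	#define BUCKET_PER_CYCLE_CFG 2	/* placeholder, not the real constant */

	int main(void)
	{
		for (uint32_t bucket = 0; bucket < 8; bucket++) {
			/* suggested form: 32-bit division and modulo */
			uint32_t offset = bucket / BUCKET_PER_CYCLE_CFG;
			uint32_t shift  = bucket % BUCKET_PER_CYCLE_CFG;

			/* what do_div() does: quotient in place, remainder returned */
			uint64_t q = bucket;
			uint32_t r = q % BUCKET_PER_CYCLE_CFG;
			q /= BUCKET_PER_CYCLE_CFG;

			assert(offset == q && shift == r);
		}

		return 0;
	}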

> > +	shift = AIROHA_PWM_REG_CYCLE_CFG_SHIFT(shift);
> > +
> > +	/* Configure frequency divisor */
> > +	mask = AIROHA_PWM_WAVE_GEN_CYCLE << shift;
> > +	val = FIELD_PREP(AIROHA_PWM_WAVE_GEN_CYCLE, period_ticks) << shift;
> > +	ret = regmap_update_bits(pc->regmap, AIROHA_PWM_REG_CYCLE_CFG_VALUE(offset),
> > +				 mask, val);
> > +	if (ret)
> > +		return ret;
> > +
> > +	offset = bucket;
> > +	shift = do_div(offset, AIROHA_PWM_BUCKET_PER_FLASH_PROD);
> > +	shift = AIROHA_PWM_REG_GPIO_FLASH_PRD_SHIFT(shift);
> > +
> > +	/* Configure duty cycle */
> > +	mask = AIROHA_PWM_GPIO_FLASH_PRD_HIGH << shift;
> > +	val = FIELD_PREP(AIROHA_PWM_GPIO_FLASH_PRD_HIGH, duty_ticks) << shift;
> > +	ret = regmap_update_bits(pc->regmap, AIROHA_PWM_REG_GPIO_FLASH_PRD_SET(offset),
> > +				 mask, val);
> > +	if (ret)
> > +		return ret;
> > +
> > +	mask = AIROHA_PWM_GPIO_FLASH_PRD_LOW << shift;
> > +	val = FIELD_PREP(AIROHA_PWM_GPIO_FLASH_PRD_LOW,
> > +			 AIROHA_PWM_DUTY_FULL - duty_ticks) << shift;
> > +	return regmap_update_bits(pc->regmap, AIROHA_PWM_REG_GPIO_FLASH_PRD_SET(offset),
> > +				  mask, val);
> 
> Strange hardware, why do you have to configure both the high and the low
> relative duty? What happens if AIROHA_PWM_GPIO_FLASH_PRD_LOW +
> AIROHA_PWM_GPIO_FLASH_PRD_HIGH != AIROHA_PWM_DUTY_FULL?
> 

From the documentation, it gets rejected and the configured bucket doesn't work.

> > +}
> > +
> > [...]
> > +static int airoha_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
> > +			    const struct pwm_state *state)
> > +{
> > +	struct airoha_pwm *pc = pwmchip_get_drvdata(chip);
> > +	u32 period_ticks, duty_ticks;
> > +	u32 period_ns, duty_ns;
> > +
> > +	if (!state->enabled) {
> > +		airoha_pwm_disable(pc, pwm);
> > +		return 0;
> > +	}
> > +
> > +	/* Only normal polarity is supported */
> > +	if (state->polarity == PWM_POLARITY_INVERSED)
> > +		return -EINVAL;
> > +
> > +	/* Exit early if period is less than minimum supported */
> > +	if (state->period < AIROHA_PWM_PERIOD_TICK_NS)
> > +		return -EINVAL;
> > +
> > +	/* Clamp period to MAX supported value */
> > +	if (state->period > AIROHA_PWM_PERIOD_MAX_NS)
> > +		period_ns = AIROHA_PWM_PERIOD_MAX_NS;
> > +	else
> > +		period_ns = state->period;
> > +
> > +	/* Validate duty to configured period */
> > +	if (state->duty_cycle > period_ns)
> > +		duty_ns = period_ns;
> > +	else
> > +		duty_ns = state->duty_cycle;
> > +
> > +	/*
> > +	 * Period goes at 4ns step, normalize it to check if we can
> > +	 * share a generator.
> > +	 */
> > +	period_ns = rounddown(period_ns, AIROHA_PWM_PERIOD_TICK_NS);
> > +
> > +	/*
> > +	 * Duty is divided in 255 segment, normalize it to check if we
> > +	 * can share a generator.
> > +	 */
> > +	duty_ns = DIV_U64_ROUND_UP(duty_ns * AIROHA_PWM_DUTY_FULL,
> > +				   AIROHA_PWM_DUTY_FULL);
> 
> This looks bogus. This is just duty_ns = duty_ns, or what do I miss?
> Also duty_ns is an u32 and AIROHA_PWM_DUTY_FULL an int, so there is no
> need for a 64 bit division.
> 

duty_ns * 255 goes beyond the maximum u32 value:

225000000000.

Some revisions ago it was asked to also round duty_ns, and this is
really meant to round duty up into 255 segments.

Consider that this has the ROUND_UP part, which is different from
simply doing "duty * 255 / 255".

Hope we can solve these 2 problems since I think these are the last
changes.
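
To illustrate the overflow point, a standalone userspace sketch (not driver code; 882352941 ns is just an arbitrary large duty value whose product with 255 lands near the ~225000000000 figure above):

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t duty_ns = 882352941;		  /* arbitrary large duty, ~0.88 s */
		uint32_t wrapped = duty_ns * 255;	  /* 32-bit product wraps */
		uint64_t full = (uint64_t)duty_ns * 255;  /* 64-bit product does not */

		printf("32-bit product: %" PRIu32 "\n", wrapped);
		printf("64-bit product: %" PRIu64 "\n", full);

		return 0;
	}

So the intermediate duty_ns * AIROHA_PWM_DUTY_FULL product needs 64-bit
math even when the final result fits in a u32.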

> > +	/* Convert ns to ticks */
> > +	period_ticks = airoha_pwm_get_period_ticks_from_ns(period_ns);
> > +	duty_ticks = airoha_pwm_get_duty_ticks_from_ns(period_ns, duty_ns);
> > +
> > +	return airoha_pwm_config(pc, pwm, period_ticks, duty_ticks);
> > +}
> 
> Best regards
> Uwe



-- 
	Ansuel
