Date:   Wed, 26 Jan 2022 20:14:21 +0500
From:   Nikita Travkin <nikita@...n.ru>
To:     Stephen Boyd <sboyd@...nel.org>
Cc:     linus.walleij@...aro.org, mturquette@...libre.com,
        bjorn.andersson@...aro.org, agross@...nel.org, tdas@...eaurora.org,
        svarbanov@...sol.com, linux-arm-msm@...r.kernel.org,
        linux-clk@...r.kernel.org, linux-gpio@...r.kernel.org,
        linux-kernel@...r.kernel.org, ~postmarketos/upstreaming@...ts.sr.ht
Subject: Re: [PATCH 1/4] clk: qcom: clk-rcg2: Fail Duty-Cycle configuration if
 MND divider is not enabled.

Stephen Boyd wrote on 11.01.2022 01:14:
> Quoting Nikita Travkin (2022-01-07 23:25:19)
>> Hi,
>>
>> Stephen Boyd wrote on 08.01.2022 05:52:
>> > Quoting Nikita Travkin (2021-12-09 08:37:17)
>> I'm adding this error here primarily to bring it to the attention
>> of the user (e.g. a developer enabling some peripheral that needs
>> duty-cycle control) who might have to change their clock tree to
>> make this control effective. So, assuming that someone who sets
>> the duty cycle to 50% might set it to some other value later, it
>> makes sense to fail the first call anyway.
>>
>> If you think there are other situations where this call could
>> happen specifically with a 50% duty cycle (e.g. some preparations
>> or cleanups in the clk subsystem, or some drivers that I'm not
>> aware of) then I can add an exception to the check for that.
>>
> 
> I don't see anywhere in clk_set_duty_cycle() where it would bail out
> early if the duty cycle was set to what it already is. The default for
> these clks is 50%, so I worry that some driver may try to set the duty
> cycle to 50% and then fail now. Either we need to check the duty cycle
> in the core before calling down into the driver or we need to check it
> here in the driver. Can you send a patch to check the current duty cycle
> in the core before calling down into the clk ops?

Hi, sorry for the rather delayed response.
I spent some time looking at how to make the clk core skip
ineffective duty-cycle calls, but I can't find a nice way to do
this. My idea was something like the following:

static int clk_core_set_duty_cycle_nolock(struct clk_core *core,
					  struct clk_duty *duty)
{
	/* ... */

	/* Update core->duty from the current hardware state */
	clk_core_update_duty_cycle_nolock(core);

	if ( /* duty doesn't match core->duty */ ) {
		ret = core->ops->set_duty_cycle(core->hw, duty);
		/* ... */
	}

	/* ... */
}

However, every variant of the comparison that I could come up with
seems to have drawbacks:

The naive one would be
    if (duty->num != core->duty.num || duty->den != core->duty.den)
but it won't correctly compare e.g. 1/2 and 10/20.

Another idea was to do
    if (duty->den / duty->num != core->duty.den / core->duty.num)
but integer division truncates, so it will likely mishandle very
close values (e.g. 100/500 and 101/500).

I briefly considered some more sophisticated math, but I don't like
the idea of complicating this too much.
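
For what it's worth, an exact comparison is possible without division
by cross-multiplying in a wider type. A minimal sketch of the idea
(the helper name is made up, not an existing clk core function):

static bool clk_duty_matches(const struct clk_duty *a,
			     const struct clk_duty *b)
{
	/*
	 * Compare a->num/a->den and b->num/b->den exactly:
	 * cross-multiply in 64 bits so the 32-bit num/den values
	 * can neither truncate nor overflow.
	 */
	return (u64)a->num * b->den == (u64)b->num * a->den;
}

This treats 1/2 and 10/20 as equal and 100/500 and 101/500 as
different, but it is arguably more math than this path deserves.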

I briefly grepped the kernel sources for duty-cycle related methods
and found only one user of clk_set_duty_cycle:
    sound/soc/meson/axg-tdm-interface.c
Notably, it sets the duty cycle to 1/2 in some cases, though it seems
to be tied to the drivers/clk/meson/sclk-div.c clock driver, since
both are blocks of the same SoC.
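
For reference, the consumer-side API takes a numerator/denominator
pair, so the 50% request in that driver boils down to a call like
this (based on the clk_set_duty_cycle() prototype in
include/linux/clk.h; the variable names here are illustrative):

	/* request a 50% duty cycle (num = 1, den = 2) on the mclk */
	ret = clk_set_duty_cycle(mclk, 1, 2);
	if (ret)
		return ret;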

Thinking about it a bit more, I see another approach to the problem
I want to solve: since I just want to make developers aware of the
hardware quirk, maybe I don't need to fail the call but can just put
a WARN or even WARN_ONCE there? That way the existing behavior would
be unchanged.
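
To illustrate, the check in the rcg2 set_duty_cycle path could then
look roughly like this (a sketch only; "mnd_enabled" stands in for
whatever the patch actually derives from the CFG register to detect
MND mode):

	/*
	 * Duty-cycle control has no effect while the MND divider is
	 * bypassed; warn once instead of failing so that existing
	 * callers keep working unchanged.
	 */
	if (!mnd_enabled) {
		WARN_ONCE(1, "%s: duty cycle not adjustable while MND divider is bypassed\n",
			  clk_hw_get_name(hw));
		return 0;
	}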

Thanks,
Nikita
