Message-ID: <6b48ad6a-2b45-455f-1ba8-3d90deb516f4@oss.nxp.com>
Date: Wed, 26 Mar 2025 15:31:10 +0200
From: Ghennadi Procopciuc <ghennadi.procopciuc@....nxp.com>
To: Daniel Lezcano <daniel.lezcano@...aro.org>, tglx@...utronix.de
Cc: linux-kernel@...r.kernel.org, Thomas Fossati <thomas.fossati@...aro.org>,
 Larisa Grigore <Larisa.Grigore@....com>,
 Ghennadi Procopciuc <ghennadi.procopciuc@....com>,
 Maxime Coquelin <mcoquelin.stm32@...il.com>,
 Alexandre Torgue <alexandre.torgue@...s.st.com>,
 "moderated list:ARM/STM32 ARCHITECTURE"
 <linux-stm32@...md-mailman.stormreply.com>,
 "moderated list:ARM/STM32 ARCHITECTURE"
 <linux-arm-kernel@...ts.infradead.org>, NXP S32 Linux Team <s32@....com>
Subject: Re: [PATCH 2/2] clocksource/drivers/nxp-timer: Add the System Timer
 Module for the s32g platform

On 3/26/2025 12:31 PM, Daniel Lezcano wrote:
> On 26/03/2025 10:57, Ghennadi Procopciuc wrote:
>> On 3/26/2025 11:19 AM, Daniel Lezcano wrote:
>>> On 26/03/2025 08:44, Ghennadi Procopciuc wrote:
>>>> On 3/25/2025 3:54 PM, Daniel Lezcano wrote:
>>>>> On 25/03/2025 14:21, Ghennadi Procopciuc wrote:
>>>>>> On 3/25/2025 2:51 PM, Daniel Lezcano wrote:
>>>>>> [ ... ]
>>>>>>>>>>>>> +static int __init nxp_stm_timer_probe(struct platform_device *pdev)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +    struct device *dev = &pdev->dev;
>>>>>>>>>>>>> +    struct device_node *np = dev->of_node;
>>>>>>>>>>>>> +    struct stm_instances *stm_instances;
>>>>>>>>>>>>> +    const char *name = of_node_full_name(np);
>>>>>>>>>>>>> +    void __iomem *base;
>>>>>>>>>>>>> +    int irq, ret;
>>>>>>>>>>>>> +    struct clk *clk;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +    stm_instances = (typeof(stm_instances))of_device_get_match_data(dev);
>>>>>>>>>>>>> +    if (!stm_instances) {
>>>>>>>>>>>>> +        dev_err(dev, "No STM instances associated with a cpu");
>>>>>>>>>>>>> +        return -EINVAL;
>>>>>>>>>>>>> +    }
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +    base = devm_of_iomap(dev, np, 0, NULL);
>>>>>>>>>>>>> +    if (IS_ERR(base)) {
>>>>>>>>>>>>> +        dev_err(dev, "Failed to iomap %pOFn\n", np);
>>>>>>>>>>>>> +        return PTR_ERR(base);
>>>>>>>>>>>>> +    }
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +    irq = irq_of_parse_and_map(np, 0);
>>>>>>>>>>>>> +    if (irq <= 0) {
>>>>>>>>>>>>> +        dev_err(dev, "Failed to parse and map IRQ: %d\n", irq);
>>>>>>>>>>>>> +        return -EINVAL;
>>>>>>>>>>>>> +    }
>>>>>>>>>>>>
>>>>>>>>>>>> From the commit description:
>>>>>>>>>>>>
>>>>>>>>>>>>> The first probed STM is used as a clocksource, the second will be the
>>>>>>>>>>>>> broadcast timer and the rest are used as a clockevent with the affinity
>>>>>>>>>>>>> set to a CPU.
>>>>>>>>>>>>
>>>>>>>>>>>> Why is the interrupt mandatory when the node is probed as a
>>>>>>>>>>>> clocksource?
>>>>>>>>>>>
>>>>>>>>>>> It relies on the DT description and it does not hurt to have a
>>>>>>>>>>> consistent code path for clockevent / clocksource even if the
>>>>>>>>>>> interrupt
>>>>>>>>>>> is not requested for the clocksource later.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If so, in my opinion, it would make sense to use the same STM instance
>>>>>>>>>> for both the clocksource and broadcast clockevent, as both functions can
>>>>>>>>>> be accommodated within a single STM instance, which will help reduce the
>>>>>>>>>> number of STM instances used.
>>>>>>>>>
>>>>>>>>> The broadcast timer is stopped when it is unused, which is the case for
>>>>>>>>> the s32 because there are no idle states powering down the local timers.
>>>>>>>>> They have to be separated.
>>>>>>>>
>>>>>>>> This wouldn't be the case if the STM is kept running/counting during the
>>>>>>>> clock event setup, with only the clock event interrupt being disabled
>>>>>>>> (CCR.CEN).
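
To clarify what I meant here, below is a rough sketch of the clockevent
callbacks. Register offsets and bit names are from my reading of the STM
chapter in the RM (assumptions, not taken from the patch), so please
double-check: only CCR[CEN] is gated, CR[TEN] stays set so CNT keeps
counting for the clocksource sharing the instance.

#include <linux/bits.h>
#include <linux/clockchips.h>
#include <linux/container_of.h>
#include <linux/io.h>

/* Offsets/bits as I read them from the RM, to be verified */
#define STM_CR          0x00
#define STM_CR_TEN      BIT(0)  /* global counter enable */
#define STM_CNT         0x04    /* free-running up counter */
#define STM_CCR(ch)     (0x10 + (ch) * 0x10)
#define STM_CCR_CEN     BIT(0)  /* channel enable */
#define STM_CIR(ch)     (0x14 + (ch) * 0x10)
#define STM_CIR_CIF     BIT(0)  /* channel interrupt flag, write-1-to-clear */
#define STM_CMP(ch)     (0x18 + (ch) * 0x10)

/* Hypothetical per-instance state, not the names used in the patch */
struct stm_clkevt {
        struct clock_event_device ced;
        void __iomem *base;
        int channel;
};

static int stm_clkevt_shutdown(struct clock_event_device *ced)
{
        struct stm_clkevt *stm = container_of(ced, struct stm_clkevt, ced);

        /* Gate only the channel; CR[TEN] is left set so CNT keeps counting */
        writel(0, stm->base + STM_CCR(stm->channel));

        return 0;
}

static int stm_clkevt_set_next_event(unsigned long delta,
                                     struct clock_event_device *ced)
{
        struct stm_clkevt *stm = container_of(ced, struct stm_clkevt, ced);
        u32 cnt = readl(stm->base + STM_CNT);

        /* Disable the channel while updating the compare value */
        writel(0, stm->base + STM_CCR(stm->channel));
        writel(cnt + delta, stm->base + STM_CMP(stm->channel));
        writel(STM_CCR_CEN, stm->base + STM_CCR(stm->channel));

        return 0;
}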
>>>>>>>
>>>>>>> Are you asking to use two different channels for the same STM instance,
>>>>>>> one for the clocksource and one for the clockevent?
>>>>>>>
>>>>>>
>>>>>> I suggested using the CNT register to obtain the count for the clock
>>>>>> source, while using one of the STM channels for the clock event.
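
The clocksource side could then be as small as the sketch below. It assumes
CNT is a 32-bit up counter at offset 0x04 and that CLKSRC_MMIO is available;
base, name and rate would come from the probe path, and the 300 rating is a
placeholder.

#include <linux/clocksource.h>
#include <linux/io.h>

static int __init stm_clocksource_init(void __iomem *base, const char *name,
                                       unsigned long rate)
{
        /* Expose the free-running CNT register directly as a clocksource */
        return clocksource_mmio_init(base + 0x04, name, rate, 300, 32,
                                     clocksource_mmio_readl_up);
}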
>>>>>
>>>>> Ah, ok.
>>>>>
>>>>> I think it is preferable to keep them separated to keep the code
>>>>> modular. Given the number of STMs on the platform, it does not hurt.
>>>>>
>>>>
>>>> The S32G2 and S32G3 are SoCs featuring a diverse set of cores. Linux is
>>>> expected to run on Cortex-A53 cores, while other software stacks will
>>>> operate on Cortex-M cores. The number of STM instances has been sized to
>>>> include at most one instance per core. Allocating six instances (1 clock
>>>> source, 1 broadcast clock event, and 4 clock events for all A53 cores)
>>>> to Linux on the S32G2 leaves the M7 software stacks without adequate STM
>>>> coverage.
>>>
>>> Given this description I'm wondering why one STM has to be used as a
>>> clocksource if the STM_07 is already a TS one. And also, we can get rid
>>> of the broadcast timer as there is no idle state forcing a switch to an
>>> always-on timer.
>>
>> Indeed, STM_07 can be used as a system clock source, but Linux should
>> not assume ownership of it.
>>
>>>
>>> IIUC, on the S32g2 there are 4 x Cortex-A53 and 3 x Cortex-M3, which
>>> means we need 7 timers.
>>>
>>> The system has 7 timers + a special one for timestamp.
>>>
>>> So if I follow the reasoning, the broadcast timer should not be used,
>>> otherwise one Cortex-M3 will end up without a timer.
>>>
>>
>> On the S32G2, there are eight STM timers and STM_TS. Therefore, there
>> should be enough instances to accommodate 4xA53 and 3xM7 cores.
>>
>>> That leads us to one clocksource for the first per-CPU timer initialized,
>>> or alternatively one STM instance == 1 clocksource and 1 clockevent,
>>> which makes things simpler.
>>>
>> I'm not sure I understood the entire description. As I see it, two STM
>> instances should be used to accommodate one clock source, a broadcast
>> clock event, and four clock events—one per core.
> 
> What I meant is to do something simpler:
> 
> -----------------
> 
> cat /proc/interrupts
> 
>  16:       7891          0          0          0    GICv3  56 Level    stm@...1c000
>  17:          0       5326          0          0    GICv3  57 Level    stm@...20000
>  18:          3          0      16668          0    GICv3  58 Level    stm@...24000
>  19:          0          0          0       5152    GICv3  59 Level    stm@...28000
> 
> ------------------
> 
> cat /sys/devices/system/clockevents/clockevent*/current_device
> 
> stm@...1c000
> stm@...20000
> stm@...24000
> stm@...28000
> 
> ------------------
> 
> cat /sys/devices/system/clocksource/clocksource0/available_clocksource
> 
> stm@...1c000 stm@...20000 stm@...24000 stm@...28000 arch_sys_counter
> 
> 
> 
> On the S32G2: 4 STM instances, each providing one clocksource and one clockevent
> 
> On the S32G3: 8 STM instances, each providing one clocksource and one clockevent
> 
> 
> A specific broadcast timer is not needed on the s32g system.
> 
> 
> The resulting timer driver code is much simpler.
> 

Okay, it makes sense from a complexity standpoint, even though it doubles
the number of STM instances used compared with the two-instance layout I
suggested, while keeping the number of instances aligned with the number
of A53 cores.
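
Just to check that I read the proposed layout correctly, below is roughly
the per-instance registration I would expect. It reuses the struct and
callbacks from the sketches above; the function names, IRQ flags and the
300 rating are placeholders, not necessarily what the driver will end up
with.

#include <linux/clockchips.h>
#include <linux/cpumask.h>
#include <linux/interrupt.h>

static irqreturn_t stm_clkevt_irq_handler(int irq, void *dev_id)
{
        struct stm_clkevt *stm = dev_id;

        /* Ack the channel interrupt; CIR[CIF] is write-1-to-clear */
        writel(STM_CIR_CIF, stm->base + STM_CIR(stm->channel));

        stm->ced.event_handler(&stm->ced);

        return IRQ_HANDLED;
}

/* One STM instance == one clocksource + one clockevent pinned to one CPU */
static int stm_clockevent_per_cpu_init(struct stm_clkevt *stm, const char *name,
                                       int irq, unsigned int cpu,
                                       unsigned long rate)
{
        int ret;

        stm->ced.name = name;
        stm->ced.features = CLOCK_EVT_FEAT_ONESHOT;
        stm->ced.rating = 300;
        stm->ced.set_next_event = stm_clkevt_set_next_event;
        stm->ced.set_state_shutdown = stm_clkevt_shutdown;
        stm->ced.cpumask = cpumask_of(cpu);
        stm->ced.irq = irq;

        ret = request_irq(irq, stm_clkevt_irq_handler,
                          IRQF_TIMER | IRQF_NOBALANCING, name, stm);
        if (ret)
                return ret;

        /* Pin the interrupt to the CPU this instance serves */
        irq_set_affinity(irq, cpumask_of(cpu));

        clockevents_config_and_register(&stm->ced, rate, 1, 0xffffffff);

        return 0;
}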

> 
>> E.g.
>> STM_0
>>     - clocksource (based on CNT)
>>     - broadcast clock event (channel 0)
>>
>> STM_1
>>     - Cortex-A53 0 (channel 0)
>>     - Cortex-A53 1 (channel 1)
>>     - Cortex-A53 2 (channel 2)
>>     - Cortex-A53 3 (channel 3)
>>
> 
> 



-- 
Regards,
Ghennadi

