Open Source and information security mailing list archives
 
Message-ID: <5597644b-267d-40d0-aa33-a8a665cebd70@kernel.org>
Date: Sat, 12 Jul 2025 19:42:46 +0200
From: Krzysztof Kozlowski <krzk@...nel.org>
To: Peter Griffin <peter.griffin@...aro.org>,
 André Draszik <andre.draszik@...aro.org>,
 Tudor Ambarus <tudor.ambarus@...aro.org>,
 Alim Akhtar <alim.akhtar@...sung.com>
Cc: William Mcvicker <willmcvicker@...gle.com>,
 Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>,
 linux-arm-kernel@...ts.infradead.org, linux-samsung-soc@...r.kernel.org,
 linux-kernel@...r.kernel.org, kernel-team@...roid.com, sudeep.holla@....com
Subject: Re: [PATCH v6] soc: samsung: exynos-pmu: Enable CPU Idle for gs101

On 11/07/2025 15:50, Peter Griffin wrote:
>  
>  #include <linux/soc/samsung/exynos-regs-pmu.h>
> @@ -35,6 +37,14 @@ struct exynos_pmu_context {
>  	const struct exynos_pmu_data *pmu_data;
>  	struct regmap *pmureg;
>  	struct regmap *pmuintrgen;
> +	/*
> +	 * Serialization lock for CPU hot plug and cpuidle ACPM hint
> +	 * programming. Also protects the in_hotplug flag.
> +	 */
> +	raw_spinlock_t cpupm_lock;
> +	bool *in_hotplug;

This should be a bitmap - it makes the code more obvious.

> +	atomic_t sys_suspended;
> +	atomic_t sys_rebooting;
>  };
>  
>  void __iomem *pmu_base_addr;
> @@ -221,6 +231,15 @@ static const struct regmap_config regmap_smccfg = {
>  	.reg_read = tensor_sec_reg_read,
>  	.reg_write = tensor_sec_reg_write,
>  	.reg_update_bits = tensor_sec_update_bits,
> +	.use_raw_spinlock = true,
> +};
> +
> +static const struct regmap_config regmap_pmu_intr = {
> +	.name = "pmu_intr_gen",
> +	.reg_bits = 32,
> +	.reg_stride = 4,
> +	.val_bits = 32,
> +	.use_raw_spinlock = true,
>  };
>  
>  static const struct exynos_pmu_data gs101_pmu_data = {
> @@ -330,13 +349,19 @@ struct regmap *exynos_get_pmu_regmap_by_phandle(struct device_node *np,
>  EXPORT_SYMBOL_GPL(exynos_get_pmu_regmap_by_phandle);
>  


...

> +/* Called from CPU PM notifier (CPUIdle code path) with IRQs disabled */
> +static int gs101_cpu_pmu_offline(void)
> +{
> +	int cpu;
> +
> +	raw_spin_lock(&pmu_context->cpupm_lock);
> +	cpu = smp_processor_id();
> +
> +	if (pmu_context->in_hotplug[cpu]) {
> +		raw_spin_unlock(&pmu_context->cpupm_lock);
> +		return NOTIFY_BAD;
> +	}
> +
> +	__gs101_cpu_pmu_offline(cpu);
> +	raw_spin_unlock(&pmu_context->cpupm_lock);
> +
> +	return NOTIFY_OK;
> +}
> +
> +/* Called from CPU hot plug callback with IRQs enabled */
> +static int gs101_cpuhp_pmu_offline(unsigned int cpu)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave(&pmu_context->cpupm_lock, flags);
> +	/*
> +	 * Mark this CPU as entering hotplug. So as not to confuse
> +	 * ACPM the CPU entering hotplug should not enter C2 idle state.
> +	 */
> +	pmu_context->in_hotplug[cpu] = true;
> +	__gs101_cpu_pmu_offline(cpu);
> +
> +	raw_spin_unlock_irqrestore(&pmu_context->cpupm_lock, flags);
> +
> +	return 0;
> +}
> +
> +static int gs101_cpu_pm_notify_callback(struct notifier_block *self,
> +					unsigned long action, void *v)
> +{
> +	switch (action) {
> +	case CPU_PM_ENTER:
> +		/*
> +		 * Ignore CPU_PM_ENTER event in reboot or
> +		 * suspend sequence.
> +		 */
> +
> +		if (atomic_read(&pmu_context->sys_suspended) ||
> +		    atomic_read(&pmu_context->sys_rebooting))

I don't get exactly why you need atomics here. You have no barriers, so
ordering is not preserved (non-RMW atomics are unordered), so perhaps
ordering was not the problem being solved here. But then you don't use
these as RMW at all, and this case is explicitly described in the
atomic_t documentation:

"Therefore, if you find yourself only using the Non-RMW operations of
atomic_t, you do not in fact need atomic_t at all and are doing it wrong."

And it is right. READ_ONCE()/WRITE_ONCE() give you the same.

The question is whether you need ordering or barriers in general
(atomics don't give you these). You do have a control dependency here
(the if/else), but it is immediately followed by gs101_cpu_pmu_offline(),
which takes a spinlock (so a memory barrier).

Basically you should have a comment here explaining why there is no
barrier - that you rely on the barrier from the spinlock taken in the
subsequent calls.

And if my reasoning is correct, then you should use just
READ_ONCE()/WRITE_ONCE().


> +			return NOTIFY_OK;
> +
> +		return gs101_cpu_pmu_offline();
> +
> +	case CPU_PM_EXIT:
> +
> +		if (atomic_read(&pmu_context->sys_rebooting))
> +			return NOTIFY_OK;
> +
> +		return gs101_cpu_pmu_online();
> +	}
> +
> +	return NOTIFY_OK;
> +}

The rest looked fine.


Best regards,
Krzysztof
