Message-ID: <c2f0b869-826e-3d95-159f-5867ccb75a08@codeaurora.org>
Date:   Thu, 28 Mar 2019 12:26:30 +0530
From:   Mukesh Ojha <mojha@...eaurora.org>
To:     Lingutla Chandrasekhar <clingutla@...eaurora.org>,
        gregkh@...uxfoundation.org, quentin.perret@....com,
        sudeep.holla@....com, dietmar.eggemann@....com
Cc:     juri.lelli@...il.com, catalin.marinas@....com,
        jeremy.linton@....com, linux-kernel@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v2] arch_topology: Make cpu_capacity sysfs node as read-only

Thanks for making the suggested change.

Shouldn't this be v3?

Please add proper version details after the "---" marker, describing what
changed in each revision; that makes the patch much easier to review.
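Something along these lines (layout only, not this series' actual history):

---
v2 -> v3:
  - <summary of changes made in response to review>
v1 -> v2:
  - <summary of changes made in response to review>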

Thanks.
Mukesh


On 3/28/2019 10:17 AM, Lingutla Chandrasekhar wrote:
> If a user updates any CPU's cpu_capacity, the new value gets applied to
> all of its online sibling CPUs. That is not always correct: on ARM,
> sibling CPUs (CPUs with the same microarchitecture) can still have
> different cpu_capacity values and different performance characteristics,
> so propagating a user-supplied cpu_capacity to all of a CPU's siblings
> is wrong.
>
> Another problem is that the current code assumes that all CPUs in a
> cluster, i.e. CPUs with the same package_id (core_siblings), have the
> same cpu_capacity. But since commit 5bdd2b3f0f8 ("arm64: topology: add
> support to remove cpu topology sibling masks"), when a CPU is hotplugged
> out, it is cleared from its siblings' topology masks. A user-supplied
> cpu_capacity is therefore applied only to the siblings that are online
> at that moment; if a CPU is hotplugged back in afterwards, it ends up
> with a different cpu_capacity than its siblings, which breaks the
> assumption above.
>
> So, instead of mucking around with the core sibling mask for the
> user-supplied value, use the device tree to set CPU capacity, and make
> the cpu_capacity sysfs node read-only so it simply reports the
> asymmetry between CPUs in the system. While at it, remove the
> cpu_scale_mutex, which was only used to serialize sysfs writes.
>
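For reference, the device-tree route means giving each cpu node a
capacity-dmips-mhz value, which topology_parse_cpu_capacity() (visible at
the end of the diff context below) already reads at boot and later
normalizes so the largest value maps to SCHED_CAPACITY_SCALE. A rough,
simplified sketch of that read path, not the exact kernel code:

#include <linux/of.h>

/*
 * Illustrative only: each cpu node carries a relative capacity in DT,
 * e.g.  cpu@0 { ... capacity-dmips-mhz = <578>; };
 * of_property_read_u32() returns 0 on success and a negative errno if
 * the property is missing or malformed.
 */
static int read_dt_capacity(struct device_node *cpu_node, u32 *cap)
{
	return of_property_read_u32(cpu_node, "capacity-dmips-mhz", cap);
}
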
> Tested-by: Dietmar Eggemann <dietmar.eggemann@....com>
> Tested-by: Quentin Perret <quentin.perret@....com>
> Reviewed-by: Quentin Perret <quentin.perret@....com>
> Acked-by: Sudeep Holla <sudeep.holla@....com>
> Signed-off-by: Lingutla Chandrasekhar <clingutla@...eaurora.org>
>
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index edfcf8d982e4..1739d7e1952a 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -7,7 +7,6 @@
>    */
>   
>   #include <linux/acpi.h>
> -#include <linux/arch_topology.h>
>   #include <linux/cpu.h>
>   #include <linux/cpufreq.h>
>   #include <linux/device.h>
> @@ -31,7 +30,6 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
>   		per_cpu(freq_scale, i) = scale;
>   }
>   
> -static DEFINE_MUTEX(cpu_scale_mutex);
>   DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
>   
>   void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
> @@ -51,37 +49,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
>   static void update_topology_flags_workfn(struct work_struct *work);
>   static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
>   
> -static ssize_t cpu_capacity_store(struct device *dev,
> -				  struct device_attribute *attr,
> -				  const char *buf,
> -				  size_t count)
> -{
> -	struct cpu *cpu = container_of(dev, struct cpu, dev);
> -	int this_cpu = cpu->dev.id;
> -	int i;
> -	unsigned long new_capacity;
> -	ssize_t ret;
> -
> -	if (!count)
> -		return 0;
> -
> -	ret = kstrtoul(buf, 0, &new_capacity);
> -	if (ret)
> -		return ret;
> -	if (new_capacity > SCHED_CAPACITY_SCALE)
> -		return -EINVAL;
> -
> -	mutex_lock(&cpu_scale_mutex);
> -	for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
> -		topology_set_cpu_scale(i, new_capacity);
> -	mutex_unlock(&cpu_scale_mutex);
> -
> -	schedule_work(&update_topology_flags_work);
> -
> -	return count;
> -}
> -
> -static DEVICE_ATTR_RW(cpu_capacity);
> +static DEVICE_ATTR_RO(cpu_capacity);
>   
>   static int register_cpu_capacity_sysctl(void)
>   {
> @@ -141,7 +109,6 @@ void topology_normalize_cpu_scale(void)
>   		return;
>   
>   	pr_debug("cpu_capacity: capacity_scale=%u\n", capacity_scale);
> -	mutex_lock(&cpu_scale_mutex);
>   	for_each_possible_cpu(cpu) {
>   		pr_debug("cpu_capacity: cpu=%d raw_capacity=%u\n",
>   			 cpu, raw_capacity[cpu]);
> @@ -151,7 +118,6 @@ void topology_normalize_cpu_scale(void)
>   		pr_debug("cpu_capacity: CPU%d cpu_capacity=%lu\n",
>   			cpu, topology_get_cpu_scale(NULL, cpu));
>   	}
> -	mutex_unlock(&cpu_scale_mutex);
>   }
>   
>   bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
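For anyone reading along: with the store path gone, the sysfs node boils
down to the show handler referenced in the "@@ -51,37 +49,7" hunk header
plus the read-only attribute. Roughly (a sketch reconstructed from the
surrounding context, not code added by this patch):

#include <linux/arch_topology.h>
#include <linux/cpu.h>
#include <linux/device.h>

static ssize_t cpu_capacity_show(struct device *dev,
				 struct device_attribute *attr,
				 char *buf)
{
	struct cpu *cpu = container_of(dev, struct cpu, dev);

	return sprintf(buf, "%lu\n",
		       topology_get_cpu_scale(NULL, cpu->dev.id));
}

/* DEVICE_ATTR_RO() creates a 0444 node, so userspace can still read
 * /sys/devices/system/cpu/cpuN/cpu_capacity but writes are rejected. */
static DEVICE_ATTR_RO(cpu_capacity);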
