Message-ID: <a085b3ce-f34d-1263-10fd-afbc6a2eb6ce@codeaurora.org>
Date:   Mon, 1 Apr 2019 13:47:52 +0530
From:   Mukesh Ojha <mojha@...eaurora.org>
To:     Lingutla Chandrasekhar <clingutla@...eaurora.org>,
        gregkh@...uxfoundation.org, quentin.perret@....com,
        sudeep.holla@....com, dietmar.eggemann@....com,
        juri.lelli@...il.com
Cc:     linux-arm-kernel@...ts.infradead.org, catalin.marinas@....com,
        linux-kernel@...r.kernel.org, jeremy.linton@....com
Subject: Re: [PATCH v3] arch_topology: Make cpu_capacity sysfs node as
 ready-only

s/ready/read/

On 4/1/2019 9:54 AM, Lingutla Chandrasekhar wrote:
> If a user updates any cpu's cpu_capacity, the new value is applied to
> all of its online sibling cpus. This is not always correct, as sibling
> cpus (in ARM, cpus with the same micro-architecture) can have different
> cpu_capacity values due to different performance characteristics. So
> applying the user-supplied cpu_capacity to all cpu siblings is not
> correct.
>
> Another problem is that the current code assumes 'all cpus in a cluster,
> or with the same package_id (core_siblings), have the same cpu_capacity'.
> But since commit '5bdd2b3f0f8 ("arm64: topology: add support to remove
> cpu topology sibling masks")', when a cpu is hotplugged out, its
> information is cleared from its sibling cpus. So a user-supplied
> cpu_capacity is applied only to the sibling cpus that are online at the
> time. If a cpu is later hotplugged back in, it ends up with a different
> cpu_capacity than its siblings, which breaks the above assumption.
>
> So, instead of mucking around with the core sibling mask for a
> user-supplied value, use the device tree to set cpu capacity, and make
> the cpu_capacity sysfs node read-only so that it only reports the
> asymmetry between cpus in the system. While at it, remove the
> cpu_scale_mutex usage, which was only there to protect sysfs writes.
>
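
For clarity, a minimal sketch of what remains after this change, assuming
cpu_capacity_show() in drivers/base/arch_topology.c stays as it is today
(its body is not part of the diff below) and that topology_get_cpu_scale()
keeps the signature visible in the diff context:

static ssize_t cpu_capacity_show(struct device *dev,
				 struct device_attribute *attr,
				 char *buf)
{
	struct cpu *cpu = container_of(dev, struct cpu, dev);

	/* Report the per-cpu scale; with no .store callback the node is 0444. */
	return sprintf(buf, "%lu\n", topology_get_cpu_scale(NULL, cpu->dev.id));
}

/* DEVICE_ATTR_RO() generates dev_attr_cpu_capacity with only a show hook. */
static DEVICE_ATTR_RO(cpu_capacity);
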
> Tested-by: Dietmar Eggemann <dietmar.eggemann@....com>
> Tested-by: Quentin Perret <quentin.perret@....com>
> Reviewed-by: Quentin Perret <quentin.perret@....com>
> Acked-by: Sudeep Holla <sudeep.holla@....com>
> Signed-off-by: Lingutla Chandrasekhar <clingutla@...eaurora.org>
>
> ---
>
> Changes from v2:
>     - Corrected spelling mistakes in commit text.
> Changes from v1:
>     - Removed cpu_scale_mutex usage, suggested by Dietmar Eggemann.
> Changes from v0:
>     - Instead of iterating all possible cpus to update cpu capacity,
>       removed write capability for the sysfs node.
>
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index edfcf8d982e4..1739d7e1952a 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -7,7 +7,6 @@
>    */
>   
>   #include <linux/acpi.h>
> -#include <linux/arch_topology.h>
>   #include <linux/cpu.h>
>   #include <linux/cpufreq.h>
>   #include <linux/device.h>
> @@ -31,7 +30,6 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
>   		per_cpu(freq_scale, i) = scale;
>   }
>   
> -static DEFINE_MUTEX(cpu_scale_mutex);
>   DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
>   
>   void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
> @@ -51,37 +49,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
>   static void update_topology_flags_workfn(struct work_struct *work);
>   static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
>   
> -static ssize_t cpu_capacity_store(struct device *dev,
> -				  struct device_attribute *attr,
> -				  const char *buf,
> -				  size_t count)
> -{
> -	struct cpu *cpu = container_of(dev, struct cpu, dev);
> -	int this_cpu = cpu->dev.id;
> -	int i;
> -	unsigned long new_capacity;
> -	ssize_t ret;
> -
> -	if (!count)
> -		return 0;
> -
> -	ret = kstrtoul(buf, 0, &new_capacity);
> -	if (ret)
> -		return ret;
> -	if (new_capacity > SCHED_CAPACITY_SCALE)
> -		return -EINVAL;
> -
> -	mutex_lock(&cpu_scale_mutex);
> -	for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
> -		topology_set_cpu_scale(i, new_capacity);
> -	mutex_unlock(&cpu_scale_mutex);
> -
> -	schedule_work(&update_topology_flags_work);
> -
> -	return count;
> -}
> -
> -static DEVICE_ATTR_RW(cpu_capacity);
> +static DEVICE_ATTR_RO(cpu_capacity);
>   
>   static int register_cpu_capacity_sysctl(void)
>   {
> @@ -141,7 +109,6 @@ void topology_normalize_cpu_scale(void)
>   		return;
>   
>   	pr_debug("cpu_capacity: capacity_scale=%u\n", capacity_scale);
> -	mutex_lock(&cpu_scale_mutex);
>   	for_each_possible_cpu(cpu) {
>   		pr_debug("cpu_capacity: cpu=%d raw_capacity=%u\n",
>   			 cpu, raw_capacity[cpu]);
> @@ -151,7 +118,6 @@ void topology_normalize_cpu_scale(void)
>   		pr_debug("cpu_capacity: CPU%d cpu_capacity=%lu\n",
>   			cpu, topology_get_cpu_scale(NULL, cpu));
>   	}
> -	mutex_unlock(&cpu_scale_mutex);
>   }
>   
>   bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
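
For reference, a simplified sketch (not part of this patch; the real
topology_parse_cpu_capacity() in drivers/base/arch_topology.c also allocates
raw_capacity[] and tracks the running maximum) of how the device-tree path
supplies capacity: each cpu node's "capacity-dmips-mhz" value is recorded and
later normalized by topology_normalize_cpu_scale() so that the largest value
maps to SCHED_CAPACITY_SCALE:

#include <linux/of.h>

static u32 *raw_capacity;	/* allocated elsewhere, one entry per possible cpu */

bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
{
	u32 capacity;

	/* Property from the DT cpu-capacity binding; if it is absent, every
	 * cpu keeps the default SCHED_CAPACITY_SCALE. */
	if (of_property_read_u32(cpu_node, "capacity-dmips-mhz", &capacity))
		return false;

	raw_capacity[cpu] = capacity;	/* normalized against the max later */
	return true;
}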
