Message-ID: <20170712131336.wbfefpnoj6ygzg7n@e106622-lin>
Date:   Wed, 12 Jul 2017 14:13:36 +0100
From:   Juri Lelli <juri.lelli@....com>
To:     Byungchul Park <byungchul.park@....com>
Cc:     peterz@...radead.org, mingo@...nel.org,
        linux-kernel@...r.kernel.org, juri.lelli@...il.com,
        rostedt@...dmis.org, bristot@...hat.com, kernel-team@....com
Subject: Re: [PATCH v5 1/4] sched/deadline: Make find_later_rq() choose a
 closer cpu in topology

Hi,

On 23/05/17 11:00, Byungchul Park wrote:
> When cpudl_find() returns any cpu among free_cpus, that cpu might not
> be the closest to this_cpu in the sched domain topology. For example:
> 
>    this_cpu: 15
>    free_cpus: 0, 1,..., 14 (== later_mask)
>    best_cpu: 0
> 
>    topology:
> 
>    0 --+
>        +--+
>    1 --+  |
>           +-- ... --+
>    2 --+  |         |
>        +--+         |
>    3 --+            |
> 
>    ...             ...
> 
>    12 --+           |
>         +--+        |
>    13 --+  |        |
>            +-- ... -+
>    14 --+  |
>         +--+
>    15 --+
> 
> In this case, it would be best to select 14, since it's a free cpu and
> the closest to 15 (this_cpu). However, the code currently selects
> 0 (best_cpu), which is merely an arbitrary member of free_cpus,
> without considering the topology. Fix it.
> 
> Signed-off-by: Byungchul Park <byungchul.park@....com>

The patch looks essentially all right to me: makes sense and it aligns
behavior with RT.

However...

> ---
>  kernel/sched/deadline.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
> 
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index a2ce590..9d997d9 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1324,7 +1324,7 @@ static int find_later_rq(struct task_struct *task)
>  	struct sched_domain *sd;
>  	struct cpumask *later_mask = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
>  	int this_cpu = smp_processor_id();
> -	int best_cpu, cpu = task_cpu(task);
> +	int cpu = task_cpu(task);
>  
>  	/* Make sure the mask is initialized first */
>  	if (unlikely(!later_mask))
> @@ -1337,17 +1337,14 @@ static int find_later_rq(struct task_struct *task)
>  	 * We have to consider system topology and task affinity
>  	 * first, then we can look for a suitable cpu.
>  	 */
> -	best_cpu = cpudl_find(&task_rq(task)->rd->cpudl,
> -			task, later_mask);
> -	if (best_cpu == -1)
> +	if (cpudl_find(&task_rq(task)->rd->cpudl, task, later_mask) == -1)
>  		return -1;
>  
>  	/*
> -	 * If we are here, some target has been found,
> -	 * the most suitable of which is cached in best_cpu.
> -	 * This is, among the runqueues where the current tasks
> -	 * have later deadlines than the task's one, the rq
> -	 * with the latest possible one.
> +	 * If we are here, some targets have been found, including
> +	 * the most suitable which is, among the runqueues where the
> +	 * current tasks have later deadlines than the task's one, the
> +	 * rq with the latest possible one.
>  	 *
>  	 * Now we check how well this matches with task's
>  	 * affinity and system topology.
> @@ -1367,6 +1364,7 @@ static int find_later_rq(struct task_struct *task)
>  	rcu_read_lock();
>  	for_each_domain(cpu, sd) {
>  		if (sd->flags & SD_WAKE_AFFINE) {

This is orthogonal to the proposed change, but I'm wondering if it makes
sense to do the following only for SD_WAKE_AFFINE domains. The same
consideration applies to RT as well, actually. Also, find_later_rq is
also called when trying to push tasks away, and in that case checking
for this flag seems inappropriate? Peter, Steve?

Thanks,

- Juri
