Date:   Mon, 17 Feb 2020 14:40:42 +0530
From:   Pavan Kondeti <>
To:     Qais Yousef <>
Cc:     Ingo Molnar <>,
        Peter Zijlstra <>,
        Steven Rostedt <>,
        Dietmar Eggemann <>,
        Juri Lelli <>,
        Vincent Guittot <>,
        Ben Segall <>, Mel Gorman <>,
Subject: Re: [PATCH 2/3] sched/rt: allow pulling unfitting task

Hi Qais,

On Fri, Feb 14, 2020 at 04:39:48PM +0000, Qais Yousef wrote:
> When RT Capacity Awareness was implemented, the logic was done such
> that if a task was running on a fitting CPU, it was sticky and we
> would try our best to keep it there.
> But as Steve suggested, to adhere to the strict priority rules of the
> RT class, allow pulling an RT task to an unfitting CPU to ensure it
> gets a chance to run ASAP. When doing so, mark the queue as overloaded
> to give the system a chance to push the task to a better fitting CPU
> when the opportunity arises.
> Suggested-by: Steven Rostedt <>
> Signed-off-by: Qais Yousef <>
> ---
>  kernel/sched/rt.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 4043abe45459..0c8bac134d3a 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1646,10 +1646,20 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
>  static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
>  {
> -	if (!task_running(rq, p) &&
> -	    cpumask_test_cpu(cpu, p->cpus_ptr) &&
> -	    rt_task_fits_capacity(p, cpu))
> +	if (!task_running(rq, p) && cpumask_test_cpu(cpu, p->cpus_ptr)) {
> +
> +		/*
> +		 * If the CPU doesn't fit the task, allow pulling but mark the
> +		 * rq as overloaded so that we can push it again to a more
> +		 * suitable CPU ASAP.
> +		 */
> +		if (!rt_task_fits_capacity(p, cpu)) {
> +			rt_set_overload(rq);
> +			rq->rt.overloaded = 1;
> +		}
> +

Here rq is the source rq from which the task is being pulled. I can't
understand how marking the overload condition on the source rq helps,
because the overload condition gets cleared in the task dequeue path,
i.e. dec_rt_tasks() -> dec_rt_migration() ->

Also, the overload condition with nr_running=1 may not work as expected
unless we track this overload condition (due to unfit) separately,
because a task can be pushed only when it is NOT running. So a task
running on a silver (little) CPU will continue to run there until it
wakes up next time or another higher-priority task gets queued there
(due to affinity).

btw, are you testing this path by disabling the RT_PUSH_IPI feature? I
ask because this feature is turned on by default on our b.L platforms,
and RT task migrations happen by the busy CPU pushing the tasks. Or are
there any cases where we can run into pick_rt_task() even when
RT_PUSH_IPI is enabled?


Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
