Message-ID: <CADjb_WTYPqenF_BhuiDyLduxpaHWCigg-jxAE3FYKTNkWvVz=Q@mail.gmail.com>
Date: Wed, 4 May 2022 19:14:52 +0800
From: Chen Yu <yu.chen.surf@...il.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
parth@...ux.ibm.com, qais.yousef@....com, chris.hyser@...cle.com,
pkondeti@...eaurora.org,
Valentin Schneider <valentin.schneider@....com>,
patrick.bellasi@...bug.net, David.Laight@...lab.com,
Paul Turner <pjt@...gle.com>, Pavel Machek <pavel@....cz>,
tj@...nel.org, dhaval.giani@...cle.com, qperret@...gle.com,
Tim Chen <tim.c.chen@...ux.intel.com>,
Chen Yu <yu.c.chen@...el.com>
Subject: Re: [RFC 5/6] sched/fair: Take into account latency nice at wakeup
On Sat, Mar 12, 2022 at 7:11 AM Vincent Guittot
<vincent.guittot@...aro.org> wrote:
>
> Take into account the nice latency priority of a thread when deciding to
> preempt the current running thread. We don't want to provide more CPU
> bandwidth to a thread but reorder the scheduling to run latency sensitive
> task first whenever possible.
>
---------->8-------------------
> #endif /* CONFIG_SMP */
>
> +static long wakeup_latency_gran(int latency_weight)
> +{
> + long thresh = sysctl_sched_latency;
If I understand correctly, this considers the latency weight to 'shrink/expand'
the current task's time slice and thus facilitate preemption. May I know why we
use sysctl_sched_latency directly instead of __sched_period()? Is it possible
for the rq to have more than sched_nr_latency (8) tasks, so that the period is
longer than sysctl_sched_latency?
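As a rough illustration of the concern, here is a small userspace sketch of my
own (not the patch itself): the sysctl values below are the unscaled base
defaults, and NICE_LATENCY_SHIFT/NICE_LATENCY_WEIGHT_MAX are assumed from this
series. With 16 runnable tasks the period grows past sysctl_sched_latency,
while the gran is still clamped against sysctl_sched_latency only:

/*
 * Userspace sketch comparing __sched_period() against the clamp range
 * used by wakeup_latency_gran() when the rq has more than
 * sched_nr_latency runnable tasks.
 */
#include <stdio.h>

#define NSEC_PER_MSEC	1000000UL

static unsigned long sysctl_sched_latency         = 6 * NSEC_PER_MSEC;
static unsigned long sysctl_sched_min_granularity = 750000UL; /* 0.75 ms */
static unsigned long sched_nr_latency             = 8;

#define NICE_LATENCY_SHIFT	10		/* assumed value */
#define NICE_LATENCY_WEIGHT_MAX	(1 << NICE_LATENCY_SHIFT)

/* same shape as __sched_period() in kernel/sched/fair.c */
static unsigned long __sched_period(unsigned long nr_running)
{
	if (nr_running > sched_nr_latency)
		return nr_running * sysctl_sched_min_granularity;
	return sysctl_sched_latency;
}

/* mirrors the quoted hunk, with GENTLE_FAIR_SLEEPERS taken as enabled */
static long wakeup_latency_gran(int latency_weight)
{
	long thresh = sysctl_sched_latency;

	if (!latency_weight)
		return 0;

	thresh >>= 1;

	if (latency_weight < -NICE_LATENCY_WEIGHT_MAX)
		latency_weight = -NICE_LATENCY_WEIGHT_MAX;
	if (latency_weight > NICE_LATENCY_WEIGHT_MAX)
		latency_weight = NICE_LATENCY_WEIGHT_MAX;

	return (thresh * latency_weight) >> NICE_LATENCY_SHIFT;
}

int main(void)
{
	/* 16 tasks: period = 16 * 0.75 ms = 12 ms > sysctl_sched_latency */
	printf("period(16 tasks) = %lu ns\n", __sched_period(16));
	/* largest gran = sysctl_sched_latency / 2 = 3 ms */
	printf("max gran         = %ld ns\n",
	       wakeup_latency_gran(NICE_LATENCY_WEIGHT_MAX));
	return 0;
}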
Thanks,
Chenyu
> +
> + if (!latency_weight)
> + return 0;
> +
> + if (sched_feat(GENTLE_FAIR_SLEEPERS))
> + thresh >>= 1;
> +
> + /*
> + * Clamp the delta to stay in the scheduler period range
> + * [-sysctl_sched_latency:sysctl_sched_latency]
> + */
> + latency_weight = clamp_t(long, latency_weight,
> + -1 * NICE_LATENCY_WEIGHT_MAX,
> + NICE_LATENCY_WEIGHT_MAX);
> +
> + return (thresh * latency_weight) >> NICE_LATENCY_SHIFT;
> +}
> +