Message-ID: <5f60ad7e-6809-e314-53e5-aa081dbffff5@huawei.com>
Date: Sat, 2 Apr 2022 11:32:54 +0800
From: "zhangsong (J)" <zhangsong34@...wei.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
CC: <peterz@...radead.org>, <mingo@...hat.com>,
<juri.lelli@...hat.com>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<bristot@...hat.com>, <linux-kernel@...r.kernel.org>,
zhangsong <zhangsong34@...il.com>
Subject: Re: [PATCH] sched/fair: Allow non-idle task to preempt idle task directly

On 2022/4/1 21:09, Vincent Guittot wrote:
> On Fri, 1 Apr 2022 at 11:13, zhangsong <zhangsong34@...wei.com> wrote:
>> From: zhangsong <zhangsong34@...il.com>
>>
>> In check_preempt_tick(), the sched idle task may execute for at least
>> `sysctl_sched_min_granularity` of time while no other cfs task can
>> preempt it. So it is necessary to ignore the `sysctl_sched_min_granularity`
>> restriction for sched idle task preemption.
> Could you explain why you need to remove this condition for sched_idle?
> sched_idle tasks are already preempted at wakeup by others. And they
> run while others are runnable only if they have not run for a very long
> time compared to the others. The ideal_runtime of a sched_idle task is
> capped to a 750us minimum to ensure minimum progress. But this will
> happen no more than once every 256ms, and most probably even less often.
Thanks for your reply! I think that a sched idle task is treated as an
offline task, and a sched normal task is treated as an online task. To
reduce the latency of online tasks and the interference from offline
tasks, there is no need to let an offline task occupy any CPU time.
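
To put the 750us / 256ms figures above in perspective, a rough
back-of-the-envelope (assuming the default weights: 3 for a SCHED_IDLE
entity, 1024 for nice-0):

    idle share vs. one nice-0 task:           3 / (3 + 1024)    ~ 0.3%
    vruntime advance for 750us of idle run:   750us * 1024 / 3  = 256ms

So each time the idle entity runs its ~750us minimum slice, its vruntime
jumps ahead by about 256ms, and roughly that much nice-0 runtime has to
pass before it is picked again. The question here is whether even that
~0.3% is acceptable interference for latency-sensitive online tasks.
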
>
>> Signed-off-by: zhangsong <zhangsong34@...il.com>
>> ---
>> kernel/sched/fair.c | 10 +++++++++-
>> 1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index d4bd299d6..edcb33440 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4477,6 +4477,15 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
>> struct sched_entity *se;
>> s64 delta;
>>
>> + se = __pick_first_entity(cfs_rq);
>> +
>> + if ((cfs_rq->last && se_is_idle(cfs_rq->last) - se_is_idle(curr) < 0) ||
>> + (cfs_rq->next && se_is_idle(cfs_rq->next) - se_is_idle(curr) < 0) ||
>> + se_is_idle(se) - se_is_idle(curr) < 0) {
>> + resched_curr(rq_of(cfs_rq));
>> + return;
> Why all these complex conditions?
> if (se_is_idle(curr)) should be enough
>
I think that if se/next/last is not idle and curr is idle, the current
cfs_rq should resched and curr can be preempted by others.
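
For reference, a minimal sketch of the simpler form Vincent suggests,
placed at the top of check_preempt_tick() before the ideal_runtime
check (illustrative only, not the submitted patch; it reuses the
existing se_is_idle() and resched_curr() helpers):

    if (se_is_idle(curr)) {
            /*
             * curr is a SCHED_IDLE entity: let whatever else is
             * runnable preempt it right away instead of waiting for
             * sysctl_sched_min_granularity to elapse.
             */
            resched_curr(rq_of(cfs_rq));
            return;
    }

As far as I can tell, the only corner case is when every runnable entity
is SCHED_IDLE: curr is then rescheduled each tick, which only adds a few
extra switches between idle entities rather than any correctness problem.
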
>> + }
>> +
>> ideal_runtime = sched_slice(cfs_rq, curr);
>> delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
>> if (delta_exec > ideal_runtime) {
>> @@ -4497,7 +4506,6 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
>> if (delta_exec < sysctl_sched_min_granularity)
>> return;
>>
>> - se = __pick_first_entity(cfs_rq);
>> delta = curr->vruntime - se->vruntime;
>>
>> if (delta < 0)
>> --
>> 2.27.0
>>
> .