Message-ID: <20230426140324.GB1377058@hirez.programming.kicks-ass.net>
Date: Wed, 26 Apr 2023 16:03:24 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Chen Yu <yu.c.chen@...el.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Tim Chen <tim.c.chen@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Abel Wu <wuyun.abel@...edance.com>,
Yicong Yang <yangyicong@...ilicon.com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Honglei Wang <wanghonglei@...ichuxing.com>,
Len Brown <len.brown@...el.com>,
Chen Yu <yu.chen.surf@...il.com>,
Tianchen Ding <dtcccc@...ux.alibaba.com>,
Joel Fernandes <joel@...lfernandes.org>,
Josh Don <joshdon@...gle.com>, Hillf Danton <hdanton@...a.com>,
kernel test robot <yujie.liu@...el.com>,
Arjan Van De Ven <arjan.van.de.ven@...el.com>,
Aaron Lu <aaron.lu@...el.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 2/2] sched/fair: Introduce SIS_CURRENT to wake up
short task on current CPU
On Sat, Apr 22, 2023 at 12:08:18AM +0800, Chen Yu wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4af5799b90fc..46c1321c0407 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6501,6 +6501,46 @@ static int wake_wide(struct task_struct *p)
> return 1;
> }
>
> +/*
> + * Wake up the task on current CPU, if the following conditions are met:
> + *
> + * 1. waker A is the only running task on this_cpu
> + * 2. A is a short-duration task (the waker will fall asleep soon)
> + * 3. wakee B is a short-duration task (B's impact on A is minor)
> + * 4. A and B wake up each other alternately
> + */
> +static bool
> +wake_on_current(int this_cpu, struct task_struct *p)
> +{
> + if (!sched_feat(SIS_CURRENT))
> + return false;
> +
> + if (cpu_rq(this_cpu)->nr_running > 1)
> + return false;
> +
> + /*
> + * If a task switches in and then voluntarily relinquishes the
> + * CPU quickly, it is regarded as a short-duration task. A short
> + * waker is likely to relinquish the CPU soon, which provides
> + * room for the wakee; meanwhile, a short wakee has only a minor
> + * impact on the target rq. Putting the short waker and wakee
> + * together benefits cache-sharing task pairs and avoids
> + * migration overhead.
> + */
> + if (!current->se.dur_avg || ((current->se.dur_avg * 8) >= sysctl_sched_min_granularity))
> + return false;
> +
> + if (!p->se.dur_avg || ((p->se.dur_avg * 8) >= sysctl_sched_min_granularity))
> + return false;
> +
> + if (current->wakee_flips || p->wakee_flips)
> + return false;
> +
> + if (current->last_wakee != p || p->last_wakee != current)
> + return false;
> +
> + return true;
> +}
So I was going to play with this and found I needed to change things up,
since these sysctls no longer exist in my EEVDF branch.

And while I can easily do
's/sysctl_sched_min_granularity/sysctl_sched_base_slice/', it did make
me wonder if that's the right value to use.
min_gran/base_slice is related to how long we want a task to run before
switching, but that is not related to how long it needs to run to
establish a cache footprint.

Would not sched_migration_cost be a better measure to compare against?
That is also used in task_hot() to prevent migrations.