Date: Sun, 19 Mar 2023 15:56:43 +0800
From: wuchi <wuchi.zero@...il.com>
To: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com, vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH] sched/core: Reduce cost of sched_move_task when config autogroup

Some sched_move_task calls for autogroup are useless when
task_struct->sched_task_group is not changed, because the task_group of
the cpu cgroup is overlaid by the task_group of autogroup. The key code
paths for the overlay are as follows:

  sched_cgroup_fork->autogroup_task_group->task_wants_autogroup
  sched_move_task->sched_change_group->autogroup_task_group

e.g.: task A belongs to cpu_cgroup0 and autogroup0; it will always stay
in cpu_cgroup0 when doing exit, so there is no need to {de|en}queue.
The call graph is as follows:

  do_exit
    sched_autogroup_exit_task
      sched_move_task
        dequeue_task
        sched_change_group
          A.sched_task_group = sched_get_task_group
        enqueue_task

So do some checking before dequeuing the task in sched_move_task.
Signed-off-by: wuchi <wuchi.zero@...il.com>
---
 kernel/sched/core.c | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a380f34789a2..acc9a0e391f4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10330,7 +10330,7 @@ void sched_release_group(struct task_group *tg)
 	spin_unlock_irqrestore(&task_group_lock, flags);
 }
 
-static void sched_change_group(struct task_struct *tsk)
+static struct task_group *sched_get_task_group(struct task_struct *tsk)
 {
 	struct task_group *tg;
 
@@ -10342,7 +10342,28 @@ static void sched_change_group(struct task_struct *tsk)
 	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
 			  struct task_group, css);
 	tg = autogroup_task_group(tsk, tg);
-	tsk->sched_task_group = tg;
+
+	return tg;
+}
+
+static bool sched_task_group_changed(struct task_struct *tsk)
+{
+	/*
+	 * Some sched_move_task calls for autogroup are useless when
+	 * task_struct->sched_task_group is not changed, because the task_group
+	 * of the cpu cgroup is overlaid by that of autogroup. So do some
+	 * checking before dequeuing the task in sched_move_task.
+	 */
+#ifdef CONFIG_SCHED_AUTOGROUP
+	return sched_get_task_group(tsk) != tsk->sched_task_group;
+#else
+	return true;
+#endif /* CONFIG_SCHED_AUTOGROUP */
+}
+
+static void sched_change_group(struct task_struct *tsk)
+{
+	tsk->sched_task_group = sched_get_task_group(tsk);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (tsk->sched_class->task_change_group)
@@ -10369,6 +10390,9 @@ void sched_move_task(struct task_struct *tsk)
 	rq = task_rq_lock(tsk, &rf);
 	update_rq_clock(rq);
 
+	if (!sched_task_group_changed(tsk))
+		goto unlock;
+
 	running = task_current(rq, tsk);
 	queued = task_on_rq_queued(tsk);
 
@@ -10391,6 +10415,7 @@ void sched_move_task(struct task_struct *tsk)
 		resched_curr(rq);
 	}
 
+unlock:
 	task_rq_unlock(rq, tsk, &rf);
 }
-- 
2.20.1