Message-Id: <20220709151353.32883-1-zhouchengming@bytedance.com>
Date:   Sat,  9 Jul 2022 23:13:45 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        vschneid@...hat.com
Cc:     linux-kernel@...r.kernel.org,
        Chengming Zhou <zhouchengming@...edance.com>
Subject: [PATCH 0/8] sched: task load tracking optimization and cleanup

Hi all,

This patch series contains optimizations and cleanups for task load
tracking when a task migrates between CPUs/cgroups or goes through
switched_from/to_fair().

There are three cases (besides fork and exit) in which a fair task goes
through detach/attach_entity_load_avg(), as sketched below:
1. the task migrates to another CPU (on_rq migrate or wake-up migrate)
2. the task migrates to another cgroup (detach, then attach)
3. the task goes through switched_from/to_fair() (detach now, attach later)
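
To make the above concrete, here is a drastically simplified userspace
model of "detach from the old cfs_rq, attach to the new one". The struct
layouts and the plain add/subtract math are my own assumptions for
illustration only; this is not the kernel's real PELT code:

#include <stdio.h>

struct sched_avg {
	unsigned long load_avg;
	unsigned long long last_update_time;	/* 0 means "detached" */
};

struct cfs_rq {
	unsigned long load_avg;			/* sum of attached entities */
};

struct task {
	struct sched_avg avg;
};

static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct task *p)
{
	cfs_rq->load_avg -= p->avg.load_avg;
	p->avg.last_update_time = 0;		/* mark as detached */
}

static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct task *p,
				   unsigned long long now)
{
	cfs_rq->load_avg += p->avg.load_avg;
	p->avg.last_update_time = now;		/* attached again */
}

int main(void)
{
	struct cfs_rq src = { .load_avg = 1024 }, dst = { 0 };
	struct task p = { .avg = { .load_avg = 512, .last_update_time = 100 } };

	/* e.g. case 1: migrate CPU -- detach from src rq, attach to dst rq */
	detach_entity_load_avg(&src, &p);
	attach_entity_load_avg(&dst, &p, 200);

	printf("src=%lu dst=%lu\n", src.load_avg, dst.load_avg);
	return 0;
}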

Patch 1 optimizes the on_rq CPU migration case by combining the detach
into the dequeue path, so we no longer need to call
detach_entity_cfs_rq() in migrate_task_rq_fair().
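
A minimal sketch of that idea, assuming a hypothetical DEQUEUE_DETACH
flag and toy helpers (the flag name and the structures are not the
kernel's actual interface, just an illustration of folding the detach
into the dequeue call):

#include <stdbool.h>
#include <stdio.h>

#define DEQUEUE_SLEEP	0x01
#define DEQUEUE_DETACH	0x02	/* hypothetical: also detach load avg */

struct task {
	bool queued;
	bool load_attached;
};

static void detach_entity_load_avg(struct task *p)
{
	p->load_attached = false;
}

/*
 * Before: dequeue, then a separate detach_entity_cfs_rq() call in
 * migrate_task_rq_fair(). After: the dequeue path does both when the
 * detach flag is passed.
 */
static void dequeue_task_fair(struct task *p, int flags)
{
	p->queued = false;
	if (flags & DEQUEUE_DETACH)
		detach_entity_load_avg(p);
}

int main(void)
{
	struct task p = { .queued = true, .load_attached = true };

	/* on_rq migration: one call does both the dequeue and the detach */
	dequeue_task_fair(&p, DEQUEUE_DETACH);

	printf("queued=%d attached=%d\n", p.queued, p.load_attached);
	return 0;
}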

Patch 3 cleans up the cgroup migration case by removing
cpu_cgrp_subsys->fork(), since we already do the same thing in
sched_cgroup_fork().

Patches 1-3 have been reviewed earlier, but they conflict with the
current tip tree, so they are included here as part of this patchset.
Sorry for the inconvenience.

Patches 5-6 fix load tracking for newly forked !fair tasks and for
tasks going through switched_from_fair().

After these changes, the task's sched_avg last_update_time is reset to
0 when it migrates away from a CPU/cgroup or goes through
switched_from_fair(), so that the up-to-date sched_avg is saved for the
next attach.
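
A hedged sketch of that last_update_time convention: setting it to 0 at
detach time marks the sched_avg as "already up to date and detached",
so the next attach can reuse it as-is instead of decaying it again. The
decay math below is a placeholder stand-in, not the real PELT formula:

#include <stdio.h>

struct sched_avg {
	unsigned long load_avg;
	unsigned long long last_update_time;	/* 0 == detached, do not age */
};

static void decay_load(struct sched_avg *sa, unsigned long long now)
{
	/* placeholder decay: halve the load once per elapsed "period" */
	unsigned long long periods = (now - sa->last_update_time) / 1024;

	while (periods--)
		sa->load_avg >>= 1;
	sa->last_update_time = now;
}

static void attach_task(struct sched_avg *sa, unsigned long long now)
{
	/*
	 * If last_update_time is 0, the avg was saved at detach time and
	 * is attached unchanged; otherwise it has to be aged first.
	 */
	if (sa->last_update_time)
		decay_load(sa, now);
	sa->last_update_time = now;
}

int main(void)
{
	struct sched_avg detached = { .load_avg = 512, .last_update_time = 0 };
	struct sched_avg stale    = { .load_avg = 512, .last_update_time = 1024 };

	attach_task(&detached, 4096);	/* kept at 512 */
	attach_task(&stale, 4096);	/* decayed */

	printf("detached=%lu stale=%lu\n", detached.load_avg, stale.load_avg);
	return 0;
}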

Patches 7-8 delete the now-superfluous set_task_rq_fair() and
SKIP_AGE_LOAD.

Chengming Zhou (8):
  sched/fair: combine detach into dequeue when migrating task
  sched/fair: update comments in enqueue/dequeue_entity()
  sched/fair: remove redundant cpu_cgrp_subsys->fork()
  sched/fair: reset sched_avg last_update_time before set_task_rq()
  sched/fair: fix load tracking for new forked !fair task
  sched/fair: stop load tracking when task switched_from_fair()
  sched/fair: delete superfluous set_task_rq_fair()
  sched/fair: delete superfluous SKIP_AGE_LOAD

 kernel/sched/core.c     |  27 ++------
 kernel/sched/fair.c     | 139 ++++++++++------------------------------
 kernel/sched/features.h |   1 -
 kernel/sched/sched.h    |  14 +---
 4 files changed, 41 insertions(+), 140 deletions(-)

-- 
2.36.1
