Message-ID: <20230713132306.GA13342@lorien.usersys.redhat.com>
Date:   Thu, 13 Jul 2023 09:23:06 -0400
From:   Phil Auld <pauld@...hat.com>
To:     Benjamin Segall <bsegall@...gle.com>
Cc:     linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
        Ingo Molnar <mingo@...hat.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Valentin Schneider <vschneid@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mel Gorman <mgorman@...e.de>,
        Frederic Weisbecker <frederic@...nel.org>,
        Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v2 1/2] sched, cgroup: Restore meaning to
 hierarchical_quota

On Wed, Jul 12, 2023 at 03:09:31PM -0700 Benjamin Segall wrote:
> Phil Auld <pauld@...hat.com> writes:
> 
> > In cgroupv2, cfs_b->hierarchical_quota is set to -1 for all task
> > groups because the previous fix simply takes the min.  It should
> > reflect a limit imposed at that level or by an ancestor. Even
> > though cgroupv2 does not require a child's quota to be less than
> > or equal to that of its ancestors, the task group will still be
> > constrained by such a quota, so this should be shown here.
> > Cgroupv1 continues to set this correctly.
> >
> > In both cases, add initialization when a new task group is created
> > based on the current parent's value (or RUNTIME_INF in the case of
> > root_task_group). Otherwise, the field is wrong until a quota is
> > changed after creation and __cfs_schedulable() is called.
> >
> > Fixes: c53593e5cb69 ("sched, cgroup: Don't reject lower cpu.max on ancestors")
> > Signed-off-by: Phil Auld <pauld@...hat.com>
> > Reviewed-by: Ben Segall <bsegall@...gle.com>
> > Cc: Ingo Molnar <mingo@...hat.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Vincent Guittot <vincent.guittot@...aro.org>
> > Cc: Juri Lelli <juri.lelli@...hat.com>
> > Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> > Cc: Valentin Schneider <vschneid@...hat.com>
> > Cc: Ben Segall <bsegall@...gle.com>
> > Cc: Frederic Weisbecker <frederic@...nel.org>
> > Cc: Tejun Heo <tj@...nel.org>
> > ---
> >
> > v2: Improve comment about how setting hierarchical_quota correctly
> >     helps the scheduler. Remove extra parens.
> >
> >  kernel/sched/core.c  | 13 +++++++++----
> >  kernel/sched/fair.c  |  7 ++++---
> >  kernel/sched/sched.h |  2 +-
> >  3 files changed, 14 insertions(+), 8 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index a68d1276bab0..f80697a79baf 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -9904,7 +9904,7 @@ void __init sched_init(void)
> >  		ptr += nr_cpu_ids * sizeof(void **);
> >  
> >  		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> > -		init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> > +		init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL);
> >  #endif /* CONFIG_FAIR_GROUP_SCHED */
> >  #ifdef CONFIG_RT_GROUP_SCHED
> >  		root_task_group.rt_se = (struct sched_rt_entity **)ptr;
> > @@ -11038,11 +11038,16 @@ static int tg_cfs_schedulable_down(struct task_group *tg, void *data)
> >  
> >  		/*
> >  		 * Ensure max(child_quota) <= parent_quota.  On cgroup2,
> > -		 * always take the min.  On cgroup1, only inherit when no
> > -		 * limit is set:
> > +		 * always take the non-RUNTIME_INF min.  On cgroup1, only
> > +		 * inherit when no limit is set. In cgroup2 this is used
> > +		 * by the scheduler to determine if a given CFS task has a
> > +		 * bandwidth constraint at some higher level.
> >  		 */
> 
> It's still used for determining this on cgroup1 (and the cgroup1 code
> still works for that), right?
>

It would, except that on cgroup1 the enforcement of child quota <=
parent quota means that cfs_rq->runtime_enabled will be set, and we'll
hit that first.  So we don't really use hierarchical_quota for this
determination on cgroup1.

But I could generalize that comment if you want.
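
In case it's useful, here is a quick userspace model of the three
propagation rules (illustration only, not kernel code; min_ll() and
the function names are mine, and RUNTIME_INF is modeled as -1, which
is what the signed min() sees and why cgroup2 groups all ended up
at -1):

#include <stdio.h>

#define RUNTIME_INF (-1LL)

static long long min_ll(long long a, long long b)
{
	return a < b ? a : b;
}

/* Old cgroup2 rule: plain min().  RUNTIME_INF (-1) always wins,
 * so hierarchical_quota ends up -1 for every group. */
static long long old_cgroup2(long long quota, long long parent_quota)
{
	return min_ll(quota, parent_quota);
}

/* New cgroup2 rule: min over the non-RUNTIME_INF values only. */
static long long new_cgroup2(long long quota, long long parent_quota)
{
	if (quota == RUNTIME_INF)
		return parent_quota;
	if (parent_quota != RUNTIME_INF)
		return min_ll(quota, parent_quota);
	return quota;
}

/* cgroup1 rule (unchanged): inherit only when no limit is set;
 * a child quota above the parent's is rejected elsewhere. */
static long long cgroup1(long long quota, long long parent_quota)
{
	if (quota == RUNTIME_INF)
		return parent_quota;
	return quota;
}

int main(void)
{
	/* parent limited to 100000, child unlimited */
	printf("old cgroup2: %lld\n", old_cgroup2(RUNTIME_INF, 100000LL));
	printf("new cgroup2: %lld\n", new_cgroup2(RUNTIME_INF, 100000LL));
	printf("cgroup1:     %lld\n", cgroup1(RUNTIME_INF, 100000LL));
	return 0;
}

With a limited parent and an unlimited child, the old cgroup2 rule
prints -1 while the new one prints 100000, i.e. the ancestor's limit
now shows through as the commit message describes.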


Thanks,
Phil


> >  		if (cgroup_subsys_on_dfl(cpu_cgrp_subsys)) {
> > -			quota = min(quota, parent_quota);
> > +			if (quota == RUNTIME_INF)
> > +				quota = parent_quota;
> > +			else if (parent_quota != RUNTIME_INF)
> > +				quota = min(quota, parent_quota);
> >  		} else {
> >  			if (quota == RUNTIME_INF)
> >  				quota = parent_quota;
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 373ff5f55884..d9b3d4617e16 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6005,13 +6005,14 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
> >  	return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
> >  }
> >  
> > -void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
> > +void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b, struct cfs_bandwidth *parent)
> >  {
> >  	raw_spin_lock_init(&cfs_b->lock);
> >  	cfs_b->runtime = 0;
> >  	cfs_b->quota = RUNTIME_INF;
> >  	cfs_b->period = ns_to_ktime(default_cfs_period());
> >  	cfs_b->burst = 0;
> > +	cfs_b->hierarchical_quota = parent ? parent->hierarchical_quota : RUNTIME_INF;
> >  
> >  	INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq);
> >  	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
> > @@ -6168,7 +6169,7 @@ static inline int throttled_lb_pair(struct task_group *tg,
> >  	return 0;
> >  }
> >  
> > -void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
> > +void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b, struct cfs_bandwidth *parent) {}
> >  
> >  #ifdef CONFIG_FAIR_GROUP_SCHED
> >  static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq) {}
> > @@ -12373,7 +12374,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
> >  
> >  	tg->shares = NICE_0_LOAD;
> >  
> > -	init_cfs_bandwidth(tg_cfs_bandwidth(tg));
> > +	init_cfs_bandwidth(tg_cfs_bandwidth(tg), tg_cfs_bandwidth(parent));
> >  
> >  	for_each_possible_cpu(i) {
> >  		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index ec7b3e0a2b20..63822c9238cc 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -460,7 +460,7 @@ extern void unregister_fair_sched_group(struct task_group *tg);
> >  extern void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
> >  			struct sched_entity *se, int cpu,
> >  			struct sched_entity *parent);
> > -extern void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
> > +extern void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b, struct cfs_bandwidth *parent);
> >  
> >  extern void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b);
> >  extern void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
> 
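
As an aside, the init-time seeding quoted above is easy to model the
same way.  A toy version of the new init_cfs_bandwidth() behavior
(struct and field names here just mirror struct cfs_bandwidth for
illustration; this is not the real kernel type):

#include <stdio.h>

#define RUNTIME_INF (-1LL)

struct cfs_bandwidth_model {
	long long quota;
	long long hierarchical_quota;
};

static void init_model(struct cfs_bandwidth_model *cfs_b,
		       struct cfs_bandwidth_model *parent)
{
	cfs_b->quota = RUNTIME_INF;
	/* the new seeding: inherit the parent's value, or
	 * RUNTIME_INF for the root_task_group case */
	cfs_b->hierarchical_quota =
		parent ? parent->hierarchical_quota : RUNTIME_INF;
}

int main(void)
{
	struct cfs_bandwidth_model root, limited, child;

	init_model(&root, NULL);
	init_model(&limited, &root);
	limited.quota = 100000LL;
	limited.hierarchical_quota = 100000LL; /* as __cfs_schedulable() would leave it */
	init_model(&child, &limited);

	/* prints 100000: a freshly created child reflects its
	 * ancestor's limit right away, instead of being wrong until
	 * a quota is changed and __cfs_schedulable() runs. */
	printf("child hierarchical_quota: %lld\n",
	       child.hierarchical_quota);
	return 0;
}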

-- 
