Message-ID: <BYAPR02MB44888C0FE86220485A38363694819@BYAPR02MB4488.namprd02.prod.outlook.com>
Date:   Tue, 5 Jul 2022 23:49:50 +0000
From:   David Chen <david.chen@...anix.com>
To:     Zhang Qiao <zhangqiao22@...wei.com>,
        Vincent Guittot <vincent.guittot@...aro.org>
CC:     "mingo@...hat.com" <mingo@...hat.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "juri.lelli@...hat.com" <juri.lelli@...hat.com>,
        "dietmar.eggemann@....com" <dietmar.eggemann@....com>,
        "rostedt@...dmis.org" <rostedt@...dmis.org>,
        "bsegall@...gle.com" <bsegall@...gle.com>,
        "mgorman@...e.de" <mgorman@...e.de>,
        "bristot@...hat.com" <bristot@...hat.com>,
        "vschneid@...hat.com" <vschneid@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] sched/fair: fix case with reduced capacity CPU



> -----Original Message-----
> From: Zhang Qiao <zhangqiao22@...wei.com>
> Sent: Tuesday, July 5, 2022 1:23 AM
> To: Vincent Guittot <vincent.guittot@...aro.org>
> Cc: mingo@...hat.com; peterz@...radead.org; juri.lelli@...hat.com; dietmar.eggemann@....com; rostedt@...dmis.org;
> bsegall@...gle.com; mgorman@...e.de; bristot@...hat.com; vschneid@...hat.com; linux-kernel@...r.kernel.org; David Chen
> <david.chen@...anix.com>
> Subject: Re: [PATCH] sched/fair: fix case with reduced capacity CPU
> 
> 
> 
> > On 2022/7/2 12:52, Vincent Guittot wrote:
> > > The capacity available to CFS tasks on a CPU can be reduced because of
> > > other activities running on it. In such a case, it's worth trying to
> > > move CFS tasks to a CPU with more available capacity.
> >
> > > The rework of the load balance has filtered out the case where the CPU
> > > is classified as fully busy but its capacity is reduced.
> >
> > > Check whether the CPU's capacity is reduced while gathering load balance
> > > statistics, and classify it as group_misfit_task instead of
> > > group_fully_busy so we can try to move the load to another CPU.
> >
> > Reported-by: David Chen <david.chen@...anix.com>
> > Reported-by: Zhang Qiao <zhangqiao22@...wei.com>
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> >
> > David, Zhang,
> >
> > I haven't added your Tested-by because I have reworked and cleaned up the
> > patch to cover more cases.
> >
> > Could you run some tests with this version?
> 
> I tested with this version, and it is OK.
> 
> Tested-by: Zhang Qiao <zhangqiao22@...wei.com>
> 
> Thanks

This version works fine for me.
Tested-by: David Chen <david.chen@...anix.com>

Thanks

> 
> >
> > Thanks
> >
> >  kernel/sched/fair.c | 50 ++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 40 insertions(+), 10 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index a78d2e3b9d49..126b82ef4279 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8798,6 +8798,19 @@ sched_asym(struct lb_env *env, struct sd_lb_stats *sds,  struct sg_lb_stats *sgs
> >  	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
> >  }
> >
> > +static inline bool
> > +sched_reduced_capacity(struct rq *rq, struct sched_domain *sd)
> > +{
> > +	/*
> > +	 * When there is more than 1 task, the group_overloaded case already
> > +	 * takes care of cpu with reduced capacity
> > +	 */
> > +	if (rq->cfs.h_nr_running != 1)
> > +		return false;
> > +
> > +	return check_cpu_capacity(rq, sd);
> > +}
> > +
> >  /**
> >   * update_sg_lb_stats - Update sched_group's statistics for load balancing.
> >   * @env: The load balancing environment.
> > @@ -8820,8 +8833,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> >
> >  	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
> >  		struct rq *rq = cpu_rq(i);
> > +		unsigned long load = cpu_load(rq);
> >
> > -		sgs->group_load += cpu_load(rq);
> > +		sgs->group_load += load;
> >  		sgs->group_util += cpu_util_cfs(i);
> >  		sgs->group_runnable += cpu_runnable(rq);
> >  		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
> > @@ -8851,11 +8865,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> >  		if (local_group)
> >  			continue;
> >
> > -		/* Check for a misfit task on the cpu */
> > -		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
> > -		    sgs->group_misfit_task_load < rq->misfit_task_load) {
> > -			sgs->group_misfit_task_load = rq->misfit_task_load;
> > -			*sg_status |= SG_OVERLOAD;
> > +		if (env->sd->flags & SD_ASYM_CPUCAPACITY) {
> > +			/* Check for a misfit task on the cpu */
> > +			if (sgs->group_misfit_task_load < rq->misfit_task_load) {
> > +				sgs->group_misfit_task_load = rq->misfit_task_load;
> > +				*sg_status |= SG_OVERLOAD;
> > +			}
> > +		} else if ((env->idle != CPU_NOT_IDLE) &&
> > +			   sched_reduced_capacity(rq, env->sd) &&
> > +			   (sgs->group_misfit_task_load < load)) {
> > +			/* Check for a task running on a CPU with reduced capacity */
> > +			sgs->group_misfit_task_load = load;
> >  		}
> >  	}
> >
> > @@ -8908,7 +8928,8 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> >  	 * CPUs in the group should either be possible to resolve
> >  	 * internally or be covered by avg_load imbalance (eventually).
> >  	 */
> > -	if (sgs->group_type == group_misfit_task &&
> > +	if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
> > +	    (sgs->group_type == group_misfit_task) &&
> >  	    (!capacity_greater(capacity_of(env->dst_cpu), sg->sgc->max_capacity) ||
> >  	     sds->local_stat.group_type != group_has_spare))
> >  		return false;
> > @@ -9517,9 +9538,18 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> >  	busiest = &sds->busiest_stat;
> >
> >  	if (busiest->group_type == group_misfit_task) {
> > -		/* Set imbalance to allow misfit tasks to be balanced. */
> > -		env->migration_type = migrate_misfit;
> > -		env->imbalance = 1;
> > +		if (env->sd->flags & SD_ASYM_CPUCAPACITY) {
> > +			/* Set imbalance to allow misfit tasks to be balanced. */
> > +			env->migration_type = migrate_misfit;
> > +			env->imbalance = 1;
> > +		} else {
> > +			/*
> > +			 * Set load imbalance to allow moving task from cpu
> > +			 * with reduced capacity
> > +			 */
> > +			env->migration_type = migrate_load;
> > +			env->imbalance = busiest->group_misfit_task_load;
> > +		}
> >  		return;
> >  	}
> >
> >
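
For readers less familiar with this part of the scheduler: the reduced-capacity
condition the patch relies on is check_cpu_capacity() in kernel/sched/fair.c,
which flags a CPU when the capacity left for CFS drops below its original
capacity scaled by the domain's imbalance_pct. The snippet below is only a
self-contained illustration of that comparison, not the kernel code; the field
names mirror the kernel's, but the struct layouts, the 117 imbalance_pct and
the capacity values are made-up example numbers.

/*
 * Illustrative, standalone sketch of the reduced-capacity check the patch
 * builds on (modeled after check_cpu_capacity()); values are examples only.
 */
#include <stdbool.h>
#include <stdio.h>

struct rq_model {
	unsigned long cpu_capacity;		/* capacity currently left for CFS */
	unsigned long cpu_capacity_orig;	/* full capacity of the CPU */
};

struct sd_model {
	unsigned int imbalance_pct;		/* e.g. 117 in some mainline domains */
};

/* True when CFS capacity is noticeably below the CPU's full capacity. */
static bool reduced_capacity(const struct rq_model *rq,
			     const struct sd_model *sd)
{
	return (rq->cpu_capacity * sd->imbalance_pct) <
	       (rq->cpu_capacity_orig * 100);
}

int main(void)
{
	struct sd_model sd = { .imbalance_pct = 117 };
	/* RT/IRQ pressure has eaten roughly 20% of this CPU's capacity. */
	struct rq_model rq = { .cpu_capacity = 820, .cpu_capacity_orig = 1024 };

	printf("capacity reduced: %d\n", reduced_capacity(&rq, &sd));	/* prints 1 */
	return 0;
}

With the patch, a group whose single-task CPU trips this check is classified
group_misfit_task on non-asymmetric topologies (instead of group_fully_busy),
so calculate_imbalance() can set a load imbalance and pull the task to a CPU
with more available capacity.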
