Date:   Fri, 24 Jan 2020 09:51:25 +0000
From:   Quentin Perret <qperret@...gle.com>
To:     Qais Yousef <qais.yousef@....com>
Cc:     Wei Wang <wvw@...gle.com>, wei.vince.wang@...il.com,
        dietmar.eggemann@....com, valentin.schneider@....com,
        chris.redpath@....com, Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [RFC] sched: restrict iowait boost for boosted task only

On Friday 24 Jan 2020 at 02:52:39 (+0000), Qais Yousef wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index ba749f579714..221c0fbcea0e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5220,8 +5220,10 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >  	 * If in_iowait is set, the code below may not trigger any cpufreq
> >  	 * utilization updates, so do it here explicitly with the IOWAIT flag
> >  	 * passed.
> > +	 * If CONFIG_ENERGY_MODEL and CONFIG_UCLAMP_TASK are both configured,
> > +	 * only boost for tasks set with non-null min-clamp.
> >  	 */
> > -	if (p->in_iowait)
> > +	if (iowait_boosted(p))
> >  		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
> >  
> >  	for_each_sched_entity(se) {
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 280a3c735935..a13d49d835ed 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2341,6 +2341,18 @@ static inline unsigned int uclamp_util(struct rq *rq, unsigned int util)
> >  }
> >  #endif /* CONFIG_UCLAMP_TASK */
> >  
> > +#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_UCLAMP_TASK)
> 
> Why depend on CONFIG_ENERGY_MODEL? Laptops are battery powered but might not
> have this option enabled. Do we really need the energy model here?

+1

> > +static inline bool iowait_boosted(struct task_struct *p)
> > +{
> > +	return p->in_iowait && uclamp_eff_value(p, UCLAMP_MIN) > 0;
> 
> I think this is overloading the usage of util clamp. You're basically using
> cpu.uclamp.min to temporarily switch iowait boost on/off.
> 
> Isn't it better to add a new cgroup attribute to toggle this feature?
> 
> The problem does seem generic enough and could benefit other battery-powered
> devices outside of the Android world. I don't think the dependency on uclamp &&
> energy model is necessary to solve this.

I think using uclamp is not a bad idea here, but perhaps we could do
things differently. As of today the iowait boost escapes the clamping
mechanism, so one option would be to change that. That would let us set
a low max clamp on the 'background' cgroup, which in turn would limit
the frequency requests for those tasks even if they're IO-intensive.

That will have to be done at the rq level, but figuring out the current
max clamp on the rq should be doable from sugov, I think.
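To make the idea concrete, here is a minimal userspace sketch (not actual
kernel code) of what sugov could do: cap the iowait-boost utilization by the
rq-wide effective UCLAMP_MAX, so tasks in a low-max-clamped cgroup cannot
drive the frequency up via IO-intensive behavior. The helper name
rq_uclamp_max() and the plain-integer interface are assumptions for
illustration only.

```c
#include <assert.h>

#define SCHED_CAPACITY_SCALE 1024u

/*
 * Stand-in for reading the rq-level effective max clamp; in the kernel
 * this would come from the rq's aggregated uclamp state. Hypothetical.
 */
static unsigned int rq_uclamp_max(unsigned int rq_max)
{
	return rq_max;
}

/*
 * Clamp the iowait-boost utilization by the rq's max clamp, so a
 * low UCLAMP_MAX (e.g. on the 'background' cgroup) bounds the boost.
 */
static unsigned int clamped_iowait_boost(unsigned int boost,
					 unsigned int rq_max)
{
	unsigned int max = rq_uclamp_max(rq_max);

	return boost < max ? boost : max;
}
```

With this, a full boost of SCHED_CAPACITY_SCALE on an rq whose max clamp is
256 would only request utilization 256, while boosts already below the clamp
pass through unchanged.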

Wei: would that work for your use case?

Also, the iowait boost really should be per-task and not per-cpu, so it
can be taken into account during wake-up balance on big.LITTLE. That
might also help on SMP if a task doing a lot of IO migrates, as the
boost would migrate with it. But that's perhaps for later ...
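A rough sketch of the per-task variant, again as standalone userspace code
rather than kernel code: keep the boost in the task struct (here a toy
struct task_boost) so it follows the task on migration, doubling on
consecutive iowait wakeups and decaying otherwise, mirroring the per-CPU
ramp-up/decay behavior sugov uses today. All names and the exact doubling
policy are illustrative assumptions.

```c
#include <assert.h>
#include <stdbool.h>

#define SCHED_CAPACITY_SCALE 1024u
#define IOWAIT_BOOST_MIN     (SCHED_CAPACITY_SCALE / 8)

/* Toy stand-in for per-task state; in the kernel this would live in
 * (or hang off) struct task_struct, so it migrates with the task. */
struct task_boost {
	unsigned int iowait_boost;
};

/*
 * Double the boost on an iowait wakeup (starting from a minimum),
 * halve it otherwise, dropping to zero below the minimum.
 */
static void task_iowait_boost_update(struct task_boost *tb, bool in_iowait)
{
	if (in_iowait) {
		unsigned int b = tb->iowait_boost ?
			tb->iowait_boost * 2 : IOWAIT_BOOST_MIN;

		tb->iowait_boost = b > SCHED_CAPACITY_SCALE ?
			SCHED_CAPACITY_SCALE : b;
	} else {
		tb->iowait_boost /= 2;
		if (tb->iowait_boost < IOWAIT_BOOST_MIN)
			tb->iowait_boost = 0;
	}
}
```

Because the state travels with the task, wake-up placement on big.LITTLE
could factor the boost into CPU selection, and an IO-heavy task that
migrates would not restart its ramp-up from zero on the new CPU.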

Thanks,
Quentin
