Message-ID: <20210224140750.00004e38.zbestahu@gmail.com>
Date:   Wed, 24 Feb 2021 14:07:50 +0800
From:   Yue Hu <zbestahu@...il.com>
To:     Viresh Kumar <viresh.kumar@...aro.org>
Cc:     rjw@...ysocki.net, mingo@...hat.com, peterz@...radead.org,
        juri.lelli@...hat.com, vincent.guittot@...aro.org,
        linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
        huyue2@...ong.com, zbestahu@....com
Subject: Re: [PATCH] cpufreq: schedutil: Call sugov_update_next_freq()
 before check to fast_switch_enabled

On Wed, 24 Feb 2021 11:32:36 +0530
Viresh Kumar <viresh.kumar@...aro.org> wrote:

> On 24-02-21, 13:42, Yue Hu wrote:
> > From: Yue Hu <huyue2@...ong.com>
> > 
> > Note that sugov_update_next_freq() may return false, in which case its
> > caller sugov_fast_switch() will have done nothing except the fast
> > switch check.
> > 
> > Similarly, in that case the sugov_deferred_update() path in
> > sugov_update_single_freq() performs unnecessary raw_spin_{lock,unlock}
> > operations.
> > 
> > So, call sugov_update_next_freq() before the fast switch check to
> > avoid the unnecessary behavior above, and update the related function
> > signatures accordingly.
> > 
> > Signed-off-by: Yue Hu <huyue2@...ong.com>
> > ---
> >  kernel/sched/cpufreq_schedutil.c | 28 ++++++++++++++--------------
> >  1 file changed, 14 insertions(+), 14 deletions(-)
> > 
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 41e498b..d23e5be 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -114,19 +114,13 @@ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
> >  	return true;
> >  }
> >  
> > -static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
> > -			      unsigned int next_freq)
> > +static void sugov_fast_switch(struct sugov_policy *sg_policy, unsigned int next_freq)
> >  {
> > -	if (sugov_update_next_freq(sg_policy, time, next_freq))
> > -		cpufreq_driver_fast_switch(sg_policy->policy, next_freq);
> > +	cpufreq_driver_fast_switch(sg_policy->policy, next_freq);  
> 
> I would call this directly instead; there is no need for the wrapper anymore.

I will fix it in the next version.

Thank you.
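
For reference, a rough sketch of how the v2 call sites could look with the
wrapper dropped and cpufreq_driver_fast_switch() called directly (untested,
and assuming the rest of this patch stays as posted):

	/* In sugov_update_single_freq(): */
	if (!sugov_update_next_freq(sg_policy, time, next_f))
		return;

	if (sg_policy->policy->fast_switch_enabled) {
		/* No wrapper anymore; call the cpufreq driver directly. */
		cpufreq_driver_fast_switch(sg_policy->policy, next_f);
	} else {
		raw_spin_lock(&sg_policy->update_lock);
		sugov_deferred_update(sg_policy);
		raw_spin_unlock(&sg_policy->update_lock);
	}

	/* In sugov_update_shared(): */
	if (sg_policy->policy->fast_switch_enabled)
		cpufreq_driver_fast_switch(sg_policy->policy, next_f);
	else
		sugov_deferred_update(sg_policy);

sugov_fast_switch() itself would then be removed.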

> 
> >  }
> >  
> > -static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
> > -				  unsigned int next_freq)
> > +static void sugov_deferred_update(struct sugov_policy *sg_policy)
> >  {
> > -	if (!sugov_update_next_freq(sg_policy, time, next_freq))
> > -		return;
> > -
> >  	if (!sg_policy->work_in_progress) {
> >  		sg_policy->work_in_progress = true;
> >  		irq_work_queue(&sg_policy->irq_work);
> > @@ -368,16 +362,19 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
> >  		sg_policy->cached_raw_freq = cached_freq;
> >  	}
> >  
> > +	if (!sugov_update_next_freq(sg_policy, time, next_f))
> > +		return;
> > +
> >  	/*
> >  	 * This code runs under rq->lock for the target CPU, so it won't run
> >  	 * concurrently on two different CPUs for the same target and it is not
> >  	 * necessary to acquire the lock in the fast switch case.
> >  	 */
> >  	if (sg_policy->policy->fast_switch_enabled) {
> > -		sugov_fast_switch(sg_policy, time, next_f);
> > +		sugov_fast_switch(sg_policy, next_f);
> >  	} else {
> >  		raw_spin_lock(&sg_policy->update_lock);
> > -		sugov_deferred_update(sg_policy, time, next_f);
> > +		sugov_deferred_update(sg_policy);
> >  		raw_spin_unlock(&sg_policy->update_lock);
> >  	}
> >  }
> > @@ -456,12 +453,15 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
> >  	if (sugov_should_update_freq(sg_policy, time)) {
> >  		next_f = sugov_next_freq_shared(sg_cpu, time);
> >  
> > +		if (!sugov_update_next_freq(sg_policy, time, next_f))
> > +			goto unlock;
> > +
> >  		if (sg_policy->policy->fast_switch_enabled)
> > -			sugov_fast_switch(sg_policy, time, next_f);
> > +			sugov_fast_switch(sg_policy, next_f);
> >  		else
> > -			sugov_deferred_update(sg_policy, time, next_f);
> > +			sugov_deferred_update(sg_policy);
> >  	}
> > -
> > +unlock:
> >  	raw_spin_unlock(&sg_policy->update_lock);
> >  }  
> 
