Date:   Wed, 16 May 2018 10:55:13 -0700
From:   Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     tglx@...utronix.de, mingo@...hat.com, bp@...e.de, lenb@...nel.org,
        rjw@...ysocki.net, mgorman@...hsingularity.net, x86@...nel.org,
        linux-pm@...r.kernel.org, viresh.kumar@...aro.org,
        juri.lelli@....com, linux-kernel@...r.kernel.org
Subject: Re: [RFC/RFT] [PATCH 05/10] cpufreq: intel_pstate: HWP boost
 performance on IO Wake

On Wed, 2018-05-16 at 09:37 +0200, Peter Zijlstra wrote:
> On Tue, May 15, 2018 at 09:49:06PM -0700, Srinivas Pandruvada wrote:

[...]

> > 
> > @@ -258,6 +261,8 @@ struct cpudata {
> >  	s16 epp_saved;
> >  	u64 hwp_req_cached;
> >  	call_single_data_t csd;
> > +	bool hwp_boost_active;
> > +	u64 last_io_update;
> 
> This structure has abysmal layout; you should look at that.
> Also, mandatory complaint about using _Bool in composites.
> 
I will take care of this in the next version.
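For the new fields, something along these lines (illustrative sketch only;
exact placement within the struct and the final types are still to be
sorted out):

	u64			last_io_update;
	/* avoid _Bool in the composite; a plain flag type works here */
	unsigned int		hwp_boost_active;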

[...]

> > +/* Default: This will be roughly around P1 on SKX */
> > +#define BOOST_PSTATE_THRESHOLD	(SCHED_CAPACITY_SCALE / 2)
> 
> Yuck.. why the need to hardcode this? Can't you simply read the P1
> value for the part at hand?
This is done later in the series, so the patches need to be reordered.
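Roughly, the idea is to read the guaranteed performance level for the
part instead of hardcoding a threshold; a minimal sketch (helper name is
illustrative, not the exact code from the later patch):

/* Sketch: derive the boost threshold from the part's guaranteed perf. */
static inline int intel_pstate_get_hwp_guaranteed(struct cpudata *cpu)
{
	u64 cap;

	rdmsrl_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap);
	return HWP_GUARANTEED_PERF(cap);
}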

[...]
> 
> > +	if (cpu->iowait_boost) {
> > +		cpu->hwp_boost_active = true;
> > +		if (smp_processor_id() == cpu->cpu)
> > +			intel_pstate_hwp_boost_up(cpu);
> > +		else
> > +			smp_call_function_single_async(cpu->cpu,
> > &cpu->csd);
> > +	}
> >  }
> 
> Hurmph, this looks like you're starting to duplicate the schedutil
> iowait logic. Why didn't you copy the gradual boosting thing?
I tried what we implemented in intel_pstate in legacy mode, which gave
the best performance for servers (ramp up fast and ramp down slowly).
That caused regressions on some workloads, because each HWP_REQUEST
update costs 1200+ cycles to issue the MSR write. So I keep the boost
at max until the timeout. Even keeping it at P1 didn't help the power
numbers.
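In other words, the boost path is a single MSR write straight to the
boost level, which then stays in place until the timeout; roughly like
this (sketch only; the exact perf level used and the field handling are
illustrative):

static void intel_pstate_hwp_boost_up(struct cpudata *cpu)
{
	u64 hwp_req = READ_ONCE(cpu->hwp_req_cached);

	/*
	 * One expensive MSR write: raise HWP min perf straight to the
	 * boost level instead of ramping up gradually.
	 */
	hwp_req &= ~GENMASK_ULL(7, 0);
	hwp_req |= HWP_MIN_PERF(cpu->pstate.max_pstate);
	wrmsrl(MSR_HWP_REQUEST, hwp_req);
}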
