Message-ID: <ZtBvJk4MMCcF2SI8@gpd3>
Date: Thu, 29 Aug 2024 14:52:54 +0200
From: Andrea Righi <andrea.righi@...ux.dev>
To: "Gautham R. Shenoy" <gautham.shenoy@....com>
Cc: Mario Limonciello <mario.limonciello@....com>,
	Mario Limonciello <superm1@...nel.org>,
	Borislav Petkov <bp@...en8.de>, Perry Yuan <perry.yuan@....com>,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
	"Rafael J . Wysocki" <rafael@...nel.org>,
	"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <linux-kernel@...r.kernel.org>,
	"open list:ACPI" <linux-acpi@...r.kernel.org>,
	"open list:CPU FREQUENCY SCALING FRAMEWORK" <linux-pm@...r.kernel.org>,
	changwoo@...lia.com, David Vernet <void@...ifault.com>,
	Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH 8/8] cpufreq: amd-pstate: Drop some uses of
 cpudata->hw_prefcore

On Wed, Aug 28, 2024 at 08:27:44PM +0530, Gautham R. Shenoy wrote:
> Hello Andrea,
> 
> On Wed, Aug 28, 2024 at 08:20:50AM +0200, Andrea Righi wrote:
> > On Wed, Aug 28, 2024 at 10:38:45AM +0530, Gautham R. Shenoy wrote:
> > ...
> > > > I had thought it was a malfunction that this reflected the
> > > > current status rather than the hardware /capability/.
> > > > 
> > > > Which one makes more sense for userspace?  In my mind the most likely
> > > > consumer of this information would be something a sched_ext based userspace
> > > > scheduler.  They would need to know whether the scheduler was using
> > > > preferred cores; not whether the hardware supported it.
> > > 
> > > The commandline parameter currently impacts only the fair sched-class
> > > tasks since the preference information gets used only during
> > > load-balancing.
> > > 
> > > IMO, the same should continue with sched-ext, i.e. if the user has
> > > explicitly disabled prefcore support via the command line, then no
> > > sched-ext scheduler should use the preference information to make
> > > task placement decisions. However, I would like to hear what the
> > > sched-ext folks have to say. Adding some of them to the Cc list.
> > 
> > IMHO it makes more sense to reflect the real state of prefcore support
> > from a "system" perspective, more than a "hardware" perspective, so if
> > it's disabled via boot command line it should show disabled.
> > 
> > From a user-space scheduler perspective we should be fine either way, as
> > long as the ABI is clearly documented, since we also have access to
> > /proc/cmdline and we would be able to figure out if the user has
> > disabled it via cmdline (however, the preference is still to report the
> > actual system status).
> 
> Thank you for confirming this.
> 
> > 
> > Question: does having prefcore enabled also affect the value of
> > scaling_max_freq? For example, would `lscpu -e` show a higher max
> > frequency for the specific preferred cores? (This is another useful
> > piece of information from a sched_ext scheduler perspective.)
> 
> Since scaling_max_freq is computed from the boost numerator, and (at
> least with this patchset) the numerator is the same across all kinds
> of cores, the scaling_max_freq reported will be the same across all
> the cores.

I see, so IIUC from user-space the most reliable way to detect the
fastest cores is to check amd_pstate_highest_perf / amd_pstate_max_freq,
right? I'm trying to figure out a way to abstract and generalize the
concept of "fast cores" in sched_ext.
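For what it's worth, a minimal sketch of what I have in mind, assuming the
per-CPU frequencies are read from cpuinfo_max_freq (or amd_pstate_max_freq)
in sysfs; the ranking helper is kept separate from the sysfs read so it can
be reasoned about on its own (this is just an illustration, not proposed
code):

```python
# Hypothetical sketch: rank CPUs by their reported maximum frequency so a
# userspace sched_ext scheduler can prefer the fastest cores.
import glob
import re

def read_max_freqs(pattern="/sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq"):
    """Read per-CPU max frequencies (kHz) from sysfs into {cpu_id: freq}."""
    freqs = {}
    for path in glob.glob(pattern):
        cpu = int(re.search(r"cpu(\d+)", path).group(1))
        with open(path) as f:
            freqs[cpu] = int(f.read())
    return freqs

def fastest_cores(freqs, top=8):
    """Return up to `top` CPU ids, fastest first (ties broken by cpu id)."""
    return sorted(freqs, key=lambda cpu: (-freqs[cpu], cpu))[:top]
```

With the numbers from the Threadripper below, for instance, this would put
cpu25 and cpu27 (7775000 kHz) at the front of the list.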

Also, is this something that has changed recently? I see this on an
AMD Ryzen Threadripper PRO 7975WX 32-Cores running a 6.8 kernel:

$ uname -r
6.8.0-40-generic

$ grep . /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu2/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu3/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu4/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu5/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu6/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu7/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu8/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu9/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu10/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu11/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu12/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu13/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu14/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu15/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu16/cpufreq/cpuinfo_max_freq:6161000
/sys/devices/system/cpu/cpu17/cpufreq/cpuinfo_max_freq:6321000
/sys/devices/system/cpu/cpu18/cpufreq/cpuinfo_max_freq:6001000
/sys/devices/system/cpu/cpu19/cpufreq/cpuinfo_max_freq:6646000
/sys/devices/system/cpu/cpu20/cpufreq/cpuinfo_max_freq:5837000
/sys/devices/system/cpu/cpu21/cpufreq/cpuinfo_max_freq:6482000
/sys/devices/system/cpu/cpu22/cpufreq/cpuinfo_max_freq:5517000
/sys/devices/system/cpu/cpu23/cpufreq/cpuinfo_max_freq:5677000
/sys/devices/system/cpu/cpu24/cpufreq/cpuinfo_max_freq:6966000
/sys/devices/system/cpu/cpu25/cpufreq/cpuinfo_max_freq:7775000
/sys/devices/system/cpu/cpu26/cpufreq/cpuinfo_max_freq:6806000
/sys/devices/system/cpu/cpu27/cpufreq/cpuinfo_max_freq:7775000
/sys/devices/system/cpu/cpu28/cpufreq/cpuinfo_max_freq:7130000
/sys/devices/system/cpu/cpu29/cpufreq/cpuinfo_max_freq:7451000
/sys/devices/system/cpu/cpu30/cpufreq/cpuinfo_max_freq:7290000
/sys/devices/system/cpu/cpu31/cpufreq/cpuinfo_max_freq:7611000
/sys/devices/system/cpu/cpu32/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu33/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu34/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu35/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu36/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu37/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu38/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu39/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu40/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu41/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu42/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu43/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu44/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu45/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu46/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu47/cpufreq/cpuinfo_max_freq:5352000
/sys/devices/system/cpu/cpu48/cpufreq/cpuinfo_max_freq:6161000
/sys/devices/system/cpu/cpu49/cpufreq/cpuinfo_max_freq:6321000
/sys/devices/system/cpu/cpu50/cpufreq/cpuinfo_max_freq:6001000
/sys/devices/system/cpu/cpu51/cpufreq/cpuinfo_max_freq:6646000
/sys/devices/system/cpu/cpu52/cpufreq/cpuinfo_max_freq:5837000
/sys/devices/system/cpu/cpu53/cpufreq/cpuinfo_max_freq:6482000
/sys/devices/system/cpu/cpu54/cpufreq/cpuinfo_max_freq:5517000
/sys/devices/system/cpu/cpu55/cpufreq/cpuinfo_max_freq:5677000
/sys/devices/system/cpu/cpu56/cpufreq/cpuinfo_max_freq:6966000
/sys/devices/system/cpu/cpu57/cpufreq/cpuinfo_max_freq:7775000
/sys/devices/system/cpu/cpu58/cpufreq/cpuinfo_max_freq:6806000
/sys/devices/system/cpu/cpu59/cpufreq/cpuinfo_max_freq:7775000
/sys/devices/system/cpu/cpu60/cpufreq/cpuinfo_max_freq:7130000
/sys/devices/system/cpu/cpu61/cpufreq/cpuinfo_max_freq:7451000
/sys/devices/system/cpu/cpu62/cpufreq/cpuinfo_max_freq:7290000
/sys/devices/system/cpu/cpu63/cpufreq/cpuinfo_max_freq:7611000
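Output like the above can be reduced to the set of candidate "preferred"
cores with a small filter. Note the baseline here is just the minimum
reported cpuinfo_max_freq, which is my assumption, not something amd-pstate
defines:

```python
def preferred_cpus(freqs):
    """Given {cpu_id: max_freq_khz}, return the sorted cpu ids whose
    reported max frequency exceeds the lowest one, i.e. the candidates
    for 'preferred' cores under the baseline-is-minimum assumption."""
    if not freqs:
        return []
    base = min(freqs.values())
    return sorted(cpu for cpu, f in freqs.items() if f > base)
```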

Thanks,
-Andrea
