Message-ID: <20250106170455.GB22191@noisy.programming.kicks-ass.net>
Date: Mon, 6 Jan 2025 18:04:55 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Doug Smythies <dsmythies@...us.net>
Cc: linux-kernel@...r.kernel.org, vincent.guittot@...aro.org
Subject: Re: [REGRESSION] Re: [PATCH 00/24] Complete EEVDF
On Mon, Jan 06, 2025 at 05:59:32PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 06, 2025 at 07:01:34AM -0800, Doug Smythies wrote:
>
> > > What is the easiest 100% load you're seeing this with?
> >
> > Lately, and specifically to be able to tell others, I have been using:
> >
> > yes > /dev/null &
> >
> > On my Intel i5-10600K, with 6 cores and 2 threads per core, 12 CPUs,
> > I run 12 of those work loads.
>
> On my headless ivb-ep 2 sockets, 10 cores each and 2 threads per core, I
> do:
>
> for ((i=0; i<40; i++)) ; do yes > /dev/null & done
> tools/power/x86/turbostat/turbostat --quiet --Summary --show Busy%,Bzy_MHz,IRQ,PkgWatt,PkgTmp,TSC_MHz --interval 1
>
> But so far, nada :-( I've tried with both full and voluntary preemption,
> HZ=1000.
>
And just as I sent this, I see these happen:
Busy%  Bzy_MHz TSC_MHz IRQ   PkgTmp PkgWatt
100.00 3100 2793 40302 71 195.22
100.00 3100 2618 40459 72 183.58
100.00 3100 2993 46215 71 209.21
100.00 3100 2789 40467 71 195.19
99.92 3100 2798 40589 71 195.76
100.00 3100 2793 40397 72 195.46
...
100.00 3100 2844 41906 71 199.43
100.00 3100 2779 40468 71 194.51
99.96 3100 2320 40933 71 163.23
100.00 3100 3529 61823 72 245.70
100.00 3100 2793 40493 72 195.45
100.00 3100 2793 40462 72 195.56
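Those intervals can be picked out mechanically. A sketch, with two assumptions not taken from turbostat itself: that the IRQ count lands in field 4 of the summary line, and that a 10% deviation from the running average is a reasonable blip threshold. The here-document replays the sample above; pipe live turbostat output into the same awk program instead to catch blips as they happen.

```shell
# Flag intervals whose IRQ count (field 4 -- assumed column layout)
# strays more than 10% (assumed threshold) from the running average.
# The average includes the current sample; the first few intervals are
# skipped so the average has something to settle on.
blips=$(awk '{ n++; sum += $4; avg = sum / n
               if (n > 3 && ($4 > 1.1 * avg || $4 < 0.9 * avg))
                   print "blip:", $0 }' <<'EOF'
100.00 3100 2793 40302 71 195.22
100.00 3100 2618 40459 72 183.58
100.00 3100 2993 46215 71 209.21
100.00 3100 2789 40467 71 195.19
99.92 3100 2798 40589 71 195.76
100.00 3100 2793 40397 72 195.46
100.00 3100 2844 41906 71 199.43
100.00 3100 2779 40468 71 194.51
99.96 3100 2320 40933 71 163.23
100.00 3100 3529 61823 72 245.70
100.00 3100 2793 40493 72 195.45
100.00 3100 2793 40462 72 195.56
EOF
)
echo "$blips"
```

On the sample data only the 61823-IRQ interval is flagged, which matches the one obvious outlier by eye.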
They look like funny little blips. Nowhere near as bad as what you had,
though.