Message-ID: <6c9d110538d431ce3f14577815a94be491eaa719.camel@gmx.de>
Date: Sun, 18 Jul 2021 14:09:44 +0200
From: Mike Galbraith <efault@....de>
To: Vlastimil Babka <vbabka@...e.cz>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Mel Gorman <mgorman@...hsingularity.net>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Jann Horn <jannh@...gle.com>
Subject: Re: [RFC v2 00/34] SLUB: reduce irq disabled scope and make it RT compatible
On Sun, 2021-07-18 at 10:29 +0200, Mike Galbraith wrote:
> On Sun, 2021-07-18 at 09:41 +0200, Vlastimil Babka wrote:
> > On 7/3/21 9:24 AM, Mike Galbraith wrote:
> > > On Fri, 2021-07-02 at 20:29 +0200, Sebastian Andrzej Siewior
> > > wrote:
> > > > I replaced my slub changes with slub-local-lock-v2r3.
> > > > I haven't seen any complains from lockdep or so which is good.
> > > > Then I
> > > > did this with RT enabled (and no debug):
> > >
> > > Below is some raw hackbench data from my little i4790 desktop
> > > box. It
> > > says we'll definitely still want list_lock to be raw.
> >
> > Hi Mike, thanks a lot for the testing, sorry for late reply.
> >
> > Did you try, instead of raw list_lock, not applying the last, local
> > lock patch, as I suggested in reply to bigeasy?
>
> No, but I suppose I can give that a go.
The kernel has of course moved forward, so I measured them again, this
time starting before replacing the 5.12-rt patches with slub-local-lock-v2r3.
box is old i4790 desktop
perf stat -r10 hackbench -s4096 -l500
full warmup, record, repeat twice for elapsed
Config has only SLUB+SLUB_DEBUG, as originally measured.
5.14.0.g79e92006-tip-rt (5.12-rt, 5.13-rt didn't exist when first measured)
          7,984.52 msec task-clock                #    7.565 CPUs utilized            ( +-  0.66% )
           353,566      context-switches          #   44.281 K/sec                    ( +-  2.77% )
            37,685      cpu-migrations            #    4.720 K/sec                    ( +-  6.37% )
            12,939      page-faults               #    1.620 K/sec                    ( +-  0.67% )
    29,901,079,227      cycles                    #    3.745 GHz                      ( +-  0.71% )
    14,550,797,818      instructions              #    0.49  insn per cycle           ( +-  0.47% )
     3,056,685,643      branches                  #  382.826 M/sec                    ( +-  0.51% )
         9,598,083      branch-misses             #    0.31% of all branches          ( +-  2.11% )

           1.05542 +- 0.00409 seconds time elapsed  ( +-  0.39% )
           1.05990 +- 0.00244 seconds time elapsed  ( +-  0.23% ) (repeat)
           1.05367 +- 0.00303 seconds time elapsed  ( +-  0.29% ) (repeat)
5.14.0.g79e92006-tip-rt +slub-local-lock-v2r3 -0034-mm-slub-convert-kmem_cpu_slab-protection-to-local_lock.patch
          6,899.35 msec task-clock                #    5.637 CPUs utilized            ( +-  0.53% )
           420,304      context-switches          #   60.919 K/sec                    ( +-  2.83% )
           187,130      cpu-migrations            #   27.123 K/sec                    ( +-  1.81% )
            13,206      page-faults               #    1.914 K/sec                    ( +-  0.96% )
    25,110,362,933      cycles                    #    3.640 GHz                      ( +-  0.49% )
    15,853,643,635      instructions              #    0.63  insn per cycle           ( +-  0.64% )
     3,366,261,524      branches                  #  487.910 M/sec                    ( +-  0.70% )
        14,839,618      branch-misses             #    0.44% of all branches          ( +-  2.01% )

           1.22390 +- 0.00744 seconds time elapsed  ( +-  0.61% )
           1.21813 +- 0.00907 seconds time elapsed  ( +-  0.74% ) (repeat)
           1.22097 +- 0.00952 seconds time elapsed  ( +-  0.78% ) (repeat)
repeat of above with raw list_lock
          8,072.62 msec task-clock                #    7.605 CPUs utilized            ( +-  0.49% )
           359,514      context-switches          #   44.535 K/sec                    ( +-  4.95% )
            35,285      cpu-migrations            #    4.371 K/sec                    ( +-  5.82% )
            13,503      page-faults               #    1.673 K/sec                    ( +-  0.96% )
    30,247,989,681      cycles                    #    3.747 GHz                      ( +-  0.52% )
    14,580,011,391      instructions              #    0.48  insn per cycle           ( +-  0.81% )
     3,063,743,405      branches                  #  379.523 M/sec                    ( +-  0.85% )
         8,907,160      branch-misses             #    0.29% of all branches          ( +-  3.99% )

           1.06150 +- 0.00427 seconds time elapsed  ( +-  0.40% )
           1.05041 +- 0.00176 seconds time elapsed  ( +-  0.17% ) (repeat)
           1.06086 +- 0.00237 seconds time elapsed  ( +-  0.22% ) (repeat)
5.14.0.g79e92006-rt3-tip-rt +slub-local-lock-v2r3 full set
          7,598.44 msec task-clock                #    5.813 CPUs utilized            ( +-  0.85% )
           488,161      context-switches          #   64.245 K/sec                    ( +-  4.29% )
           196,866      cpu-migrations            #   25.909 K/sec                    ( +-  1.49% )
            13,042      page-faults               #    1.716 K/sec                    ( +-  0.73% )
    27,695,116,746      cycles                    #    3.645 GHz                      ( +-  0.79% )
    18,423,934,168      instructions              #    0.67  insn per cycle           ( +-  0.88% )
     3,969,540,695      branches                  #  522.415 M/sec                    ( +-  0.92% )
        15,493,482      branch-misses             #    0.39% of all branches          ( +-  2.15% )

           1.30709 +- 0.00890 seconds time elapsed  ( +-  0.68% )
           1.3205  +- 0.0134  seconds time elapsed  ( +-  1.02% ) (repeat)
           1.3083  +- 0.0132  seconds time elapsed  ( +-  1.01% ) (repeat)
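For the record, here's what the elapsed-time numbers above work out to
relative to the tip-rt baseline (a minimal back-of-the-envelope sketch;
the input lists are copied verbatim from the runs above, everything else
is just arithmetic):

```python
# Mean elapsed times from the perf stat runs quoted above.
base          = [1.05542, 1.05990, 1.05367]  # 5.14.0.g79e92006-tip-rt
no_local_lock = [1.22390, 1.21813, 1.22097]  # v2r3 minus the local_lock patch
raw_list_lock = [1.06150, 1.05041, 1.06086]  # repeat with raw list_lock
full_set      = [1.30709, 1.3205,  1.3083]   # v2r3 full set

def mean(xs):
    return sum(xs) / len(xs)

def slowdown_pct(cfg, ref):
    # Percent increase of cfg's mean elapsed time over ref's.
    return (mean(cfg) / mean(ref) - 1.0) * 100.0

for name, cfg in [("minus local_lock", no_local_lock),
                  ("raw list_lock",    raw_list_lock),
                  ("full set",         full_set)]:
    print(f"{name}: {slowdown_pct(cfg, base):+.1f}% vs baseline")
# Roughly +15.6%, +0.1%, and +24.2% respectively, which is why raw
# list_lock looks like the way to go.
```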