Message-ID: <CAJuCfpEQ=RUgcAvRzE5jRrhhFpkm8E2PpBK9e9GhK26ZaJQt=Q@mail.gmail.com>
Date: Tue, 16 Sep 2025 10:09:18 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: paulmck@...nel.org, Jan Engelhardt <ej@...i.de>,
Sudarsan Mahendran <sudarsanm@...gle.com>, Liam.Howlett@...cle.com, cl@...two.org,
harry.yoo@...cle.com, howlett@...il.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, maple-tree@...ts.infradead.org, rcu@...r.kernel.org,
rientjes@...gle.com, roman.gushchin@...ux.dev, urezki@...il.com
Subject: Re: Benchmarking [PATCH v5 00/14] SLUB percpu sheaves
On Mon, Sep 15, 2025 at 8:22 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 9/15/25 14:13, Paul E. McKenney wrote:
> > On Mon, Sep 15, 2025 at 09:51:25AM +0200, Jan Engelhardt wrote:
> >>
> >> On Saturday 2025-09-13 02:09, Sudarsan Mahendran wrote:
> >> >
> >> >Summary of the results:
>
> In any case, thanks a lot for the results!
>
> >> >- Significant change (meaning >10% difference
> >> > between base and experiment) on will-it-scale
> >> > tests in AMD.
> >> >
> >> >Summary of AMD will-it-scale test changes:
> >> >
> >> >Number of runs : 15
> >> >Direction : + is good
> >>
> >> If STDDEV grows more than mean, there is more jitter,
> >> which is not "good".
> >
> > This is true. On the other hand, the mean grew way more in absolute
> > terms than did STDDEV. So might this be a reasonable tradeoff?
>
> Also I'd point out that MIN of TEST is better than MAX of BASE, which means
> there's always an improvement for this config. So jitter here means it's
> changing between better and more better :) and not between worse and (more)
> better.
>
> The annoying part of course is that for other configs it's consistently the
> opposite.
Hi Vlastimil,
I ran my mmap stress test, which does 20000 cycles of mmapping 50
VMAs, faulting them in, then unmapping them, timing only the mmap and
munmap calls. This is not a realistic scenario but it works well for
A/B comparison.
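For reference, the test loop looks roughly like this (the VMA size,
mmap flags and minimal error handling are simplifications I'm filling
in; only the shape of the test, 20000 cycles of 50 VMAs with the
faults excluded from the timings, matches what I actually ran):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define CYCLES  20000
#define NR_VMAS 50
#define VMA_SZ  (16 * 4096)     /* assumed VMA size */

static double elapsed(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

int main(void)
{
        void *vmas[NR_VMAS];
        struct timespec t0, t1;
        double mmap_s = 0.0, munmap_s = 0.0;

        for (int c = 0; c < CYCLES; c++) {
                /* time only the mmap() calls */
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int i = 0; i < NR_VMAS; i++) {
                        vmas[i] = mmap(NULL, VMA_SZ, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                        if (vmas[i] == MAP_FAILED)
                                exit(1);
                }
                clock_gettime(CLOCK_MONOTONIC, &t1);
                mmap_s += elapsed(&t0, &t1);

                /* fault the VMAs in; not included in the timings */
                for (int i = 0; i < NR_VMAS; i++)
                        memset(vmas[i], 1, VMA_SZ);

                /* time only the munmap() calls */
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int i = 0; i < NR_VMAS; i++)
                        munmap(vmas[i], VMA_SZ);
                clock_gettime(CLOCK_MONOTONIC, &t1);
                munmap_s += elapsed(&t0, &t1);
        }

        printf("mmap %f munmap %f total %f\n",
               mmap_s, munmap_s, mmap_s + munmap_s);
        return 0;
}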
The numbers are below with sheaves showing a clear improvement:
Baseline
             avg            stdev
mmap         2.621073       0.2525161631
munmap       2.292965       0.008831973052
total        4.914038       0.2572620923
Sheaves
             avg            stdev           avg_diff    stdev_diff
mmap         1.561220667    0.07748897037   -40.44%     -69.31%
munmap       2.042071       0.03603083448   -10.94%     307.96%
total        3.603291667    0.113209047     -26.67%     -55.99%
Stdev for munmap went up considerably, but only one run was very
different from the others, so that might have been just a noisy run.
One thing I noticed is that with my stress test driving mmap/munmap
in a tight loop, we accumulate lots of in-flight freed-by-RCU sheaves
before the grace period arrives, at which point they get freed in
bulk. Note that Android enables the lazy RCU config, which makes the
grace period longer than normal. When sheaves are freed in bulk like
this, the barn quickly fills up (it has only MAX_FULL_SHEAVES (10)
free slots) and the rest of the sheaves being freed are destroyed
instead of being reused.
I tried two modifications (a rough sketch of both follows the list):

1. Use call_rcu_hurry() instead of call_rcu() when freeing the
sheaves. This should remove the effects of lazy RCU.

2. Keep a running count of in-flight RCU-freed sheaves and, once it
reaches the number of free slots for full sheaves in the barn,
schedule an rcu_barrier() to free all of these in-flight sheaves. I
added an additional condition to skip this flush when the number of
free slots for full sheaves is less than MAX_FULL_SHEAVES/2, so that
we don't run an rcu_barrier() just to free a small number of sheaves.
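In code, the two changes look roughly like this (struct slab_sheaf is
trimmed, and barn_free_slots()/free_sheaf_cb() are stand-ins for the
corresponding code in the series; call_rcu_hurry(), rcu_barrier() and
the workqueue helpers are the real kernel APIs):

#include <linux/rcupdate.h>
#include <linux/workqueue.h>
#include <linux/atomic.h>

#define MAX_FULL_SHEAVES 10     /* barn capacity for full sheaves */

struct slab_sheaf {             /* trimmed to what the sketch needs */
        struct rcu_head rcu_head;
        /* ... */
};

static atomic_t in_flight_rcu_sheaves = ATOMIC_INIT(0);

/* stand-in: free slots for full sheaves in this sheaf's barn */
unsigned int barn_free_slots(struct slab_sheaf *sheaf);

/* rcu_barrier() sleeps, so run the flush from a workqueue */
static void flush_rcu_sheaves(struct work_struct *work)
{
        rcu_barrier();  /* waits for all pending sheaf callbacks */
}
static DECLARE_WORK(flush_rcu_sheaves_work, flush_rcu_sheaves);

static void free_sheaf_cb(struct rcu_head *head)
{
        atomic_dec(&in_flight_rcu_sheaves);
        /* ... return the sheaf to the barn, or destroy it if full ... */
}

static void sheaf_free_by_rcu(struct slab_sheaf *sheaf)
{
        unsigned int free_slots = barn_free_slots(sheaf);

        atomic_inc(&in_flight_rcu_sheaves);

        /* 1: hurry the callback so lazy RCU does not delay it */
        call_rcu_hurry(&sheaf->rcu_head, free_sheaf_cb);

        /*
         * 2: flush once enough sheaves are in flight to fill the
         * barn's free slots, but skip the flush when fewer than
         * MAX_FULL_SHEAVES/2 slots are free, so we don't pay for an
         * rcu_barrier() that can recycle only a few sheaves.
         */
        if (free_slots >= MAX_FULL_SHEAVES / 2 &&
            atomic_read(&in_flight_rcu_sheaves) >= free_slots)
                schedule_work(&flush_rcu_sheaves_work);
}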
With these modifications the numbers get even better:
Sheaves with call_rcu_hurry
             avg            avg_diff (vs Baseline)
mmap         1.279308       -51.19%
munmap       1.983921       -13.48%
total        3.263228       -33.59%
Sheaves with rcu_barrier
             avg            avg_diff (vs Baseline)
mmap         1.210455       -53.82%
munmap       1.963739       -14.36%
total        3.174194       -35.41%
I didn't capture stdev for these because I ran them fewer times than
the first two configurations.
Again, the tight loop in my test is not representative of real
workloads, and the numbers are definitely affected by Android's use
of lazy RCU mode. While this information can be used for later
optimizations, I don't think these findings should block the current
deployment of sheaves.
Thanks,
Suren.
>
> > Of course, if adjustments can be made to keep the increase in mean while
> > keeping STDDEV low, that would of course be even better.
> >
> > Thanx, Paul
> >
> >> >| | MIN | MAX | MEAN | MEDIAN | STDDEV |
> >> >|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|
> >> >| brk1_8_processes
> >> >| BASE | 7,667,220 | 7,705,767 | 7,682,782 | 7,676,211 | 12,733 |
> >> >| TEST | 9,477,395 | 10,053,058 | 9,878,753 | 9,959,360 | 182,014 |
> >> >| % | +23.61% | +30.46% | +28.58% | +29.74% | +1,329.46% |
> >> >
> >> >| mmap2_256_processes
> >> >| BASE | 7,483,929 | 7,532,461 | 7,491,876 | 7,489,398 | 11,134 |
> >> >| TEST | 11,580,023 | 16,508,551 | 15,337,145 | 15,943,608 | 1,489,489 |
> >> >| % | +54.73% | +119.17% | +104.72% | +112.88% | +13,276.75%|
> >>
>