Message-ID: <ZjHJ+7GiXFMH6oc2@visitorckw-System-Product-Name>
Date: Wed, 1 May 2024 12:50:03 +0800
From: Kuan-Wei Chiu <visitorckw@...il.com>
To: Yury Norov <yury.norov@...il.com>
Cc: akpm@...ux-foundation.org, linux@...musvillemoes.dk,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] lib/find_bit_benchmark: Add benchmark test for fns()

On Tue, Apr 30, 2024 at 10:24:03AM -0700, Yury Norov wrote:
> On Tue, Apr 30, 2024 at 01:49:11PM +0800, Kuan-Wei Chiu wrote:
> > Introduce a benchmark test for the fns(). It measures the total time
> > taken by fns() to process 1,000,000 test data generated using
> > get_random_long() for each n in the range [0, BITS_PER_LONG].
>
> Can you also print an example of test output?
>
> > Signed-off-by: Kuan-Wei Chiu <visitorckw@...il.com>
> > ---
> > lib/find_bit_benchmark.c | 25 +++++++++++++++++++++++++
> > 1 file changed, 25 insertions(+)
> >
> > diff --git a/lib/find_bit_benchmark.c b/lib/find_bit_benchmark.c
> > index d3fb09e6eff1..8712eacf3bbd 100644
> > --- a/lib/find_bit_benchmark.c
> > +++ b/lib/find_bit_benchmark.c
> > @@ -146,6 +146,28 @@ static int __init test_find_next_and_bit(const void *bitmap,
> >  	return 0;
> > }
> >
> > +static int __init test_fns(void)
> > +{
> > +	const unsigned long round = 1000000;
> > +	s64 time[BITS_PER_LONG + 1];
> > +	unsigned int i, n;
> > +	volatile unsigned long x, y;
> > +
> > +	for (n = 0; n <= BITS_PER_LONG; n++) {
>
> n == BITS_PER_LONG is an error. Testing the error case together with
> the normal cases is an even worse error, because it fools readers.
>
My initial intention was to also cover the case where fns() always
returns BITS_PER_LONG (i.e. n == BITS_PER_LONG). However, I agree that
mixing that case with the normal ones is not a good idea and may
confuse readers.
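
If that behavior is worth checking at all, it would fit better as a
separate correctness check outside the timed loops, roughly like this
(just a sketch for illustration, not code from this series):

	/*
	 * A word has at most BITS_PER_LONG set bits, so asking for bit
	 * index n == BITS_PER_LONG can never succeed and fns() must
	 * return BITS_PER_LONG for any input.
	 */
	word = get_random_long();
	if (fns(word, BITS_PER_LONG) != BITS_PER_LONG)
		pr_err("fns: unexpected result for n == BITS_PER_LONG\n");
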
> > +		time[n] = ktime_get();
> > +		for (i = 0; i < round; i++) {
> > +			x = get_random_long();
> > +			y = fns(x, n);
> > +		}
>
> Here you count fns() + get_random_long() time. For your microbench
> purposes it would be better to exclude the random number generation
> overhead.
>
> > +		time[n] = ktime_get() - time[n];
> > +	}
> > +
> > +	for (n = 0; n <= BITS_PER_LONG; n++)
> > +		pr_err("fns: n = %2u: %12lld ns\n", n, time[n]);
>
> Nah, not like that. Each test in there prints one line in the
> report. Let's keep it that way for test_fns() too. Unless we have
> strong evidence that fns() for a particular input is worth tracking
> separately, let's just print one grand total?
>
> > +
> > +	return 0;
> > +}
>
> I'd suggest modifying it like this:
>
> static unsigned long buf[1000000];
>
> static int __init test_fns(void)
> {
> 	get_random_bytes(buf, ARRAY_SIZE(buf));
Instead of ARRAY_SIZE(buf), it should be sizeof(buf): get_random_bytes()
takes a length in bytes, so ARRAY_SIZE(buf) would only fill the first
1/8 of the buffer on 64-bit.
> 	time = ktime_get();
>
> 	for (n = 0; n < BITS_PER_LONG; n++)
> 		for (i = 0; i < 1000000; i++)
> 			fns(buf[i], n);
>
> 	time = ktime_get() - time;
> 	pr_err(...);
> }
>
That does seem like a better approach. I'll move it to lib/test_bitops
and send a v3 patch series.
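
Something along these lines, as a rough first sketch (the buffer size,
names and output format below are just placeholders, not the final v3
code):

static unsigned long buf[10000] __initdata;

static int __init test_fns(void)
{
	/* volatile sink so the compiler cannot optimize the fns() calls away */
	volatile unsigned long sink;
	unsigned int i, n;
	s64 time;

	/* Fill the buffer once, outside the measured region. */
	get_random_bytes(buf, sizeof(buf));

	time = ktime_get();
	for (n = 0; n < BITS_PER_LONG; n++)
		for (i = 0; i < ARRAY_SIZE(buf); i++)
			sink = fns(buf[i], n);
	time = ktime_get() - time;

	pr_err("fns: %18lld ns\n", time);
	return 0;
}
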
Regards,
Kuan-Wei
> >  static int __init find_bit_test(void)
> >  {
> >  	unsigned long nbits = BITMAP_LEN / SPARSE;
> > @@ -186,6 +208,9 @@ static int __init find_bit_test(void)
> >  	test_find_first_and_bit(bitmap, bitmap2, BITMAP_LEN);
> >  	test_find_next_and_bit(bitmap, bitmap2, BITMAP_LEN);
> >
> > +	pr_err("\nStart testing for fns()\n");
> > +	test_fns();
>
> There are 2 sections in the test - one for regular, and another for
> sparse data. Adding a new section for just one function doesn't look
> like a good idea.
>
> What's more, fns() is already tested here. Maybe test_bitops is a
> better place for this test?
>
> > +
> >  	/*
> >  	 * Everything is OK. Return error just to let user run benchmark
> >  	 * again without annoying rmmod.
> > --
> > 2.34.1