Message-ID: <3629598.1694784290@warthog.procyon.org.uk>
Date: Fri, 15 Sep 2023 14:24:50 +0100
From: David Howells <dhowells@...hat.com>
To: David Laight <David.Laight@...LAB.COM>
Cc: dhowells@...hat.com, Al Viro <viro@...iv.linux.org.uk>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Jens Axboe <axboe@...nel.dk>, Christoph Hellwig <hch@....de>,
	Christian Brauner <christian@...uner.io>,
	Matthew Wilcox <willy@...radead.org>,
	Brendan Higgins <brendanhiggins@...gle.com>,
	David Gow <davidgow@...gle.com>,
	linux-fsdevel@...r.kernel.org, linux-block@...r.kernel.org,
	linux-mm@...ck.org, netdev@...r.kernel.org,
	linux-kselftest@...r.kernel.org, kunit-dev@...glegroups.com,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Christian Brauner <brauner@...nel.org>,
	David Hildenbrand <david@...hat.com>,
	John Hubbard <jhubbard@...dia.com>
Subject: Re: [RFC PATCH 9/9] iov_iter: Add benchmarking kunit tests for UBUF/IOVEC

David Laight <David.Laight@...LAB.COM> wrote:

> You could also just not do the copy!
> Although you need (say) asm volatile("\n",:::"memory") to
> stop it all being completely optimised away.
> That might show up a difference in the 'out_of_line' test
> where 15% on top on the data copies is massive - it may be
> that the data cache behaviour is very different for the
> two cases.
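
As an aside, the asm in the suggestion above has a stray comma; a compilable
form of that barrier would be something like this (opt_barrier() being a
made-up name for illustration):

	static inline void opt_barrier(void)
	{
		/* Empty asm with a "memory" clobber: forces the compiler to
		 * assume memory has changed, so buffer accesses around it
		 * can't be elided.
		 */
		asm volatile("" ::: "memory");
	}

which is essentially what the kernel's barrier() macro expands to.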

I tried using the following as the load:

volatile unsigned long foo;

static __always_inline
size_t idle_user_iter(void __user *iter_from, size_t progress,
		      size_t len, void *to, void *priv2)
{
	nop();
	nop();
	foo += (unsigned long)iter_from;
	foo += (unsigned long)len;
	foo += (unsigned long)to + progress;
	nop();
	nop();
	return 0;
}

static __always_inline
size_t idle_kernel_iter(void *iter_from, size_t progress,
			size_t len, void *to, void *priv2)
{
	nop();
	nop();
	foo += (unsigned long)iter_from;
	foo += (unsigned long)len;
	foo += (unsigned long)to + progress;
	nop();
	nop();
	return 0;
}

size_t iov_iter_idle(struct iov_iter *iter, size_t len, void *priv)
{
	return iterate_and_advance(iter, len, priv,
				   idle_user_iter, idle_kernel_iter);
}
EXPORT_SYMBOL(iov_iter_idle);

It adds various values into a volatile variable to prevent the optimiser from
discarding the calculations.
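
A caller drives it along these lines (an illustrative sketch with made-up
buffer/size names, not the actual kunit test code):

	struct kvec kv = { .iov_base = buffer, .iov_len = size };
	struct iov_iter iter;

	/* Wrap the buffer in a single-segment kvec iterator and run the
	 * idle stepper over it instead of copying any data.
	 */
	iov_iter_kvec(&iter, ITER_SOURCE, &kv, 1, size);
	iov_iter_idle(&iter, size, NULL);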

I get:

iov_kunit_benchmark_bvec: avg 395 uS, stddev 46 uS
iov_kunit_benchmark_bvec: avg 397 uS, stddev 38 uS
iov_kunit_benchmark_bvec: avg 411 uS, stddev 57 uS
iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 5 uS
iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 6 uS
iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 7 uS
iov_kunit_benchmark_bvec_split: avg 3599 uS, stddev 737 uS
iov_kunit_benchmark_bvec_split: avg 3664 uS, stddev 838 uS
iov_kunit_benchmark_bvec_split: avg 3669 uS, stddev 875 uS
iov_kunit_benchmark_iovec: avg 472 uS, stddev 17 uS
iov_kunit_benchmark_iovec: avg 506 uS, stddev 59 uS
iov_kunit_benchmark_iovec: avg 525 uS, stddev 14 uS
iov_kunit_benchmark_kvec: avg 421 uS, stddev 73 uS
iov_kunit_benchmark_kvec: avg 428 uS, stddev 68 uS
iov_kunit_benchmark_kvec: avg 469 uS, stddev 75 uS
iov_kunit_benchmark_ubuf: avg 1052 uS, stddev 6 uS
iov_kunit_benchmark_ubuf: avg 1168 uS, stddev 8 uS
iov_kunit_benchmark_ubuf: avg 1168 uS, stddev 9 uS
iov_kunit_benchmark_xarray: avg 680 uS, stddev 11 uS
iov_kunit_benchmark_xarray: avg 682 uS, stddev 20 uS
iov_kunit_benchmark_xarray: avg 686 uS, stddev 46 uS
iov_kunit_benchmark_xarray_outofline: avg 1340 uS, stddev 34 uS
iov_kunit_benchmark_xarray_outofline: avg 1358 uS, stddev 12 uS
iov_kunit_benchmark_xarray_outofline: avg 1358 uS, stddev 15 uS

where I made the iovec and kvec tests split their buffers into PAGE_SIZE
segments and the ubuf test issue an iteration per PAGE_SIZE'd chunk.

Splitting the kvec into just 8 segments results in the iteration taking <1uS.
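
The splitting is done along these lines (an illustrative sketch only;
split_buffer_to_kvecs() is a made-up helper, not the test code itself):

	static unsigned long split_buffer_to_kvecs(struct kvec *kv, void *buf,
						   size_t len)
	{
		unsigned long n = 0;

		/* Carve the buffer into PAGE_SIZE'd segments, one kvec
		 * element per segment.
		 */
		while (len) {
			size_t part = min_t(size_t, len, PAGE_SIZE);

			kv[n].iov_base = buf;
			kv[n].iov_len = part;
			buf += part;
			len -= part;
			n++;
		}
		return n;
	}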

The bvec_split test does a kmalloc() per 256 pages inside the loop, which is
why that one takes quite a long time.

David