Date:   Thu, 1 Oct 2020 00:24:36 -0700
From:   John Hubbard <jhubbard@...dia.com>
To:     Kalesh Singh <kaleshsingh@...gle.com>
CC:     <surenb@...gle.com>, <minchan@...gle.com>, <joelaf@...gle.com>,
        <lokeshgidra@...gle.com>, <kernel-team@...roid.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
        "Andrew Morton" <akpm@...ux-foundation.org>,
        Shuah Khan <shuah@...nel.org>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
        Kees Cook <keescook@...omium.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Sami Tolvanen <samitolvanen@...gle.com>,
        Arnd Bergmann <arnd@...db.de>,
        Masahiro Yamada <masahiroy@...nel.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        "Krzysztof Kozlowski" <krzk@...nel.org>,
        Hassan Naveed <hnaveed@...ecomp.com>,
        Christian Brauner <christian.brauner@...ntu.com>,
        Mark Rutland <mark.rutland@....com>,
        Mike Rapoport <rppt@...nel.org>, Gavin Shan <gshan@...hat.com>,
        Dave Martin <Dave.Martin@....com>, Jia He <justin.he@....com>,
        Zhenyu Ye <yezhenyu2@...wei.com>,
        Jason Gunthorpe <jgg@...pe.ca>, Zi Yan <ziy@...dia.com>,
        Mina Almasry <almasrymina@...gle.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Ram Pai <linuxram@...ibm.com>,
        "Sandipan Das" <sandipan@...ux.ibm.com>,
        Dave Hansen <dave.hansen@...el.com>,
        "Ralph Campbell" <rcampbell@...dia.com>,
        Brian Geffon <bgeffon@...gle.com>,
        "Masami Hiramatsu" <mhiramat@...nel.org>,
        Ira Weiny <ira.weiny@...el.com>,
        "SeongJae Park" <sjpark@...zon.de>, <linux-kernel@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>, <linux-mm@...ck.org>,
        <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH 1/5] kselftests: vm: Add mremap tests

On 9/30/20 3:21 PM, Kalesh Singh wrote:
> Test mremap on regions of various sizes and alignments and validate
> data after remapping. Also provide total time for remapping
> the region which is useful for performance comparison of the mremap
> optimizations that move pages at the PMD/PUD levels if HAVE_MOVE_PMD
> and/or HAVE_MOVE_PUD are enabled.
> 
> Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
> ---
>   tools/testing/selftests/vm/.gitignore    |   1 +
>   tools/testing/selftests/vm/Makefile      |   1 +
>   tools/testing/selftests/vm/mremap_test.c | 243 +++++++++++++++++++++++
>   tools/testing/selftests/vm/run_vmtests   |  11 +
>   4 files changed, 256 insertions(+)
>   create mode 100644 tools/testing/selftests/vm/mremap_test.c
> 

Hi,

This takes 100x longer to run than it should: 1:46 min of running time on
my x86_64 test machine. The entire selftests/vm test suite takes 45 sec on a
bad day, where a bad day is defined as up until about tomorrow, when I will
post a compaction_test.c patch that will cut that time down to about half, or
24 sec total run time...for 22 tests!

In other words, most tests here should take about 1 or 2 seconds, unless they
are exceptionally special snowflakes.

At the very least, the invocation within run_vmtests could pass in a parameter
to tell it to run a shorter test. But there are also opportunities to speed
it up.
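For instance (flag name and helper are my invention, purely illustrative),
mremap_test could accept a flag that run_vmtests passes to select a quick
subset of the cases:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical helper: return true if "-s" (short/quick run) appears
 * anywhere on the command line. The flag name is just a suggestion.
 */
static bool want_short_run(int argc, char *argv[])
{
	int i;

	for (i = 1; i < argc; i++) {
		if (strcmp(argv[i], "-s") == 0)
			return true;
	}
	return false;
}
```

run_vmtests would then invoke `./mremap_test -s`, and main() could skip the
multi-GB PUD-level cases when the flag is set.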

...
> +
> +#define MAKE_TEST(source_align, destination_align, size,	\
> +		  overlaps, should_fail, test_name)		\
> +{								\
> +	.name = test_name,					\
> +	.config = {						\
> +		.src_alignment = source_align,			\
> +		.dest_alignment = destination_align,		\
> +		.region_size = size,				\
> +		.overlapping = overlaps,			\
> +	},							\
> +	.expect_failure = should_fail				\
> +}
> +

OK...

> +#define MAKE_SIMPLE_TEST(source_align, destination_align, size)	\
> +	MAKE_TEST(source_align, destination_align, size, 0, 0,	\
> +		  #size " mremap - Source " #source_align	\
> +		  " aligned, Destination " #destination_align	\
> +		  " aligned")
> +

...and not OK. :)  Because this is just obscuring things. Both the
code and the output are harder to read. For these tiny test programs,
clarity is what we want, not necessarily compactness on the screen.
People want to get in, understand what they see in the code, and
match it up with what is printed to stdout--without spending much
time. (And that includes run time, as hinted at above.)

...
> +
> +/* Returns the time taken for the remap on success else returns -1. */
> +static long long remap_region(struct config c)
> +{
> +	void *addr, *src_addr, *dest_addr;
> +	int i, j;
> +	struct timespec t_start = {0, 0}, t_end = {0, 0};
> +	long long  start_ns, end_ns, align_mask, ret, offset;
> +	char pattern[] = {0xa8, 0xcd, 0xfe};

I'd recommend using rand() to help choose the pattern, and using different
patterns for different runs. When testing memory, it's a pitfall to reuse
the same test pattern on every run.

Normally, you'd also want to report the random seed and/or the test
pattern(s) to stdout, and provide a way to re-run with the same pattern, but
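A minimal sketch of that idea (function and constant names are mine, not from
the patch): derive the pattern from a seed, so that printing the seed is
enough to make a run reproducible:

```c
#include <stdlib.h>

#define PATTERN_SIZE 3

/*
 * Fill the byte pattern from a given seed. A run can be reproduced by
 * re-running with the seed that was reported to stdout.
 */
static void make_pattern(unsigned int seed, char pattern[PATTERN_SIZE])
{
	int i;

	srand(seed);
	for (i = 0; i < PATTERN_SIZE; i++)
		pattern[i] = (char)(rand() & 0xff);
}
```

main() would seed from the clock and report it, e.g.
`unsigned int seed = time(NULL); printf("pattern seed: %u\n", seed);`.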

> +	int pattern_size = ARRAY_SIZE(pattern);
> +
> +	src_addr = get_source_mapping(c);
> +	if (!src_addr) {
> +		ret = -1;
> +		goto out;
> +	}
> +
> +	/* Set byte pattern */
> +	for (i = 0; i < c.region_size; i++) {
> +		for (j = 0; i+j < c.region_size && j < pattern_size; j++)
> +			memset((char *) src_addr + i+j, pattern[j], 1);
> +		i += pattern_size-1;
> +	}
> +
> +	align_mask = ~(c.dest_alignment - 1);
> +	offset = (c.overlapping) ? -c.dest_alignment : c.dest_alignment;

A comment for what the above two lines are doing would be a nice touch.
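Something along these lines -- the comment wording and the helper are mine,
so double-check them against the intent of the patch:

```c
/*
 * Paraphrasing what those two lines do:
 *
 * align_mask rounds an address down to the requested destination
 * alignment (which must be a power of two), and offset places the
 * destination one alignment-unit away from that -- on the overlapping
 * side (negative) when c.overlapping is set.
 */
static long long dest_hint(long long addr, long long dest_alignment,
			   int overlapping)
{
	long long align_mask = ~(dest_alignment - 1);
	long long offset = overlapping ? -dest_alignment : dest_alignment;

	return (addr & align_mask) + offset;
}
```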

...
> +	start_ns = t_start.tv_sec * 1000000000ULL + t_start.tv_nsec;
> +	end_ns = t_end.tv_sec * 1000000000ULL + t_end.tv_nsec;

A const or a #define for all those zeroes would help.
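e.g. (the constant name follows the usual kernel spelling, but that's my
choice here, and the little conversion helper is optional):

```c
#include <time.h>

#define NSEC_PER_SEC 1000000000ULL

/* Collapse a struct timespec into a single nanosecond count. */
static long long timespec_to_ns(struct timespec t)
{
	return t.tv_sec * NSEC_PER_SEC + t.tv_nsec;
}
```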

...
> +int main(int argc, char *argv[])
> +{
> +	int failures = 0;
> +	int i;
> +
> +	struct test test_cases[] = {
> +		/* Expected mremap failures */
> +		MAKE_TEST(_4KB, _4KB, _4KB, 1 /* overlaps */, 1 /* fails */,

Named flags instead of 1's and 0's would avoid the need for messy comments.

> +			  "mremap - Source and Destination Regions Overlapping"),
> +		MAKE_TEST(_4KB, _1KB, _4KB, 0 /* overlaps */, 1 /* fails */,
> +			  "mremap - Destination Address Misaligned (1KB-aligned)"),
> +		MAKE_TEST(_1KB, _4KB, _4KB, 0 /* overlaps */, 1 /* fails */,
> +			  "mremap - Source Address Misaligned (1KB-aligned)"),
> +
> +		/* Src addr PTE aligned */
> +		MAKE_SIMPLE_TEST(PTE, PTE, _8KB),
> +
> +		/* Src addr 1MB aligned */
> +		MAKE_SIMPLE_TEST(_1MB, PTE, _2MB),
> +		MAKE_SIMPLE_TEST(_1MB, _1MB, _2MB),
> +
> +		/* Src addr PMD aligned */
> +		MAKE_SIMPLE_TEST(PMD, PTE, _4MB),
> +		MAKE_SIMPLE_TEST(PMD, _1MB, _4MB),
> +		MAKE_SIMPLE_TEST(PMD, PMD, _4MB),
> +
> +		/* Src addr PUD aligned */
> +		MAKE_SIMPLE_TEST(PUD, PTE, _2GB),
> +		MAKE_SIMPLE_TEST(PUD, _1MB, _2GB),
> +		MAKE_SIMPLE_TEST(PUD, PMD, _2GB),
> +		MAKE_SIMPLE_TEST(PUD, PUD, _2GB),


Too concise. It's not fun lining these up with the stdout report.
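Circling back to the named-flags comment above: something like this (the
names are invented, pick whatever reads best) would let the 1's, 0's, and
their comments go away:

```c
/* Suggested named flags for the MAKE_TEST() arguments. */
enum {
	NON_OVERLAPPING = 0,
	OVERLAPPING = 1,
};

enum {
	EXPECT_SUCCESS = 0,
	EXPECT_FAILURE = 1,
};
```

Then the table entries read like
`MAKE_TEST(_4KB, _4KB, _4KB, OVERLAPPING, EXPECT_FAILURE, "...")`,
with no `/* overlaps */` comments needed.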


thanks,
-- 
John Hubbard
NVIDIA
