Message-ID: <da1c3dcb-5296-47bd-b5ed-9cb8833377cf@arm.com>
Date: Thu, 20 Feb 2025 21:18:42 +0530
From: Dev Jain <dev.jain@....com>
To: Brendan Jackman <jackmanb@...gle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>, Shuah Khan <shuah@...nel.org>
Cc: Mateusz Guzik <mjguzik@...il.com>, linux-mm@...ck.org,
linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] selftests/mm: Don't fail uffd-stress if too many CPUs
On 20/02/25 8:33 pm, Brendan Jackman wrote:
> This calculation divides a fixed parameter by an environment-dependent
> parameter i.e. the number of CPUs.
>
> The simple way to avoid machine-specific failures here is to just put a
> cap on the max value of the latter.
I haven't read the test, but if nr_cpus is being computed, that value must matter to the test somehow? Would it potentially be wrong to let the test run with nr_cpus != the actual number of CPUs?

Also, if the patch is correct, wouldn't it be better to also print a diagnostic telling the user that the number of CPUs is being capped for the test to run? Something like the sketch below.
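Untested standalone sketch, just to illustrate the kind of message I mean; the cap value, the sizes and the message text here are made up for illustration, not taken from the actual test:

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical cap, mirroring the hard-coded 32 in the patch. */
    #define CAP_NR_CPUS 32

    int main(void)
    {
            long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
            /* Hypothetical stand-ins for the test's real parameters. */
            long bytes = 128L << 20;
            long page_size = sysconf(_SC_PAGE_SIZE);
            long nr_pages_per_cpu;

            if (nr_cpus > CAP_NR_CPUS) {
                    /* Tell the user we are not exercising all CPUs. */
                    fprintf(stderr,
                            "# capping nr_cpus from %ld to %d so nr_pages_per_cpu stays non-zero\n",
                            nr_cpus, CAP_NR_CPUS);
                    nr_cpus = CAP_NR_CPUS;
            }

            nr_pages_per_cpu = bytes / page_size / nr_cpus;
            printf("nr_pages_per_cpu = %ld\n", nr_pages_per_cpu);
            return 0;
    }

That way a user looking at a run on a large machine at least knows the test silently dropped down to fewer CPUs.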
>
> Suggested-by: Mateusz Guzik <mjguzik@...il.com>
> Signed-off-by: Brendan Jackman <jackmanb@...gle.com>
> ---
> tools/testing/selftests/mm/uffd-stress.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/tools/testing/selftests/mm/uffd-stress.c b/tools/testing/selftests/mm/uffd-stress.c
> index 1facfb79e09aa4113e344d7d90dec06a37264058..f306accbef255c79bc3eeba8b9e42161a88fc10e 100644
> --- a/tools/testing/selftests/mm/uffd-stress.c
> +++ b/tools/testing/selftests/mm/uffd-stress.c
> @@ -453,6 +453,10 @@ int main(int argc, char **argv)
> }
>
> nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
> + if (nr_cpus > 32) {
> + /* Don't let calculation below go to zero. */
> + nr_cpus = 32;
> + }
>
> nr_pages_per_cpu = bytes / page_size / nr_cpus;
> if (!nr_pages_per_cpu) {
>