Message-ID: <877cwxn7wj.wl-tiwai@suse.de>
Date: Sat, 04 Feb 2023 09:37:16 +0100
From: Takashi Iwai <tiwai@...e.de>
To: Mark Brown <broonie@...nel.org>
Cc: Jaroslav Kysela <perex@...ex.cz>, Takashi Iwai <tiwai@...e.com>,
Shuah Khan <shuah@...nel.org>, alsa-devel@...a-project.org,
linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kselftest/alsa: Run PCM tests for multiple cards in parallel
On Fri, 03 Feb 2023 20:52:47 +0100,
Mark Brown wrote:
>
> With each test taking 4 seconds, the runtime of pcm-test can add up.
> Since each card in the system is generally physically independent and
> unaffected by what is going on with other cards, we can mitigate this by
> testing each card in parallel. Make a list of cards as we enumerate the
> system, start a thread for each card, and then join the threads to
> ensure they have all finished. Each thread runs the same tests we
> currently run for each PCM on the card before exiting.
>
> The list of PCMs is kept global since it helps with global operations
> such as working out the planned number of tests and identifying missing
> PCMs, and it seemed neater to check for PCMs on the right card in the
> card thread than to make every PCM loop iterate over cards as well.
>
> We don't run per-PCM tests in parallel since on embedded systems
> resources may be shared between the PCMs on a card, so operations on
> one PCM may constrain what can be done on another PCM on the same card,
> leading to potentially unstable results.
>
> We use a mutex to ensure that the reporting of results is serialised
> and that we don't have issues with anything like the current test
> number. We could do this in the kselftest framework, but that might
> cause problems for other tests that are doing lower-level testing and
> building in constrained environments such as nolibc, so this seems more
> sensible.
>
> Note that the ordering of the tests can't be guaranteed as things
> stand. This does not seem like a major problem, since the numbering of
> tests often changes as test programs are changed, so results parsers
> are expected to rely on the test name rather than the test number. We
> also now prefix the machine-generated test name when printing the
> description of the test, since this is logged before streaming starts.
>
> On my two card desktop system this reduces the overall runtime by a
> third.
>
> Signed-off-by: Mark Brown <broonie@...nel.org>
Thanks, applied now.
Takashi