Message-ID: <Y8Ft7q/NBK5utN+I@alley>
Date:   Fri, 13 Jan 2023 15:45:10 +0100
From:   Petr Mladek <pmladek@...e.com>
To:     Petr Pavlu <petr.pavlu@...e.com>
Cc:     mcgrof@...nel.org, david@...hat.com, linux-modules@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/4] selftests: kmod: Add tests for merging same-name
 module load requests

On Thu 2023-01-12 10:03:10, Petr Pavlu wrote:
> [A different fix than the one from this thread was selected, but it is still
> useful to discuss these test cases and whether they should be added in some form.]
> 
> On 10/17/22 15:51, Petr Mladek wrote:
> > On Sun 2022-10-16 14:30:31, Petr Pavlu wrote:
> >> Add two tests to check that loading the same module multiple times in
> >> parallel results in only one real attempt to initialize it.
> >> Synchronization of the loads is done by waiting 1000 ms in the init
> > 
> > I do not have good experience with this kind of synchronization.
> > It is usually not reliable. The tests also tend to be very slow,
> > especially when false positives are worked around by prolonging
> > the delay.
> > 
> > An alternative solution would be to have two modules:
> > 
> > 1st module would provide a counter, for example:
> > 
> > int modB_load_cnt;
> > module_param(modB_load_cnt, int, 0444);
> > EXPORT_SYMBOL(modB_load_cnt);
> > 
> > EXPORT_SYMBOL() should allow the 2nd module to increment
> > the counter directly.
> > 
> > module_param() should make the value readable via
> > /sys/module/modA/parameters/modB_load_cnt, where it can be
> > checked by kmod.sh.
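> >
> > The 2nd module side might then look like this (a minimal sketch;
> > the names follow the example above, with modA exporting the counter):
> >
> > #include <linux/init.h>
> > #include <linux/module.h>
> >
> > extern int modB_load_cnt;       /* exported by modA */
> >
> > static int __init modB_init(void)
> > {
> >         /* Count every real init. kmod.sh can compare the value in
> >          * /sys/module/modA/parameters/modB_load_cnt with the number
> >          * of parallel load attempts.
> >          */
> >         modB_load_cnt++;
> >         return 0;
> > }
> > module_init(modB_init);
> >
> > MODULE_LICENSE("GPL");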
> 
> I agree that it would be best to avoid any synchronization based on timeouts
> in these tests.
> 
> My reading is that your idea would allow the tests to drop the measurement
> of how long it took in total to process all module inserts. I was able to
> implement this change.
> 
> Unfortunately, it doesn't help with the 1-second timeout that the
> kmod_test_0014 module (modB in your description) has in its init function.
> Its purpose is to make sure that all parallel loads of the same module
> started by kmod.sh manage to reach add_unformed_module(), sleep there and
> therefore hit the updated logic.
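>
> In other words, the module's init is essentially just this (a simplified
> sketch of the test module; the function name is illustrative):
>
> #include <linux/delay.h>
> #include <linux/init.h>
> #include <linux/module.h>
>
> static int __init kmod_test_init(void)
> {
>         /* Block long enough for all parallel loads started by kmod.sh
>          * to reach add_unformed_module() and start waiting there.
>          */
>         msleep(1000);
>         return 0;
> }
> module_init(kmod_test_init);
>
> MODULE_LICENSE("GPL");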

I see.

> One option to avoid this timeout is to extend modA to register a kprobe on
> finished_loading() and to export via a parameter which of the loads started
> by kmod.sh have reached this point. This approach works OK on my system and
> a prototype is pasted at the end of this mail. Two shortcomings are that it
> relies on internal knowledge of the module loader code, and that
> finished_loading() might not always be available for probing because it
> could get inlined in some configurations.

Yeah, it is a bit fragile as well.
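
For reference, the probing part of such a prototype would look roughly
like this (a sketch, not the actual prototype from the end of your mail;
a real test might also want the counter to be atomic):

#include <linux/kprobes.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static int finished_loading_cnt;
module_param(finished_loading_cnt, int, 0444);

static int probe_pre(struct kprobe *p, struct pt_regs *regs)
{
        /* Called each time a load request reaches finished_loading(). */
        finished_loading_cnt++;
        return 0;
}

static struct kprobe kp = {
        .symbol_name  = "finished_loading",
        .pre_handler  = probe_pre,
};

static int __init modA_init(void)
{
        /* Fails when finished_loading() got inlined and the symbol
         * is not available for probing.
         */
        return register_kprobe(&kp);
}
module_init(modA_init);

static void __exit modA_exit(void)
{
        unregister_kprobe(&kp);
}
module_exit(modA_exit);

MODULE_LICENSE("GPL");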

> To summarize, I see the following options for these tests:
> * Use a timeout to synchronize the loads.
> * Use the outlined kprobe approach.
> * Don't add these tests at all.

Yet another solution would be to add support for this test into
the module loader code. I mean that add_unformed_module() might
increment a global counter when a certain module is waiting there.
The global counter might then be used to unblock the init()
callback.

The test module might be distinguished, for example, by a
test-specific module info string. See check_modinfo_livepatch(),
which looks for the info string defined in livepatch modules,
MODULE_INFO(livepatch, "Y"); in samples/livepatch/livepatch-sample.c.
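
A rough sketch of the idea (fragments only; the flag and counter
names are made up for illustration, this is not actual loader code):

/* In the test module: */
MODULE_INFO(kmod_test_delay, "Y");

/* At modinfo-check time, modelled on check_modinfo_livepatch(): */
if (get_modinfo(info, "kmod_test_delay"))
        mod->kmod_test_delay = true;

/* In add_unformed_module(), where same-name loads wait: */
if (mod->kmod_test_delay)
        atomic_inc(&kmod_test_wait_cnt);

The test's init() could then wait until the counter reaches the
expected number of parallel loads instead of sleeping for a fixed
time. The counter would have to be exposed somehow, for example
via sysfs.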

That said, I do not like this much either. I am not sure if it is
more or less crazy than the kprobe approach.


> Any opinions on what would be preferred? I'm leaning towards not adding
> these tests, as they look fragile to me.

I do not have a strong opinion.

My experience is that some tests are not worth the effort. The
maintenance burden or false positives might cause more harm than good.

My feeling is that this one belongs in this category.
We could keep the timeout, which makes it error-prone,
or use some hacks, which make it hard to maintain.

Best Regards,
Petr
