Date: Fri, 13 Oct 2023 11:58:02 -0700
From: Luis Chamberlain <>
To: Joey Jiao <>
Subject: Re: [PATCH v5] module: Add CONFIG_MODULE_DISABLE_INIT_FREE option


Thanks for working hard on expanding the commit log to try to
describe the rationale for this. I'd like review from the linux-hardening
folks and at least one syzkaller developer.

On Fri, Oct 13, 2023 at 11:57:11AM +0530, Joey Jiao wrote:
> Syzkaller uses the _RET_IP_ (also known as pc) to decode covered
> file/function/line,

OK, but that seems immediately limited, as your Kconfig confines this to
!CONFIG_RANDOMIZE_BASE, so it won't work with things like kaslr.

> and it employs pc ^ hash(prev_pc) (referred to as
> signal) to indicate covered edge. If the pc for the same file/line
> keeps changing across reboots, syzkaller will report incorrect coverage
> data.

Yeah, that seems pretty limiting. Why not use something like the
effort Alessandro Carminati is putting forward to map symbols a bit
more accurately to files / lines, as with his work on
scripts/ for kallsyms? Although that effort helps
tracers differentiate duplicate symbols, it would seem to help
fuzzers too, even if CONFIG_RANDOMIZE_BASE or kaslr is enabled.


> Additionally, even though kaslr can be disabled, we still cannot get the
> same covered edges for modules, because both pc and prev_pc change,
> thus altering pc ^ hash(prev_pc).
> To facilitate syzkaller coverage, it is crucial for both the core kernel
> and modules to remain at the same addresses across reboots.

The problem I see with this is that, even if it does help, the argument
being put forward here is that the below recipe is completely
deterministic, and it's not obviously clear to me that it truly is.

> So, the following steps are necessary:
> - In userspace:
>   1) To maintain an uninterrupted loading sequence, it is recommended to
> execute modprobe commands one module at a time, to avoid any
> interference from the scheduler.
>   2) Avoid unloading any module during fuzzing.
> - In kernel:
>   1) Disable CONFIG_RANDOMIZE_BASE to load the core kernel at the same
> address consistently.
>   2) To ensure deterministic module loading at the same address, enabling
> CONFIG_MODULE_DISABLE_INIT_FREE prevents the asynchronous freeing of init
> sections. Without this option, the next module could be loaded into the
> previously freed init pages of a previously loaded module.

Is this well documented somewhere as a requirement for kernels running
syzkaller?

Because clearly CONFIG_MODULE_DISABLE_INIT_FREE is showing that the
above recipe was *not* deterministic and that there were holes in it.
Who's to say this completes the determinism?

Now, if the justification is that it helps current *state of the art*
fuzzing mapping... that's different, and then this could just be
temporary until a more accurate deterministic mechanism is considered.

> It is important to note that this option is intended for fuzzing tests only
> and should not be set as the default configuration in production builds.


> Signed-off-by: Joey Jiao <>
> ---
>  kernel/module/Kconfig | 13 +++++++++++++
>  kernel/module/main.c  |  3 ++-
>  2 files changed, 15 insertions(+), 1 deletion(-)
> diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
> index 33a2e991f608..d0df0b5997b0 100644
> --- a/kernel/module/Kconfig
> +++ b/kernel/module/Kconfig
> @@ -389,4 +389,17 @@ config MODULES_TREE_LOOKUP
>  	def_bool y
>  	depends on PERF_EVENTS || TRACING || CFI_CLANG
> 
> +config MODULE_DISABLE_INIT_FREE
> +	bool "Disable freeing of init sections"
> +	default n
> +	depends on !RANDOMIZE_BASE
> +	help
> +	  By default, the kernel frees init sections after a module is fully
> +	  loaded.
> +
> +	  Enabling MODULE_DISABLE_INIT_FREE allows users to prevent the freeing
> +	  of init sections. It is particularly helpful for syzkaller fuzzing,
> +	  ensuring that the module consistently loads at the same address
> +	  across reboots.

But that seems false; I don't see proof of that yet. Helping it be more
accurate, maybe. If the docs for syzkaller clearly spell these
requirements out, then maybe this is valuable upstream for now, but
in the meantime the assumption above is just a bit too large for me
to accept as true.

> +
>  endif # MODULES
> diff --git a/kernel/module/main.c b/kernel/module/main.c
> index 98fedfdb8db5..d226df3a6cf6 100644
> --- a/kernel/module/main.c
> +++ b/kernel/module/main.c
> @@ -2593,7 +2593,8 @@ static noinline int do_init_module(struct module *mod)
>  	 * be cleaned up needs to sync with the queued work - ie
>  	 * rcu_barrier()
>  	 */
> -	if (llist_add(&freeinit->node, &init_free_list))
> +	if (!IS_ENABLED(CONFIG_MODULE_DISABLE_INIT_FREE) &&
> +	    llist_add(&freeinit->node, &init_free_list))
>  		schedule_work(&init_free_wq);
>  	mutex_unlock(&module_mutex);
> -- 
> 2.42.0
