Message-ID: <5ad7dffa-204e-4d37-acf6-0206d7a87f37@I-love.SAKURA.ne.jp>
Date: Thu, 25 Jul 2024 13:24:18 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PULL] orphaned patches for 6.11

On 2024/07/25 4:34, Linus Torvalds wrote:
> But honestly, the minimal fix would seem to be this two-liner:
> 
>   --- a/kernel/ksysfs.c
>   +++ b/kernel/ksysfs.c
>   @@ -92,7 +92,9 @@ static ssize_t profiling_store(struct kobject *kobj,
>                                    const char *buf, size_t count)
>    {
>         int ret;
>   +     static DEFINE_MUTEX(prof_store_mutex);
> 
>   +     guard(mutex)(&prof_store_mutex);
>         if (prof_on)
>                 return -EEXIST;
>         /*
> 
> which I have admittedly not tested at all, but seems trivial.

If guard() is already backported to older kernels, that would work for
the kernel/ksysfs.c part. But I prefer a killable version, because
profile_init() performs a large memory allocation, and the current
thread waiting on this lock might be chosen as an OOM victim while that
allocation is in progress.
I wish we had a macro that only does the unlock upon function exit.
Then we could combine an explicit killable lock with an automatic
unlock (something like golang's "defer" statement).
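
For illustration, a rough sketch of the pattern I mean, built on the
__cleanup() attribute the kernel already uses for guard(); the
auto_mutex_unlock() helper and the function below are made up for this
example, not existing API:

/*
 * Unlock-only cleanup helper; illustrative only.
 */
static void auto_mutex_unlock(struct mutex **m)
{
	mutex_unlock(*m);
}

static ssize_t profiling_store_sketch(const char *buf, size_t count)
{
	static DEFINE_MUTEX(prof_store_mutex);

	/*
	 * Explicit, killable lock: if this thread is chosen as an OOM
	 * victim while waiting here, it can bail out instead of blocking.
	 */
	if (mutex_lock_killable(&prof_store_mutex))
		return -EINTR;
	/* Automatic unlock (and only unlock) on every return path below. */
	struct mutex *unlock __cleanup(auto_mutex_unlock) = &prof_store_mutex;

	if (prof_on)
		return -EEXIST;
	/* ... rest of profiling_store(), including profile_init() ... */
	return count;
}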



> And once that "no more multiple concurrent profile initialization" bug
> is fixed, everything else is fine. The assignment to "prof_buffer"
> will now be the last thing that is done, and when it's done the
> profiling should all be good.

Unfortunately, there is a race where KMSAN would complain even if
profile initialization is serialized.



profile_init() {
  (...snipped...)
                                     profile_tick(int type) {
                                       struct pt_regs *regs = get_irq_regs();
  if (!alloc_cpumask_var(&prof_cpu_mask, GFP_KERNEL))
    return -ENOMEM;
                                       if (!user_mode(regs) && cpumask_available(prof_cpu_mask) &&
                                           cpumask_test_cpu(smp_processor_id(), prof_cpu_mask))
                                         // cpumask_available(prof_cpu_mask) returns true as soon as
                                         // alloc_cpumask_var(&prof_cpu_mask) completes, but KMSAN
                                         // complains about an uninit-value here, because with plain
                                         // GFP_KERNEL (no __GFP_ZERO) prof_cpu_mask remains
                                         // uninitialized until cpumask_copy() below completes.
                                         profile_hit(type, (void *)profile_pc(regs));
                                     }
  cpumask_copy(prof_cpu_mask, cpu_possible_mask);
  prof_buffer = kzalloc(buffer_bytes, GFP_KERNEL|__GFP_NOWARN);
  (...snipped...)
}



Edward Adam Davis and I did s/alloc_cpumask_var/zalloc_cpumask_var/ so
that prof_cpu_mask starts out zeroed. But I also removed the
cpumask_copy() call except when CONFIG_SMP=n, because
cpuhp_setup_state(CPUHP_AP_ONLINE_DYN) in create_proc_profile() calls
cpumask_set_cpu() as needed. That is, cpumask_copy() is currently
called needlessly and inappropriately.
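
Roughly, the kernel/profile.c change amounts to the following (a sketch
of the idea only, not the exact committed patch; hunk line numbers
omitted, and IS_ENABLED() is used here just for illustration):

  --- a/kernel/profile.c
  +++ b/kernel/profile.c
  @@ ... @@ profile_init
  -	if (!alloc_cpumask_var(&prof_cpu_mask, GFP_KERNEL))
  +	/* Zeroed mask: profile_tick() never reads uninitialized bits. */
  +	if (!zalloc_cpumask_var(&prof_cpu_mask, GFP_KERNEL))
   		return -ENOMEM;
   
  -	cpumask_copy(prof_cpu_mask, cpu_possible_mask);
  +	/*
  +	 * With CONFIG_SMP=y, cpuhp_setup_state(CPUHP_AP_ONLINE_DYN) in
  +	 * create_proc_profile() sets the bits via cpumask_set_cpu() as
  +	 * CPUs come online, so only the CONFIG_SMP=n case needs the copy.
  +	 */
  +	if (!IS_ENABLED(CONFIG_SMP))
  +		cpumask_copy(prof_cpu_mask, cpu_possible_mask);
   
   	prof_buffer = kzalloc(buffer_bytes, GFP_KERNEL|__GFP_NOWARN);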

