Message-ID: <874kg2kpwd.mognet@arm.com>
Date:   Mon, 19 Apr 2021 20:58:26 +0100
From:   Valentin Schneider <valentin.schneider@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     syzbot <syzbot+9362b31a2e0cad8b749d@...kaller.appspotmail.com>,
        bp@...en8.de, dwmw@...zon.co.uk, hpa@...or.com,
        linux-kernel@...r.kernel.org, luto@...nel.org, mingo@...hat.com,
        syzkaller-bugs@...glegroups.com, tglx@...utronix.de, x86@...nel.org
Subject: Re: [syzbot] WARNING in kthread_is_per_cpu

On 19/04/21 20:45, Peter Zijlstra wrote:
> On Mon, Apr 19, 2021 at 12:31:22PM +0100, Valentin Schneider wrote:
>
>>   if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
>>                                  `\
>>                                    to_kthread(p);
>>                                     `\
>>                                       WARN_ON(!(p->flags & PF_KTHREAD));
>>
>> ... Huh?
>
> Something like so perhaps?
>

Looks about right, IIUC the key being:

  p->flags & PF_KTHREAD + p->set_child_tid => the struct kthread is
  persistent

  p->flags & PF_KTHREAD => you may or may not have a struct kthread (see
  the kernel_thread() uses in kernel/umh.c). PF_KTHREAD isn't even
  guaranteed to persist (begin_new_exec()), which seems to be what syzbot
  hit.
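
For reference, the kernel_thread() use I have in mind (paraphrased from
memory, kernel/umh.c, so take the exact shape with a grain of salt):

  /*
   * Usermode helpers are spawned via kernel_thread(), so they get
   * PF_KTHREAD, but they never go through kthread() and thus never get a
   * struct kthread hung off ->set_child_tid. The exec in
   * call_usermodehelper_exec_async() later clears PF_KTHREAD via
   * begin_new_exec().
   */
  pid = kernel_thread(call_usermodehelper_exec_async, sub_info,
		      CLONE_PARENT | SIGCHLD);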

I'd be happy to see is_per_cpu_kthread() die, but that's somewhat
orthogonal to this. For now, your patch does need the tiny extra below.

While we're at it, does free_kthread_struct() want the __to_kthread()
treatment as well? The other to_kthread() callsites looked like they only
made sense with a "proper" kthread anyway.
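
i.e. something like the below (untested sketch, from memory of the current
free_kthread_struct(); whether it's actually needed depends on whether
free_task() can ever hand us a task that has already lost PF_KTHREAD):

  void free_kthread_struct(struct task_struct *k)
  {
	struct kthread *kthread;

	/*
	 * Can be NULL if this kthread was created by kernel_thread()
	 * or if kmalloc() in kthread() failed.
	 */
	kthread = __to_kthread(k);
  #ifdef CONFIG_BLK_CGROUP
	WARN_ON_ONCE(kthread && kthread->blkcg_css);
  #endif
	kfree(kthread);
  }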

---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 49636a49843f..8b470c2d5680 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7612,7 +7612,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 		return 0;
 
 	/* Disregard pcpu kthreads; they are where they need to be. */
-	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+	if (kthread_is_per_cpu(p))
 		return 0;
 
 	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {

> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 1578973c5740..eeba40df61ac 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -78,6 +78,14 @@ static inline void set_kthread_struct(void *kthread)
>       current->set_child_tid = (__force void __user *)kthread;
>  }
>
> +static inline struct kthread *__to_kthread(struct task_struct *k)
> +{
> +	void *kthread = (__force void *)k->set_child_tid;
> +	if (kthread && !(k->flags & PF_KTHREAD))
> +		kthread = NULL;
> +	return kthread;
> +}
> +
>  static inline struct kthread *to_kthread(struct task_struct *k)
>  {
>       WARN_ON(!(k->flags & PF_KTHREAD));
> @@ -516,7 +524,7 @@ void kthread_set_per_cpu(struct task_struct *k, int cpu)
>
>  bool kthread_is_per_cpu(struct task_struct *k)
>  {
> -	struct kthread *kthread = to_kthread(k);
> +	struct kthread *kthread = __to_kthread(k);
>       if (!kthread)
>               return false;
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 3384ea74cad4..dc6311bd6986 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7658,7 +7658,7 @@ static void balance_push(struct rq *rq)
>        * histerical raisins.
>        */
>       if (rq->idle == push_task ||
> -	    ((push_task->flags & PF_KTHREAD) && kthread_is_per_cpu(push_task)) ||
> +	    kthread_is_per_cpu(push_task) ||
>           is_migration_disabled(push_task)) {
>
>               /*
