Message-ID: <8490f760-44ed-fbae-b91e-671590bdb8d6@arm.com>
Date:   Fri, 3 Jul 2020 12:12:46 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>
Cc:     linux-kernel@...r.kernel.org, vincent.guittot@...aro.org,
        mgorman@...e.de, Oleg Nesterov <oleg@...hat.com>,
        david@...morbit.com
Subject: Re: [RFC][PATCH] sched: Better document ttwu()

On 02/07/2020 14:52, Peter Zijlstra wrote:
> 
> Dave hit the problem fixed by commit:
> 
>   b6e13e85829f ("sched/core: Fix ttwu() race")
> 
> and failed to understand much of the code involved. Per his request a
> few comments to (hopefully) clarify things.
> 
> Requested-by: Dave Chinner <david@...morbit.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>

LGTM. Just a couple of nitpicks below.

[...]

> + * Special state:
> + *
> + * System-calls and anything external will use task_rq_lock() which acquires
> + * both p->lock and rq->lock. As a consequence the state they change is stable

s/p->lock/p->pi_lock ?
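
For reference (simplified, from my reading of kernel/sched/sched.h; the real
helper loops and re-checks task_rq(p) after taking rq->lock):

	static struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
	{
		struct rq *rq;

		/* p->pi_lock first ... */
		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
		rq = task_rq(p);
		/* ... then rq->lock */
		raw_spin_lock(&rq->lock);
		return rq;
	}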

> + * while holding either lock:
> + *
> + *  - sched_setaffinity():	p->cpus_ptr
> + *  - set_user_nice():		p->se.load, p->static_prio

Doesn't set_user_nice() also change p->prio and p->normal_prio, so
p->*prio ?
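
IIUC, set_user_nice() does roughly:

	p->static_prio = NICE_TO_PRIO(nice);
	set_load_weight(p, true);	/* p->se.load */
	p->prio = effective_prio(p);	/* also refreshes p->normal_prio */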

> + *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio, p->se.load,
> + *				p->dl.dl_{runtime, deadline, period, flags, bw, density}

p->rt_priority ?
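
FWIW, __setscheduler_params() (abridged, from my reading) also does:

	p->policy = policy;
	p->static_prio = NICE_TO_PRIO(attr->sched_nice);	/* fair policies */
	p->rt_priority = attr->sched_priority;
	p->normal_prio = normal_prio(p);
	set_load_weight(p, true);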

> + *  - sched_setnuma():		p->numa_preferred_nid
> + *  - sched_move_task()/
> + *    cpu_cgroup_fork():	p->sched_task_group

Maybe also add set_cpus_allowed_ptr() -> __set_cpus_allowed_ptr() (like
sched_setaffinity()) ?
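
i.e. (roughly):

	int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
	{
		/* same path as sched_setaffinity(); changes p->cpus_ptr under task_rq_lock() */
		return __set_cpus_allowed_ptr(p, new_mask, false);
	}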

[...]

> @@ -3134,8 +3274,12 @@ static inline void prepare_task(struct task_struct *next)
>  	/*
>  	 * Claim the task as running, we do this before switching to it
>  	 * such that any running task will have this set.
> +	 *
> +	 * __schedule()'s rq->lock and smp_mb__after_spin_lock() orders this
> +	 * store against prior state change of @next, also see
> +	 * try_to_wake_up(), specifically smp_load_acquire(&p->on_cpu).

s/smp_load_acquire/smp_cond_load_acquire ?
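
i.e. I was thinking of the spin in try_to_wake_up() (simplified):

	/* prepare_task(), under __schedule()'s rq->lock: */
	next->on_cpu = 1;

	/* try_to_wake_up(), waiting for the previous owner of @p to finish: */
	smp_cond_load_acquire(&p->on_cpu, !VAL);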

[...]
