Message-ID: <6eefe729-2323-d9a1-9903-547e2fc63ab8@bytedance.com>
Date:   Thu, 18 Aug 2022 18:48:39 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     vincent.guittot@...aro.org, dietmar.eggemann@....com,
        mingo@...hat.com, rostedt@...dmis.org, bsegall@...gle.com,
        vschneid@...hat.com, linux-kernel@...r.kernel.org, tj@...nel.org
Subject: Re: [PATCH v5 7/9] sched/fair: allow changing cgroup of new forked
 task

On 2022/8/18 18:36, Peter Zijlstra wrote:
> On Thu, Aug 18, 2022 at 11:43:41AM +0800, Chengming Zhou wrote:
> 
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 8e3f1c3f0b2c..157f7461a08a 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -4550,11 +4550,11 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
>>  {
>>  	__sched_fork(clone_flags, p);
>>  	/*
>> -	 * We mark the process as NEW here. This guarantees that
>> +	 * We mark the process as running here. This guarantees that
>>  	 * nobody will actually run it, and a signal or other external
>>  	 * event cannot wake it up and insert it on the runqueue either.
>>  	 */
>> -	p->__state = TASK_NEW;
>> +	p->__state = TASK_RUNNING;
>>  
>>  	/*
>>  	 * Make sure we do not leak PI boosting priority to the child.
>> @@ -4672,7 +4672,6 @@ void wake_up_new_task(struct task_struct *p)
>>  	struct rq *rq;
>>  
>>  	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
>> -	WRITE_ONCE(p->__state, TASK_RUNNING);
>>  #ifdef CONFIG_SMP
>>  	/*
>>  	 * Fork balancing, do it here and not earlier because:
>> @@ -10290,36 +10289,19 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
>>  	sched_unregister_group(tg);
>>  }
> 
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index eba8a64f905a..e0d34ecdabae 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -11840,6 +11840,13 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
>>  #ifdef CONFIG_FAIR_GROUP_SCHED
>>  static void task_change_group_fair(struct task_struct *p)
>>  {
>> +	/*
>> +	 * We can't detach or attach a forked task that has not
>> +	 * been woken up by wake_up_new_task() yet.
>> +	 */
>> +	if (!p->on_rq && !p->se.sum_exec_runtime)
>> +		return;
>> +
>>  	detach_task_cfs_rq(p);
> 
> Wouldn't that be much clearer when expressed in TASK_NEW ?

Ah, I was stupid; I will change it to use TASK_NEW.

Thanks for your suggestion!
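
For illustration only, an untested sketch of how the TASK_NEW-based check could
look in task_change_group_fair(). It assumes sched_fork() keeps setting
TASK_NEW and wake_up_new_task() keeps setting TASK_RUNNING, as in the current
code; the use of READ_ONCE() here is an assumption, mirroring how p->__state is
read elsewhere:

static void task_change_group_fair(struct task_struct *p)
{
	/*
	 * A forked task that has not yet been woken up by
	 * wake_up_new_task() is still TASK_NEW, so there is no
	 * cfs_rq state to detach from or attach to.
	 */
	if (READ_ONCE(p->__state) == TASK_NEW)
		return;

	detach_task_cfs_rq(p);
	/* ... rest of the function unchanged ... */
}

Compared with testing !p->on_rq && !p->se.sum_exec_runtime, the condition is
self-describing and does not rely on sum_exec_runtime staying zero for a
freshly forked task.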
