Message-Id: <1501441414403369@web28g.yandex.ru>
Date:	Mon, 27 Oct 2014 12:49:29 +0300
From:	Kirill Tkhai <tkhai@...dex.ru>
To:	Peter Zijlstra <peterz@...radead.org>,
	Burke Libbey <burke.libbey@...pify.com>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"mingo@...nel.org" <mingo@...nel.org>
Subject: Re: [PATCH] sched: reset sched_entity depth on changing parent

I've dug into this and found that we really do need this.
I'll send a patch with a description soon.

24.10.2014, 22:19, "Kirill Tkhai" <tkhai@...dex.ru>:
> 24.10.2014, 19:58, "Peter Zijlstra" <peterz@...radead.org>:
>>  On Fri, Oct 24, 2014 at 11:07:46AM -0400, Burke Libbey wrote:
>>>   From 2014-02-15: https://lkml.org/lkml/2014/2/15/217
>>>
>>>   This issue was reported and patched, but it still occurs in some situations on
>>>   newer kernel versions.
>>>
>>>   [2249353.328452] BUG: unable to handle kernel NULL pointer dereference at 0000000000000150
>>>   [2249353.336528] IP: [<ffffffff810b1cf7>] check_preempt_wakeup+0xe7/0x210
>>>
>>>   se.parent gets out of sync with se.depth, causing a panic when the algorithm in
>>>   find_matching_se assumes they are correct. This patch forces se.depth to be
>>>   updated every time se.parent is, so they can no longer become desync'd.
>>>
>>>   CC: Ingo Molnar <mingo@...nel.org>
>>>   CC: Peter Zijlstra <peterz@...radead.org>
>>>   Signed-off-by: Burke Libbey <burke.libbey@...pify.com>
>>>   ---
>>>
>>>   I haven't been able to isolate the problem. Though I'm pretty confident this
>>>   fixes the issue I've been having, I have not been able to prove it.
>>  So this isn't correct, switching rq should not change depth. I suspect
>>  you're just papering over the issue by frequently resetting the value,
>>  which simply narrows the race window.
>
> Just a hypothesis.
>
> I was looking for the places where a task's task_group may change. I can't understand
> how a change of the parent's cgroup during fork() propagates to the child.
>
> The child's cgroup is the same as the parent's after dup_task_struct(). The only
> function that changes task_group is sched_move_task(), but we do not call it between
> dup_task_struct() and wake_up_new_task(). Shouldn't we do something like this?
>
> (compile tested only)
> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index cc18694..0ccbbdb 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7833,6 +7833,11 @@ static void cpu_cgroup_css_offline(struct cgroup_subsys_state *css)
>          sched_offline_group(tg);
>  }
>
> +static void cpu_cgroup_fork(struct task_struct *task)
> +{
> + sched_move_task(task);
> +}
> +
>  static int cpu_cgroup_can_attach(struct cgroup_subsys_state *css,
>                                   struct cgroup_taskset *tset)
>  {
> @@ -8205,6 +8210,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
>          .css_free = cpu_cgroup_css_free,
>          .css_online = cpu_cgroup_css_online,
>          .css_offline = cpu_cgroup_css_offline,
> + .fork = cpu_cgroup_fork,
>          .can_attach = cpu_cgroup_can_attach,
>          .attach = cpu_cgroup_attach,
>          .exit = cpu_cgroup_exit,
>
> Or should we just set tsk->sched_task_group?
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
--
