Message-ID: <c8ababc5-cb9e-58ba-2969-1e061bb564c8@arm.com>
Date:   Fri, 16 Aug 2019 15:31:51 +0100
From:   Valentin Schneider <valentin.schneider@....com>
To:     Liangyan <liangyan.peng@...ux.alibaba.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org, shanpeic@...ux.alibaba.com,
        xlpang@...ux.alibaba.com, pjt@...gle.com
Subject: Re: [PATCH] sched/fair: don't assign runtime for throttled cfs_rq

On 16/08/2019 15:02, Valentin Schneider wrote:
> On 16/08/2019 08:08, Liangyan wrote:
>> Please check the dmesg log below with “WARN_ON(cfs_rq->runtime_remaining > 0)”. If I apply my patch, the warning is gone. The reproducing case is appended at the end.
>>
> 
> [...]
> 
> Huh, thanks for the log & the reproducer. I'm still struggling to
> understand how we could hit the condition you're adding, since
> account_cfs_rq_runtime() shouldn't be called for throttled cfs_rqs (which
> I guess is the bug). Also, if the cfs_rq is throttled, shouldn't we
> prevent any further decrement of its ->runtime_remaining?
> 
> I had a look at the callers of account_cfs_rq_runtime():
> 
> - update_curr(). Seems safe at first glance since it has a cfs_rq->curr check
>   at the top, but that won't catch throttled cfs_rqs because AFAICT their curr
>   pointer isn't NULL'd on throttle.
> 
> - check_enqueue_throttle(). Already has a cfs_rq_throttled() check.
> 
> - set_next_task_fair(). Peter shuffled the whole set/put task thing
>   recently but last I looked it seemed all sane.
> 
> I'll try to make sense of it, but have also Cc'd Paul since unlike me he
> actually knows this stuff.
> 

Hah, seems like we get update_curr() calls on throttled rqs via
put_prev_entity():

[  151.538560]  put_prev_entity+0x8d/0x100
[  151.538562]  put_prev_task_fair+0x22/0x40
[  151.538564]  pick_next_task_fair+0x140/0x390
[  151.538566]  __schedule+0x122/0x6c0
[  151.538568]  schedule+0x2d/0x90
[  151.538570]  exit_to_usermode_loop+0x61/0x100
[  151.538572]  prepare_exit_to_usermode+0x91/0xa0
[  151.538573]  retint_user+0x8/0x8
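
For context, here's roughly what put_prev_entity() looks like around that call
(a sketch from memory of a ~v5.3 tree, not a verbatim quote): if the outgoing
entity is still queued we run update_curr() unconditionally, whether or not the
cfs_rq got throttled in the meantime, and ->curr is only cleared at the very end.

static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
{
	/*
	 * If @prev is still on the runqueue, deactivate_task() wasn't
	 * called, so its runtime stats still need updating - even if
	 * this cfs_rq has been throttled by now.
	 */
	if (prev->on_rq)
		update_curr(cfs_rq);

	/* throttle cfs_rqs exceeding runtime */
	check_cfs_rq_runtime(cfs_rq);

	if (prev->on_rq) {
		/* Put 'current' back into the tree. */
		__enqueue_entity(cfs_rq, prev);
		update_load_avg(cfs_rq, prev, 0);
	}

	/* ->curr is only cleared here, not on throttle */
	cfs_rq->curr = NULL;
}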

Debug warns:
-----8<-----
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1054d2cf6aaa..41e0e78de4fe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -828,6 +828,8 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
 }
 #endif /* CONFIG_SMP */
 
+static inline int cfs_rq_throttled(struct cfs_rq *cfs_rq);
+
 /*
  * Update the current task's runtime statistics.
  */
@@ -840,6 +842,8 @@ static void update_curr(struct cfs_rq *cfs_rq)
 	if (unlikely(!curr))
 		return;
 
+	WARN_ON(cfs_rq_throttled(cfs_rq));
+
 	delta_exec = now - curr->exec_start;
 	if (unlikely((s64)delta_exec <= 0))
 		return;
@@ -10169,6 +10173,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p)
 		struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 		set_next_entity(cfs_rq, se);
+		WARN_ON(cfs_rq_throttled(cfs_rq));
 		/* ensure bandwidth has been allocated on our new cfs_rq */
 		account_cfs_rq_runtime(cfs_rq, 0);
 	}
----->8-----

So I guess what we'd want there is something like
-----8<-----
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1054d2cf6aaa..b2c40f994aa9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -828,6 +828,8 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
 }
 #endif /* CONFIG_SMP */
 
+static inline int cfs_rq_throttled(struct cfs_rq *cfs_rq);
+
 /*
  * Update the current task's runtime statistics.
  */
@@ -840,6 +842,9 @@ static void update_curr(struct cfs_rq *cfs_rq)
 	if (unlikely(!curr))
 		return;
 
+	if (cfs_rq_throttled(cfs_rq))
+		return;
+
 	delta_exec = now - curr->exec_start;
 	if (unlikely((s64)delta_exec <= 0))
 		return;
----->8-----

but I still don't comprehend how we can get there in the first place.
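
For reference, the decrement this is all about happens in
__account_cfs_rq_runtime() (reached via the account_cfs_rq_runtime() wrapper,
which bails out early when bandwidth isn't enabled). A rough sketch of its
current shape, again not a verbatim quote:

static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
{
	/* a throttled cfs_rq reaching this keeps eating into its quota */
	cfs_rq->runtime_remaining -= delta_exec;

	if (likely(cfs_rq->runtime_remaining > 0))
		return;

	/*
	 * If we can't extend our runtime, resched so the active hierarchy
	 * can be throttled. AFAICT nothing on this path bails out for an
	 * already-throttled cfs_rq, which is the condition the patch under
	 * discussion adds.
	 */
	if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
		resched_curr(rq_of(cfs_rq));
}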
