Message-Id: <20210318195734.3579799-1-brho@google.com>
Date:   Thu, 18 Mar 2021 15:57:34 -0400
From:   Barret Rhoden <brho@...gle.com>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: [PATCH] sched: allow resubmits to queue_balance_callback()

Prior to this commit, if you submitted the same callback_head twice, it
would be enqueued twice, but only if it was the last (first-inserted)
callback on the list.  The first time it was submitted,
rq->balance_callback was NULL, so head->next was set to NULL.  That
defeated the "already queued" check in queue_balance_callback().

This commit changes the callback list such that whenever an item is on
the list, its head->next is not NULL.  The last element (first inserted)
will point to itself.  This allows us to detect and ignore any attempt
to reenqueue a callback_head.

Signed-off-by: Barret Rhoden <brho@...gle.com>
---

I might be missing something here, but this was my interpretation of
what the "if (unlikely(head->next))" check in queue_balance_callback()
is for.
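
In case a concrete demo helps review, below is a standalone userspace
sketch of the invariant (simplified types, no rq and no locking; list,
queue_cb(), run_cbs(), and work() are made-up stand-ins rather than the
kernel symbols):

#include <stdio.h>

struct callback_head {
	struct callback_head *next;
	void (*func)(struct callback_head *);
};

/* Plays the role of rq->balance_callback. */
static struct callback_head *list;

static void work(struct callback_head *head)
{
	printf("ran %p\n", (void *)head);
}

/* Enqueue under the new invariant: a queued head never has next == NULL. */
static void queue_cb(struct callback_head *head,
		     void (*func)(struct callback_head *))
{
	if (head->next)			/* already queued: ignore the resubmit */
		return;
	head->func = func;
	head->next = list ?: head;	/* tail points to itself, never NULL */
	list = head;
}

/* Drain the list, translating the self-pointer back into end-of-list. */
static void run_cbs(void)
{
	struct callback_head *head = list, *next;

	list = NULL;
	while (head) {
		next = head->next == head ? NULL : head->next;
		head->next = NULL;
		head->func(head);
		head = next;
	}
}

int main(void)
{
	struct callback_head a = { 0 }, b = { 0 };

	queue_cb(&a, work);
	queue_cb(&a, work);	/* ignored: a.next already points to a */
	queue_cb(&b, work);
	run_cbs();		/* runs b then a, each exactly once */
	return 0;
}

With the old "head->next = rq->balance_callback;" assignment, the
resubmit of 'a' above would slip past the check (a.next was still NULL)
and the callback would run twice; with the self-pointing tail it is
detected and dropped.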

 kernel/sched/core.c  | 3 ++-
 kernel/sched/sched.h | 6 +++++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d95dc3f4644..6322975032ea 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3669,7 +3669,8 @@ static void __balance_callback(struct rq *rq)
 	rq->balance_callback = NULL;
 	while (head) {
 		func = (void (*)(struct rq *))head->func;
-		next = head->next;
+		/* The last element points to itself */
+		next = head->next == head ? NULL : head->next;
 		head->next = NULL;
 		head = next;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 28709f6b0975..42629e153f83 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1389,11 +1389,15 @@ queue_balance_callback(struct rq *rq,
 {
 	lockdep_assert_held(&rq->lock);
 
+	/*
+	 * The last element on the list points to itself, so we can always
+	 * detect if head is already enqueued.
+	 */
 	if (unlikely(head->next))
 		return;
 
 	head->func = (void (*)(struct callback_head *))func;
-	head->next = rq->balance_callback;
+	head->next = rq->balance_callback ?: head;
 	rq->balance_callback = head;
 }
 
-- 
2.31.0.rc2.261.g7f71774620-goog
