Message-ID: <20251219035334.39790-3-kernellwp@gmail.com>
Date: Fri, 19 Dec 2025 11:53:26 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Sean Christopherson <seanjc@...gle.com>
Cc: K Prateek Nayak <kprateek.nayak@....com>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org,
	Wanpeng Li <wanpengli@...cent.com>
Subject: [PATCH v2 2/9] sched/fair: Add rate-limiting and validation helpers

From: Wanpeng Li <wanpengli@...cent.com>

Implement core safety mechanisms for yield deboost operations.

Add yield_deboost_rate_limit() for high-frequency gating to prevent
excessive overhead on compute-intensive workloads. The 6ms threshold
balances responsiveness with overhead reduction.

Add yield_deboost_validate_tasks() for comprehensive validation: both
tasks must be valid and distinct, the target must belong to
fair_sched_class and sit on the same runqueue, and both tasks must be
on_rq. The yielding task is guaranteed to be fair class by the caller.

The rate limiter prevents pathological high-frequency cases while
validation ensures only appropriate task pairs proceed. Both functions
are static and will be integrated in subsequent patches.
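
The rate-limiting pattern can be illustrated in isolation with the
following userspace sketch; struct rq_sketch and the manually supplied
timestamps are stand-ins for the kernel's struct rq and rq_clock(), not
part of this patch:

```c
#include <stdbool.h>
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL

/* Stand-in for the per-rq field added by this series. */
struct rq_sketch {
	uint64_t yield_deboost_last_time_ns;
};

/*
 * Returns true when the caller should skip the deboost: a previous
 * operation ran less than 6ms ago. On a pass, the timestamp is
 * refreshed so the next 6ms window starts now.
 */
static bool rate_limit(struct rq_sketch *rq, uint64_t now)
{
	uint64_t last = rq->yield_deboost_last_time_ns;

	if (last && (now - last) <= 6 * NSEC_PER_MSEC)
		return true;

	rq->yield_deboost_last_time_ns = now;
	return false;
}
```

With this shape, calls at t=1ms and t=8ms pass while a call at t=4ms is
rejected, matching the intended "at most one deboost per 6ms per rq"
behavior.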

v1 -> v2:
- Remove unnecessary READ_ONCE/WRITE_ONCE for per-rq fields accessed
  under rq->lock
- Change rq->clock to rq_clock(rq) helper for consistency
- Change yield_deboost_rate_limit() signature from (rq, now_ns) to (rq),
  obtaining time internally via rq_clock()
- Remove redundant sched_class check for p_yielding (already implied by
  rq->donor being fair)
- Simplify task_rq check to only verify p_target
- Change rq->curr to rq->donor for correct EEVDF donor tracking
- Move sysctl_sched_vcpu_debooster_enabled and NULL checks to caller
  (yield_to_deboost) for early exit before update_rq_clock()
- Simplify function signature by returning p_yielding directly instead
  of using output pointer parameters
- Add documentation explaining the 6ms rate limit threshold

Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
---
 kernel/sched/fair.c | 62 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 87c30db2c853..2f327882bf4d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9040,6 +9040,68 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev, struct t
 	}
 }
 
+/*
+ * Rate-limit yield deboost operations to prevent excessive overhead.
+ * Returns true if the operation should be skipped due to rate limiting.
+ *
+ * The 6ms threshold balances responsiveness with overhead reduction:
+ * - Short enough to allow timely yield boosting for lock contention
+ * - Long enough to prevent pathological high-frequency penalty application
+ *
+ * Called under rq->lock, so direct field access is safe.
+ */
+static bool yield_deboost_rate_limit(struct rq *rq)
+{
+	u64 now = rq_clock(rq);
+	u64 last = rq->yield_deboost_last_time_ns;
+
+	if (last && (now - last) <= 6 * NSEC_PER_MSEC)
+		return true;
+
+	rq->yield_deboost_last_time_ns = now;
+	return false;
+}
+
+/*
+ * Validate tasks for yield deboost operation.
+ * Returns the yielding task on success, NULL on validation failure.
+ *
+ * Checks: feature enabled, valid and distinct tasks, rate limit not hit,
+ * target is fair class and on this runqueue, both on_rq. Called under rq->lock.
+ *
+ * Note: p_yielding (rq->donor) is guaranteed to be fair class by the caller
+ * (yield_to_task_fair is only called when curr->sched_class == p->sched_class).
+ */
+static struct task_struct __maybe_unused *
+yield_deboost_validate_tasks(struct rq *rq, struct task_struct *p_target)
+{
+	struct task_struct *p_yielding;
+
+	if (!sysctl_sched_vcpu_debooster_enabled)
+		return NULL;
+
+	if (!p_target)
+		return NULL;
+
+	if (yield_deboost_rate_limit(rq))
+		return NULL;
+
+	p_yielding = rq->donor;
+	if (!p_yielding || p_yielding == p_target)
+		return NULL;
+
+	if (p_target->sched_class != &fair_sched_class)
+		return NULL;
+
+	if (task_rq(p_target) != rq)
+		return NULL;
+
+	if (!p_target->se.on_rq || !p_yielding->se.on_rq)
+		return NULL;
+
+	return p_yielding;
+}
+
 /*
  * sched_yield() is very simple
  */
-- 
2.43.0

