Message-ID: <20251110033232.12538-3-kernellwp@gmail.com>
Date: Mon, 10 Nov 2025 11:32:23 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
linux-kernel@...r.kernel.org,
kvm@...r.kernel.org,
Wanpeng Li <wanpengli@...cent.com>
Subject: [PATCH 02/10] sched/fair: Add rate-limiting and validation helpers
From: Wanpeng Li <wanpengli@...cent.com>
Implement the core safety mechanisms for yield deboost operations.

Add yield_deboost_rate_limit() to gate high-frequency yields and
prevent excessive overhead on compute-intensive workloads. It uses a
6ms threshold with lockless READ_ONCE/WRITE_ONCE to minimize cache
line contention while still providing effective rate limiting.

Add yield_deboost_validate_tasks() for comprehensive validation: the
feature is enabled via sysctl, both tasks are valid and distinct, both
belong to fair_sched_class, their entities are on the same runqueue,
and both tasks are runnable.

The rate limiter prevents pathological high-frequency cases, while the
validation ensures that only appropriate task pairs proceed. Both
functions are static and will be wired up in subsequent patches.
Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
---
kernel/sched/fair.c | 68 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5b7fcc86ccff..a7dc21c2dbdb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8990,6 +8990,74 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev, struct t
}
}
+/*
+ * High-frequency yield gating to reduce overhead on compute-intensive workloads.
+ * Returns true if the yield should be skipped due to frequency limits.
+ *
+ * Optimized: single threshold with READ_ONCE/WRITE_ONCE, refresh timestamp on every call.
+ */
+static bool yield_deboost_rate_limit(struct rq *rq, u64 now_ns)
+{
+ u64 last = READ_ONCE(rq->yield_deboost_last_time_ns);
+ bool limited = false;
+
+ if (last) {
+ u64 delta = now_ns - last;
+ limited = (delta <= 6000ULL * NSEC_PER_USEC);
+ }
+
+ WRITE_ONCE(rq->yield_deboost_last_time_ns, now_ns);
+ return limited;
+}
+
+/*
+ * Validate tasks and basic parameters for yield deboost operation.
+ * Performs comprehensive safety checks: feature enablement, NULL
+ * pointer validation, task state verification, and the same-rq
+ * requirement. Returns false if any validation fails, ensuring only
+ * safe and meaningful yield operations proceed.
+ */
+static bool __maybe_unused yield_deboost_validate_tasks(struct rq *rq, struct task_struct *p_target,
+ struct task_struct **p_yielding_out,
+ struct sched_entity **se_y_out,
+ struct sched_entity **se_t_out)
+{
+ struct task_struct *p_yielding;
+ struct sched_entity *se_y, *se_t;
+ u64 now_ns;
+
+ if (!sysctl_sched_vcpu_debooster_enabled)
+ return false;
+
+ if (!rq || !p_target)
+ return false;
+
+ now_ns = rq->clock;
+
+ if (yield_deboost_rate_limit(rq, now_ns))
+ return false;
+
+ p_yielding = rq->curr;
+ if (!p_yielding || p_yielding == p_target ||
+ p_target->sched_class != &fair_sched_class ||
+ p_yielding->sched_class != &fair_sched_class)
+ return false;
+
+ se_y = &p_yielding->se;
+ se_t = &p_target->se;
+
+ if (!se_t->on_rq || !se_y->on_rq)
+ return false;
+
+ if (task_rq(p_yielding) != rq || task_rq(p_target) != rq)
+ return false;
+
+ *p_yielding_out = p_yielding;
+ *se_y_out = se_y;
+ *se_t_out = se_t;
+ return true;
+}
+
/*
* sched_yield() is very simple
*/
--
2.43.0