Message-ID: <f9b42bf8-e4c8-4028-a977-f324ba2f2275@amd.com>
Date: Fri, 31 Oct 2025 10:31:40 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Peter Zijlstra <peterz@...radead.org>, John Stultz <jstultz@...gle.com>,
	LKML <linux-kernel@...r.kernel.org>
CC: Juri Lelli <juri.lelli@...hat.com>, Valentin Schneider
	<valentin.schneider@....com>, Connor O'Brien <connoro@...gle.com>, "Joel
 Fernandes" <joelagnelf@...dia.com>, Qais Yousef <qyousef@...alina.io>, "Ingo
 Molnar" <mingo@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>, Valentin Schneider
	<vschneid@...hat.com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
	<bsegall@...gle.com>, Zimuzo Ezeozue <zezeozue@...gle.com>, Mel Gorman
	<mgorman@...e.de>, Will Deacon <will@...nel.org>, Waiman Long
	<longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>, "Paul E. McKenney"
	<paulmck@...nel.org>, Metin Kaya <Metin.Kaya@....com>, Xuewen Yan
	<xuewen.yan94@...il.com>, Thomas Gleixner <tglx@...utronix.de>, "Daniel
 Lezcano" <daniel.lezcano@...aro.org>, Suleiman Souhlal <suleiman@...gle.com>,
	kuyo chang <kuyo.chang@...iatek.com>, hupu <hupu.gm@...il.com>,
	<kernel-team@...roid.com>
Subject: Re: [PATCH v23 8/9] sched: Add blocked_donor link to task for smarter
 mutex handoffs

Hello Peter, John,

On 10/30/2025 5:48 AM, John Stultz wrote:
> @@ -958,7 +964,34 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
>  
>  	raw_spin_lock_irqsave(&lock->wait_lock, flags);
>  	debug_mutex_unlock(lock);
> -	if (!list_empty(&lock->wait_list)) {
> +
> +	if (sched_proxy_exec()) {
> +		raw_spin_lock(&current->blocked_lock);
> +		/*
> +		 * If we have a task boosting current, and that task was boosting
> +		 * current through this lock, hand the lock to that task, as that
> +		 * is the highest waiter, as selected by the scheduling function.
> +		 */
> +		donor = current->blocked_donor;
> +		if (donor) {

Any concerns about new waiters always ending up as the donor and, in turn,
starving the long-time waiters on the list?

> +			struct mutex *next_lock;
> +
> +			raw_spin_lock_nested(&donor->blocked_lock, SINGLE_DEPTH_NESTING);
> +			next_lock = __get_task_blocked_on(donor);
> +			if (next_lock == lock) {
> +				next = donor;
> +				__set_task_blocked_on_waking(donor, next_lock);
> +				wake_q_add(&wake_q, donor);
> +				current->blocked_donor = NULL;
> +			}
> +			raw_spin_unlock(&donor->blocked_lock);
> +		}
> +	}
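To make the concern above concrete, here is a small userspace sketch of the
selection policy only (not the locking itself). All names here (sim_mutex,
pick_next, rounds_head_starved) are hypothetical and mine, not from the patch;
the sketch just models "prefer the donor, else take the FIFO head" and shows
that if a fresh donor arrives before every unlock, the FIFO head is never
selected:

```c
/* Hypothetical userspace model of the donor-first handoff policy.
 * Only the selection logic is modeled; no real locking is performed. */
#include <assert.h>
#include <stddef.h>

#define MAX_WAITERS 16

struct waiter { int id; };

struct sim_mutex {
	struct waiter *wait_list[MAX_WAITERS];	/* FIFO wait queue */
	int head, tail;
	struct waiter *blocked_donor;		/* task currently boosting the owner */
};

static void enqueue(struct sim_mutex *m, struct waiter *w)
{
	m->wait_list[m->tail++ % MAX_WAITERS] = w;
}

/* Unlock-time selection: hand the lock to the donor if there is one,
 * otherwise fall back to the FIFO head of the wait list. */
static struct waiter *pick_next(struct sim_mutex *m)
{
	struct waiter *next = m->blocked_donor;

	m->blocked_donor = NULL;
	if (!next && m->head != m->tail)
		next = m->wait_list[m->head++ % MAX_WAITERS];
	return next;
}

/* Starvation scenario: a new donor shows up before every unlock, so the
 * FIFO head (id 0) is skipped on every one of @rounds handoffs. */
static int rounds_head_starved(int rounds)
{
	struct sim_mutex m = { 0 };
	struct waiter head_waiter = { .id = 0 };
	struct waiter donors[64];
	int i, starved = 0;

	enqueue(&m, &head_waiter);
	for (i = 0; i < rounds && i < 64; i++) {
		donors[i].id = i + 1;
		m.blocked_donor = &donors[i];	/* newest waiter boosts the owner */
		if (pick_next(&m)->id != 0)
			starved++;
	}
	return starved;
}
```

Whether this can happen in practice depends on whether blocked_donor can keep
being repopulated by later-arriving waiters before the head of the list ever
gets a turn; the sketch only illustrates the policy the question is about.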
-- 
Thanks and Regards,
Prateek

