Date:   Mon, 12 Jun 2023 18:09:36 +0100
From:   Valentin Schneider <vschneid@...hat.com>
To:     Wang You <wangyoua@...ontech.com>, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com
Cc:     linux-kernel@...r.kernel.org, Wang You <wangyoua@...ontech.com>
Subject: Re: [RFC PATCH] sched/wait: Determine whether the wait queue is
 empty before waking up

On 09/06/23 13:38, Wang You wrote:
> When we ran some benchmark tests (such as pipe tests), we found
> that the wake-up path was still entered when the wait queue was
> empty, even though it bails out later.
>
> This does some unnecessary work. Could we check at the start of
> the wake-up whether there are any entries in the queue that need
> to be woken? I think this check is likely good for performance
> and should be safe.
>
> Looking forward to your reply, thank you.
>
> Signed-off-by: Wang You <wangyoua@...ontech.com>
> ---
>  kernel/sched/wait.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
> index 133b74730738..17011780aa21 100644
> --- a/kernel/sched/wait.c
> +++ b/kernel/sched/wait.c
> @@ -205,6 +205,9 @@ void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode,
>       if (unlikely(!wq_head))
>               return;
>
> +	if (unlikely(!wq_has_sleeper(wq_head)))
> +		return;
> +


Hm, so I suppose that if we end up in wake_up*() then the update of the
wait condition has already been done (so this shouldn't lead to missed
wakeups), but that still means an extra smp_mb() before grabbing
wq_head->lock.
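
For reference, wq_has_sleeper() is roughly the following (paraphrased
from include/linux/wait.h, comments mine):

	static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
	{
		/*
		 * Full barrier so the waker's store to the wait condition
		 * is ordered against its read of the wait list; pairs with
		 * the barrier on the waiter side.
		 */
		smp_mb();

		/* Lockless check whether anyone is on the list. */
		return waitqueue_active(wq_head);
	}

So with the patch applied, every caller of __wake_up_sync_key() pays for
that full barrier up front, including the common case where the queue
does have sleepers and we then take wq_head->lock anyway.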

I'd suggest benchmarking the change: it is going to add unwanted
overhead when dealing with busy queues, and I'm not sure it saves much
vs grabbing wq_head->lock and only then noticing the list is empty.
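
For instance, assuming perf is available, something like the following
(run before and after the patch) would exercise the path your pipe test
hits:

	# pipe ping-pong between two tasks, stressing the wake-up path
	perf bench sched pipe -l 1000000

Numbers from a busier workload (e.g. hackbench) would be interesting
too, since that's where the extra smp_mb() is pure overhead.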


>       __wake_up_common_lock(wq_head, mode, 1, WF_SYNC, key);
>  }
>  EXPORT_SYMBOL_GPL(__wake_up_sync_key);
> --
> 2.20.1
