Message-ID: <284138d64b5010ebe5a6490402dc01d006677dd1.camel@gmx.de>
Date: Wed, 17 Mar 2021 06:12:41 +0100
From: Mike Galbraith <efault@....de>
To: Wang Qing <wangqing@...o.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: rename __prepare_to_swait() to add_swait_queue_locked()

On Tue, 2021-03-16 at 19:59 +0800, Wang Qing wrote:
> This function merely adds @wait to the queue; it does not perform the
> equivalent of prepare_to_wait() in wait.c. Moreover, the caller must
> hold the queue lock for the duration of the operation.
I see zero benefit to churn like this. You're taking a dinky little
file that's perfectly clear (and pretty), and restating the obvious.
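
The double underscore prefix already tells the reader exactly what the
proposed name would: caller holds the lock. From memory (so treat this as
a sketch of what mainline looks like, not gospel), the swait side is all
of five lines...

	/* Caller already holds q->lock; this only queues the entry. */
	void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
	{
		wait->task = current;
		if (list_empty(&wait->task_list))
			list_add_tail(&wait->task_list, &q->task_list);
	}

...while prepare_to_wait() in kernel/sched/wait.c takes the lock and sets
the task state itself:

	void prepare_to_wait(struct wait_queue_head *wq_head,
			     struct wait_queue_entry *wq_entry, int state)
	{
		unsigned long flags;

		wq_entry->flags &= ~WQ_FLAG_EXCLUSIVE;
		spin_lock_irqsave(&wq_head->lock, flags);
		if (list_empty(&wq_entry->entry))
			__add_wait_queue(wq_head, wq_entry);
		set_current_state(state);
		spin_unlock_irqrestore(&wq_head->lock, flags);
	}

Anyone reading either file can see which convention is in play.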
>
> Signed-off-by: Wang Qing <wangqing@...o.com>
> ---
>  kernel/sched/completion.c | 2 +-
>  kernel/sched/sched.h      | 2 +-
>  kernel/sched/swait.c      | 6 +++---
>  3 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
> index a778554..3d28a5a
> --- a/kernel/sched/completion.c
> +++ b/kernel/sched/completion.c
> @@ -79,7 +79,7 @@ do_wait_for_common(struct completion *x,
>  				timeout = -ERESTARTSYS;
>  				break;
>  			}
> -			__prepare_to_swait(&x->wait, &wait);
> +			add_swait_queue_locked(&x->wait, &wait);
>  			__set_current_state(state);
>  			raw_spin_unlock_irq(&x->wait.lock);
>  			timeout = action(timeout);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 10a1522..0516f50
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2719,4 +2719,4 @@ static inline bool is_per_cpu_kthread(struct task_struct *p)
>  #endif
> 
>  void swake_up_all_locked(struct swait_queue_head *q);
> -void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
> +void add_swait_queue_locked(struct swait_queue_head *q, struct swait_queue *wait);
> diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
> index 7a24925..f48a544
> --- a/kernel/sched/swait.c
> +++ b/kernel/sched/swait.c
> @@ -82,7 +82,7 @@ void swake_up_all(struct swait_queue_head *q)
>  }
>  EXPORT_SYMBOL(swake_up_all);
> 
> -void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
> +void add_swait_queue_locked(struct swait_queue_head *q, struct swait_queue *wait)
>  {
>  	wait->task = current;
>  	if (list_empty(&wait->task_list))
> @@ -94,7 +94,7 @@ void prepare_to_swait_exclusive(struct swait_queue_head *q, struct swait_queue *
>  	unsigned long flags;
> 
>  	raw_spin_lock_irqsave(&q->lock, flags);
> -	__prepare_to_swait(q, wait);
> +	add_swait_queue_locked(q, wait);
>  	set_current_state(state);
>  	raw_spin_unlock_irqrestore(&q->lock, flags);
>  }
> @@ -114,7 +114,7 @@ long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait
>  		list_del_init(&wait->task_list);
>  		ret = -ERESTARTSYS;
>  	} else {
> -		__prepare_to_swait(q, wait);
> +		add_swait_queue_locked(q, wait);
>  		set_current_state(state);
>  	}
>  	raw_spin_unlock_irqrestore(&q->lock, flags);