Open Source and information security mailing list archives
Date: Tue, 16 Mar 2021 19:59:29 +0800
From: Wang Qing <wangqing@...o.com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
	Mel Gorman <mgorman@...e.de>,
	Daniel Bristot de Oliveira <bristot@...hat.com>,
	linux-kernel@...r.kernel.org
Cc: Wang Qing <wangqing@...o.com>
Subject: [PATCH] sched: rename __prepare_to_swait() to add_swait_queue_locked()

This function just puts the wait entry into the queue; it does not
perform the full operation of prepare_to_wait() in wait.c. The caller
must hold the queue lock across the call for protection.

Signed-off-by: Wang Qing <wangqing@...o.com>
---
 kernel/sched/completion.c | 2 +-
 kernel/sched/sched.h      | 2 +-
 kernel/sched/swait.c      | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
index a778554..3d28a5a
--- a/kernel/sched/completion.c
+++ b/kernel/sched/completion.c
@@ -79,7 +79,7 @@ do_wait_for_common(struct completion *x,
 			timeout = -ERESTARTSYS;
 			break;
 		}
-		__prepare_to_swait(&x->wait, &wait);
+		add_swait_queue_locked(&x->wait, &wait);
 		__set_current_state(state);
 		raw_spin_unlock_irq(&x->wait.lock);
 		timeout = action(timeout);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 10a1522..0516f50
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2719,4 +2719,4 @@ static inline bool is_per_cpu_kthread(struct task_struct *p)
 #endif
 
 void swake_up_all_locked(struct swait_queue_head *q);
-void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
+void add_swait_queue_locked(struct swait_queue_head *q, struct swait_queue *wait);
diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
index 7a24925..f48a544
--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -82,7 +82,7 @@ void swake_up_all(struct swait_queue_head *q)
 }
 EXPORT_SYMBOL(swake_up_all);
 
-void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
+void add_swait_queue_locked(struct swait_queue_head *q, struct swait_queue *wait)
 {
 	wait->task = current;
 	if (list_empty(&wait->task_list))
@@ -94,7 +94,7 @@ void prepare_to_swait_exclusive(struct swait_queue_head *q, struct swait_queue *wait, int state)
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&q->lock, flags);
-	__prepare_to_swait(q, wait);
+	add_swait_queue_locked(q, wait);
 	set_current_state(state);
 	raw_spin_unlock_irqrestore(&q->lock, flags);
 }
@@ -114,7 +114,7 @@ long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait, int state)
 		list_del_init(&wait->task_list);
 		ret = -ERESTARTSYS;
 	} else {
-		__prepare_to_swait(q, wait);
+		add_swait_queue_locked(q, wait);
 		set_current_state(state);
 	}
 	raw_spin_unlock_irqrestore(&q->lock, flags);
-- 
2.7.4