lists.openwall.net - Open Source and information security mailing list archives
Date: Sun, 14 Feb 2021 00:06:11 +0000
From: Yiwei Zhang <zzyiwei@...roid.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
	Felix Kuehling <Felix.Kuehling@....com>,
	Jens Axboe <axboe@...nel.dk>,
	Petr Mladek <pmladek@...e.com>,
	"J. Bruce Fields" <bfields@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Frederic Weisbecker <frederic@...nel.org>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Ilias Stamatis <stamatis.iliass@...il.com>,
	Rob Clark <robdclark@...omium.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Liang Chen <cl@...k-chips.com>
Cc: linux-kernel@...r.kernel.org,
	kernel-team@...roid.com,
	Yiwei Zhang <zzyiwei@...roid.com>
Subject: [PATCH] kthread: add kthread_mod_pending_delayed_work api

The existing kthread_mod_delayed_work api will queue a new work if it
fails to cancel the current work because the work is no longer pending.
However, there are cases where the same work can be enqueued from both
an async request and a delayed work, and a race can occur if the async
request comes in right after the timed-out delayed work has been
scheduled, because the cleanup work may not be safe to run twice.
Signed-off-by: Yiwei Zhang <zzyiwei@...roid.com>
---
 include/linux/kthread.h |  3 +++
 kernel/kthread.c        | 48 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 65b81e0c494d..250cdc5ff2a5 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -192,6 +192,9 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
 bool kthread_mod_delayed_work(struct kthread_worker *worker,
 			      struct kthread_delayed_work *dwork,
 			      unsigned long delay);
+bool kthread_mod_pending_delayed_work(struct kthread_worker *worker,
+				      struct kthread_delayed_work *dwork,
+				      unsigned long delay);
 
 void kthread_flush_work(struct kthread_work *work);
 void kthread_flush_worker(struct kthread_worker *worker);
diff --git a/kernel/kthread.c b/kernel/kthread.c
index a5eceecd4513..13881076afdd 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1142,6 +1142,54 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 }
 EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
 
+/**
+ * kthread_mod_pending_delayed_work - modify delay of a pending delayed work
+ * @worker: kthread worker to use
+ * @dwork: kthread delayed work to queue
+ * @delay: number of jiffies to wait before queuing
+ *
+ * If @dwork is still pending, modify @dwork's timer so that it expires after
+ * @delay. If @dwork is still pending and @delay is zero, @work is guaranteed
+ * to be queued immediately.
+ *
+ * Return: %true if @dwork was pending and its timer was modified,
+ * %false otherwise.
+ *
+ * A special case is when the work is being canceled in parallel.
+ * It might be caused either by the real kthread_cancel_delayed_work_sync()
+ * or yet another kthread_mod_delayed_work() call. We let the other command
+ * win and return %false here. The caller is supposed to synchronize these
+ * operations in a reasonable way.
+ *
+ * This function is safe to call from any context including IRQ handler.
+ * See __kthread_cancel_work() and kthread_delayed_work_timer_fn()
+ * for details.
+ */
+bool kthread_mod_pending_delayed_work(struct kthread_worker *worker,
+				      struct kthread_delayed_work *dwork,
+				      unsigned long delay)
+{
+	struct kthread_work *work = &dwork->work;
+	unsigned long flags;
+	int ret = true;
+
+	raw_spin_lock_irqsave(&worker->lock, flags);
+	if (!work->worker || work->canceling ||
+	    !__kthread_cancel_work(work, true, &flags)) {
+		ret = false;
+		goto out;
+	}
+
+	/* Work must not be used with >1 worker, see kthread_queue_work() */
+	WARN_ON_ONCE(work->worker != worker);
+
+	__kthread_queue_delayed_work(worker, dwork, delay);
+out:
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kthread_mod_pending_delayed_work);
+
 static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
 {
 	struct kthread_worker *worker = work->worker;
-- 
2.30.0.478.g8a0d178c01-goog