Message-Id: <20200417194145.36350-3-lyude@redhat.com>
Date: Fri, 17 Apr 2020 15:40:49 -0400
From: Lyude Paul <lyude@...hat.com>
To: dri-devel@...ts.freedesktop.org
Cc: Daniel Vetter <daniel@...ll.ch>, Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Petr Mladek <pmladek@...e.com>,
Suren Baghdasaryan <surenb@...gle.com>,
"Steven Rostedt (VMware)" <rostedt@...dmis.org>,
Ben Dooks <ben.dooks@...ethink.co.uk>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Liang Chen <cl@...k-chips.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: [RFC v3 02/11] kthread: Introduce __kthread_queue_work()
While kthread_queue_work() is fine for basic kthread_worker use cases, it's
a little limiting if you want to create your own delayed work
implementations that delay on something other than a clock. Looking at
kthread_delayed_work for instance, all of the code shares the lock in
kthread_worker so that both the timer and the actual kthread_worker can be
inspected and modified together atomically.
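For reference, the existing timer-based path boils down to roughly the
following (a simplified paraphrase of kthread_delayed_work_timer_fn() in
kernel/kthread.c with the sanity checks elided, not verbatim):

  /* Simplified paraphrase; the real callback also validates work->worker. */
  void kthread_delayed_work_timer_fn(struct timer_list *t)
  {
          struct kthread_delayed_work *dwork = from_timer(dwork, t, timer);
          struct kthread_work *work = &dwork->work;
          struct kthread_worker *worker = work->worker;
          unsigned long flags;

          raw_spin_lock_irqsave(&worker->lock, flags);
          /* Move the work off worker->delayed_work_list onto the run list. */
          list_del_init(&work->node);
          kthread_insert_work(worker, work, &worker->work_list);
          raw_spin_unlock_irqrestore(&worker->lock, flags);
  }
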
In DRM, we want to implement a type of delayed kthread_work that delays
until a specific vblank sequence has passed rather than until a timer
expires, which we refer to as a drm_vblank_work. Among the requirements for
this are the ability to reschedule and flush drm_vblank_works, both of
which become a lot harder to do properly if we can't re-queue work while
already holding the worker's lock. Additionally, being able to specify a
custom position in the kthread_worker's work_list (which also requires
holding the lock) is needed for implementing a custom work flushing
mechanism that waits for both the vblank sequence to pass and the work item
to complete once.
So - let's expose an unlocked version of kthread_queue_work() called
__kthread_queue_work(), which also allows specifying a custom list position
before which to insert the work.
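As an illustration of the intended usage, a caller-side sketch might look
like the below. Note that this is a made-up example: the helper name and
the way the insertion position is chosen are hypothetical, and the real
drm_vblank_work code comes in the later patches of this series:

  /*
   * Illustrative sketch only, not part of this patch. A custom delayed-work
   * implementation can evaluate its own readiness condition and (re-)queue
   * the work atomically, which plain kthread_queue_work() can't do since it
   * takes the worker lock itself.
   */
  static bool example_queue_at(struct kthread_worker *worker,
                               struct kthread_work *work,
                               struct list_head *pos)
  {
          unsigned long flags;
          bool queued;

          raw_spin_lock_irqsave(&worker->lock, flags);
          /* @pos lets a custom flush mechanism keep its place in work_list. */
          queued = __kthread_queue_work(worker, work, pos);
          raw_spin_unlock_irqrestore(&worker->lock, flags);

          return queued;
  }
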
Cc: Tejun Heo <tj@...nel.org>
Signed-off-by: Lyude Paul <lyude@...hat.com>
---
include/linux/kthread.h | 3 +++
kernel/kthread.c | 34 ++++++++++++++++++++++++++++++----
2 files changed, 33 insertions(+), 4 deletions(-)
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 8bbcaad7ef0f..02e0c1c157bf 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -179,6 +179,9 @@ __printf(3, 4) struct kthread_worker *
kthread_create_worker_on_cpu(int cpu, unsigned int flags,
                             const char namefmt[], ...);
+bool __kthread_queue_work(struct kthread_worker *worker,
+                          struct kthread_work *work,
+                          struct list_head *pos);
bool kthread_queue_work(struct kthread_worker *worker,
                        struct kthread_work *work);
diff --git a/kernel/kthread.c b/kernel/kthread.c
index bfbfa481be3a..46de56142593 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -816,6 +816,35 @@ static void kthread_insert_work(struct kthread_worker *worker,
        wake_up_process(worker->task);
}
+/**
+ * __kthread_queue_work - queue a kthread_work while under lock
+ * @worker: target kthread_worker
+ * @work: kthread_work to queue
+ * @pos: The position in @worker.work_list to insert @work before
+ *
+ * This is the same as kthread_queue_work(), except that it expects the
+ * caller to already hold &kthread_worker.lock, and it allows specifying a
+ * custom position in @worker.work_list before which to insert @work.
+ *
+ * This function is mostly useful for callers that need to create their own
+ * delayed kthread_work implementations.
+ *
+ * Returns: %true if @work was successfully queued, %false if it was already
+ * pending.
+ */
+bool __kthread_queue_work(struct kthread_worker *worker,
+                          struct kthread_work *work,
+                          struct list_head *pos)
+{
+        if (!queuing_blocked(worker, work)) {
+                kthread_insert_work(worker, work, pos);
+                return true;
+        }
+
+        return false;
+}
+EXPORT_SYMBOL_GPL(__kthread_queue_work);
+
/**
* kthread_queue_work - queue a kthread_work
* @worker: target kthread_worker
@@ -835,10 +864,7 @@ bool kthread_queue_work(struct kthread_worker *worker,
        unsigned long flags;
        raw_spin_lock_irqsave(&worker->lock, flags);
-       if (!queuing_blocked(worker, work)) {
-               kthread_insert_work(worker, work, &worker->work_list);
-               ret = true;
-       }
+       ret = __kthread_queue_work(worker, work, &worker->work_list);
        raw_spin_unlock_irqrestore(&worker->lock, flags);
        return ret;
}
--
2.25.1