Message-ID: <0594105e-0bb2-07e9-56f1-ed38d9a7b951@163.com>
Date:   Fri, 26 Feb 2021 23:14:49 +0800
From:   Canjiang Lu <craftsfish@....com>
To:     tj@...nel.org, jiangshanlai@...il.com, linux-kernel@...r.kernel.org
Subject: workqueue: remove unneeded smp_mb() from insert_work()

When a worker is going to sleep, the check of whether a new idle worker
should be kicked is protected by pool->lock. Since insert_work() is also
protected by pool->lock, the two paths are serialized. The original
lock-less design therefore no longer applies, and the smp_mb() call can
be removed from insert_work(). The related comments are removed as well.

Signed-off-by: Canjiang Lu <craftsfish@....com>
---
 kernel/workqueue.c | 20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9880b6c0e272..861f23a6f1ba 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -883,18 +883,6 @@ void wq_worker_sleeping(struct task_struct *task)
 
        worker->sleeping = 1;
        raw_spin_lock_irq(&pool->lock);
-
-       /*
-        * The counterpart of the following dec_and_test, implied mb,
-        * worklist not empty test sequence is in insert_work().
-        * Please read comment there.
-        *
-        * NOT_RUNNING is clear.  This means that we're bound to and
-        * running on the local cpu w/ rq lock held and preemption
-        * disabled, which in turn means that none else could be
-        * manipulating idle_list, so dereferencing idle_list without pool
-        * lock is safe.
-        */
        if (atomic_dec_and_test(&pool->nr_running) &&
            !list_empty(&pool->worklist)) {
                next = first_idle_worker(pool);
@@ -1334,14 +1322,6 @@ static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
        set_work_pwq(work, pwq, extra_flags);
        list_add_tail(&work->entry, head);
        get_pwq(pwq);
-
-       /*
-        * Ensure either wq_worker_sleeping() sees the above
-        * list_add_tail() or we see zero nr_running to avoid workers lying
-        * around lazily while there are works to be processed.
-        */
-       smp_mb();
-
        if (__need_more_worker(pool))
                wake_up_worker(pool);
 }
-- 
2.17.1
