Message-ID: <YvwJs66gR71UAHi8@slm.duckdns.org>
Date:   Tue, 16 Aug 2022 11:18:43 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Lai Jiangshan <jiangshanlai@...il.com>
Cc:     linux-kernel@...r.kernel.org,
        Lai Jiangshan <jiangshan.ljs@...group.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Petr Mladek <pmladek@...e.com>, Michal Hocko <mhocko@...e.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Wedson Almeida Filho <wedsonaf@...gle.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Waiman Long <longman@...hat.com>
Subject: Re: [RFC PATCH 1/8] workqueue: Unconditionally set cpumask in
 worker_attach_to_pool()

cc'ing Waiman.

On Thu, Aug 04, 2022 at 04:41:28PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@...group.com>
> 
> If a worker is spuriously woken up after kthread_bind_mask() but before
> worker_attach_to_pool(), and CPU hot-[un]plug happens during that
> interval, the worker task might be pushed away from its bound CPU and
> have its affinity changed by the scheduler, and worker_attach_to_pool()
> does not rebind it properly.
> 
> Bind the affinity unconditionally in worker_attach_to_pool() to fix
> the problem.
> 
> This also prepares for moving worker_attach_to_pool() from create_worker()
> to the start of worker_thread(), which will create such an interval even
> without a spurious wakeup.

So, this looks fine but I think the whole thing can be simplified if we
integrate this with the persistent user cpumask change that Waiman is
working on. We can just set the cpumask once during init and let the
scheduler core figure out what the current effective mask is as CPU
availability changes.

 http://lkml.kernel.org/r/20220816192734.67115-4-longman@redhat.com
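
(A hypothetical sketch of the suggested simplification, assuming the
linked series makes the requested cpumask persistent across hotplug; it
does not show that series' actual API:)

	/*
	 * Hypothetical sketch only: assuming the scheduler records the
	 * requested cpumask and reapplies it as CPUs go offline/online,
	 * attaching a worker would only need a single call ...
	 */
	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);

	/*
	 * ... and the affinity-restoring work currently done from the
	 * workqueue CPU hotplug callbacks (rebind_workers() and friends)
	 * could be dropped, since the scheduler core would recompute the
	 * effective mask itself as CPU availability changes.
	 */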

Thanks.

-- 
tejun
