Message-ID: <CAEQmJ=gYe=d53HHC1xW_epmPmmddA4J28SHybwGmQzUZgxZovg@mail.gmail.com>
Date:   Fri, 9 Jun 2023 14:28:19 +0800
From:   Yuanhan Zhang <zyhtheonly@...il.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     jiangshanlai@...il.com, linux-kernel@...r.kernel.org,
        pmladek@...e.com, zyhtheonly@...h.net, zwp10758@...il.com,
        tiozhang@...iglobal.com, fuyuanli@...iglobal.com
Subject: Re: [PATCH] workqueue: introduce queue_work_cpumask to queue work
 onto a given cpumask

// I'm resending this to put it into the same thread, sorry for the confusion.

> Can you elaborate the intended use cases?

Hi Tejun,

Thanks for your reply! Please let me use myself as an example to explain this.

In my scenario, I have 7 CPUs on my machine (it is actually UMA, so
queue_work_node() or using UNBOUND workqueues does not work for me), and for
some unlucky reasons there are always IRQs running on CPU 0 and CPU 6. Since
I'm using arm64 with IRQs turned into FIFO threads, those threaded IRQs
always run on CPUs 0 and 6 too (because of their affinity). And this cannot
be fixed easily in the short term :(

So in order to help async init achieve better boot times on my devices, I'd
like to prevent work items from running on CPUs 0 and 6. With
queue_work_cpumask(), this is simply done by:

...
cpumask_clear_cpu(0, cpumask);  // actually I use sysfs to parse my cpumask
cpumask_clear_cpu(6, cpumask);
queue_work_cpumask(cpumask, my_wq, &my_work->work);
...
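
For completeness, a fuller (hypothetical) version might look like the
following; queue_on_allowed_cpus() is just an illustrative name, and
alloc_cpumask_var()/cpumask_copy()/cpu_online_mask are the usual kernel
cpumask helpers:

static int queue_on_allowed_cpus(struct workqueue_struct *my_wq,
				 struct work_struct *work)
{
	cpumask_var_t cpumask;

	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
		return -ENOMEM;

	/* Start from the online CPUs, then drop the IRQ-heavy ones. */
	cpumask_copy(cpumask, cpu_online_mask);
	cpumask_clear_cpu(0, cpumask);
	cpumask_clear_cpu(6, cpumask);

	/* The target CPU is picked at queue time, so freeing here is fine. */
	queue_work_cpumask(cpumask, my_wq, work);

	free_cpumask_var(cpumask);
	return 0;
}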


> The code seems duplicated too. Could you do a little refactoring and make
> them (queue_work_cpumask() & queue_work_node()) share some code?

Hi Lai,

Thanks for your advice!

I did the refactoring in PATCH v2; there are two changes:
1. Removed the WARN_ONCE in the previous code:
  1). queue_work_node() works well on UNBOUND workqueues, since we have
      unbound_pwq_by_node() in __queue_work() to choose the right node.
  2). queue_work_cpumask() does not work on UNBOUND workqueues, since the
      numa_pwq_tbl list is designed to be per NUMA node. I comment on this
      in the patch.
2. Removed the previous workqueue_select_cpu_near() and let queue_work_node()
   use queue_work_on() and queue_work_cpumask() instead; a rough sketch of
   what I mean is below.
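
If it helps review, change 2 amounts to roughly the following (my own sketch
based on the description above, not the actual v2 code; cpumask_of_node() is
the existing node-to-cpumask helper):

bool queue_work_node(int node, struct workqueue_struct *wq,
		     struct work_struct *work)
{
	/* No node preference: fall back to the plain unbound path. */
	if (node == NUMA_NO_NODE)
		return queue_work_on(WORK_CPU_UNBOUND, wq, work);

	/* Otherwise restrict the work to the CPUs of the given node. */
	return queue_work_cpumask(cpumask_of_node(node), wq, work);
}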

I tested this patch with 100,000 calls each to queue_work_cpumask() and
queue_work_node(), with randomly generated cpumask & node inputs, and it
works as expected on my machines (80-core x86_64, 7-core arm64, and
16-core arm64).
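
Roughly, such a stress test can be written like this (a simplified sketch,
not the exact test code; get_random_u32() supplies the randomness):

static void stress_queue_work_cpumask(struct workqueue_struct *wq,
				      struct work_struct *works, int nr)
{
	cpumask_var_t mask;
	int i;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return;

	/* works[] is an array of nr already-initialized work items. */
	for (i = 0; i < nr; i++) {
		/* Build a random non-empty subset of the online CPUs. */
		do {
			int cpu;

			cpumask_clear(mask);
			for_each_online_cpu(cpu)
				if (get_random_u32() & 1)
					cpumask_set_cpu(cpu, mask);
		} while (cpumask_empty(mask));

		queue_work_cpumask(mask, wq, &works[i]);
	}

	free_cpumask_var(mask);
}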

Please help review, thanks a lot!

Thanks,
Tio Zhang

Tejun Heo <tj@...nel.org> wrote on Fri, Jun 9, 2023 at 06:52:
>
> On Tue, Jun 06, 2023 at 05:31:35PM +0800, Tio Zhang wrote:
> > Introduce queue_work_cpumask to queue work onto a "random" CPU within a
> > given cpumask. It is helpful when devices/modules want to assign work to
> > different cpusets but do not want to maintain extra workqueues, since
> > previously they had to allocate separate workqueues and set a different
> > workqueue_attrs->cpumask for each.
> >
> > For now it is only available for unbound workqueues; we will extend it
> > in further patches.
> > It defaults to the first CPU in the intersection of the given cpumask
> > and the online cpumask. The only exception is if the local CPU is in the
> > cpumask, in which case we just use the current CPU.
> >
> > The implementation and comments are referenced from
> > commit 8204e0c1113d ("workqueue: Provide queue_work_node to queue work
> > near a given NUMA node")
>
> Can you elaborate the intended use cases?
>
> Thanks.
>
> --
> tejun
