Message-ID: <CAEQmJ=geNmoOk37w=owwkpvL6-FgDfzaPhCTPNcKiFtL0pv4hg@mail.gmail.com>
Date:   Tue, 13 Jun 2023 18:25:44 +0800
From:   Yuanhan Zhang <zyhtheonly@...il.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     jiangshanlai@...il.com, linux-kernel@...r.kernel.org,
        pmladek@...e.com, zyhtheonly@...h.net, zwp10758@...il.com,
        tiozhang@...iglobal.com, fuyuanli@...iglobal.com
Subject: Re: [PATCH] workqueue: introduce queue_work_cpumask to queue work
 onto a given cpumask

Hi Tejun,

Tejun Heo <tj@...nel.org> wrote on Tue, Jun 13, 2023 at 01:54:
>
> Hello,
>
> On Fri, Jun 09, 2023 at 02:28:19PM +0800, Yuanhan Zhang wrote:
> > // I'm resending this to put it into the same thread, sorry for the confusion.
>
> This got resent quite a few times and I don't know which one to reply to.
> Just picking the one which seems like the latest.

Thanks for your patience.

>
> > > Can you elaborate the intended use cases?
> >
> > Thanks for your reply! Please let me use myself as an example to explain this.
> >
> > In my scenario, I have 7 CPUs on my machine (it is actually UMA, so
> > queue_work_node() or using WQ_UNBOUND does not work for me), and for
> > some unlucky reasons there are always IRQs running on CPU 0 and CPU 6.
> > Since I'm on arm64 with IRQs turned into FIFO threads, those threaded
> > IRQs always run on CPUs 0 and 6 as well (because of affinity), and this
> > will not be fixed easily in the short term :(
> >
> > So, to help async init achieve better boot times on my devices, I'd
> > like to prevent works from running on CPUs 0 and 6. With
> > queue_work_cpumask(), that is simply done by:
> >
> > ...
> > cpumask_clear_cpu(0, cpumask);  /* actually I use sysfs to parse my cpumask */
> > cpumask_clear_cpu(6, cpumask);
> > queue_work_cpumask(cpumask, my_wq, &my_work->work);
> > ...
>
> But this would require explicit code customization on every call site which
> doesn't seem ideal given that this is to work around something which is tied
> to the specific hardware.

Yes, I agree that using wq_unbound_cpumask would be a great idea and a
good substitute for the device-boot case. But wq_unbound_cpumask only
constrains WQ_UNBOUND workqueues, while I'm trying to make each of my
works configurable. Let me try to explain with another example:

Say I have several kinds of works and I'd like to run them on different
CPU sets (so putting them all on WQ_UNBOUND is not ideal).

This would be done like this:

queue_work_cpumask(cpumask_A /* or B, C, D; just maintain different cpumasks */,
		   system_wq, work);

After that, I don't have to customize my code anymore, since I can
control those cpumasks through procfs, sysfs, ioctl, or whatever I like.

So I believe letting each work choose its own cpumask would be quite
convenient at times :)
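
To make that concrete, here is a minimal sketch of a caller
(queue_work_cpumask() is the API proposed in this patch; the mask setup,
the CPU numbers, and the function names around it are only illustrative):

#include <linux/cpumask.h>
#include <linux/workqueue.h>

static cpumask_var_t my_mask;

static void my_work_fn(struct work_struct *work)
{
	/* runs on one of the CPUs left in my_mask */
}
static DECLARE_WORK(my_work, my_work_fn);

static int my_setup(void)
{
	if (!alloc_cpumask_var(&my_mask, GFP_KERNEL))
		return -ENOMEM;

	/* start from all online CPUs, then drop the IRQ-loaded ones */
	cpumask_copy(my_mask, cpu_online_mask);
	cpumask_clear_cpu(0, my_mask);
	cpumask_clear_cpu(6, my_mask);

	/* queue onto whichever CPUs remain in the mask */
	queue_work_cpumask(my_mask, system_wq, &my_work);
	return 0;
}

The mask itself could then be re-parsed from sysfs at runtime without
touching this call site again.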

>
> Wouldn't it be better to add a kernel parameter to further constrain
> wq_unbound_cpumask? Right now, on boot, it's only determined by isolcpus but
> it shouldn't be difficult to add a workqueue parameter to further constrain
> it.

Yes, thanks again, this would be a great solution for device boot. I
have followed your suggestion and submitted another patch, '[PATCH]
sched/isolation: add a workqueue parameter to constrain unbound CPUs'.
That patch simply gives "isolcpus=" a "workqueue" option, which makes
`housekeeping_cpumask(HK_TYPE_WQ)` copy a constrained workqueue cpumask
into wq_unbound_cpumask.
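
For reference, assuming the flag syntax in that follow-up patch (which
may of course change during review), booting with something like:

	isolcpus=workqueue,0,6

would drop CPUs 0 and 6 from wq_unbound_cpumask at boot, with no
per-call-site changes needed.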

Please help review, and thank you again for your time.

Thanks,
Tio Zhang

>
> Thanks.

>
> --
> tejun
