Message-ID: <HK2PR0302MB2594DAC4323A14A5652C153EB3EF0@HK2PR0302MB2594.apcprd03.prod.outlook.com>
Date:   Wed, 4 Nov 2020 11:06:48 +0000
From:   Adrian Huang12 <ahuang12@...ovo.com>
To:     Lai Jiangshan <jiangshanlai@...il.com>,
        Adrian Huang <adrianhuang0701@...il.com>
CC:     Tejun Heo <tj@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: RE: [External] Re: [PATCH 1/1] workqueue: Remove redundant assignment

Hi Jiangshan,

> -----Original Message-----
> From: Lai Jiangshan <jiangshanlai@...il.com>
> Sent: Tuesday, November 3, 2020 12:35 PM
> To: Adrian Huang <adrianhuang0701@...il.com>
> Cc: Tejun Heo <tj@...nel.org>; LKML <linux-kernel@...r.kernel.org>; Adrian
> Huang12 <ahuang12@...ovo.com>
> Subject: [External] Re: [PATCH 1/1] workqueue: Remove redundant assignment
> 
> Hello, Adrian
> 
> I believe the pool->node is being used as a node hint before workqueue_init() for
> allocating memory. It is useful when it is correct.

Thanks for the comments. I had the same concern before submitting this
patch. My understanding is that the worker_pool.node member provides a
node hint when allocating the 'worker' structure, or when allocating the
worker_pool structure itself (for unbound workqueues only).

-- Bound workqueue --
The worker structure is allocated by create_worker(), which is invoked
at the end of workqueue_init(). So removing the pool->node assignment
from workqueue_init_early() does not introduce a problem here.

-- Unbound workqueue --
The worker_pool structure is allocated in get_unbound_pool(), which
derives the node id from the global 'wq_numa_possible_cpumask' (provided
the global 'wq_numa_enabled' is true). This path does not consult the
per-CPU worker_pool.node (global 'cpu_worker_pools') at all.

Please correct me if I missed something. Thanks.
 
> 
> I think it is better to init it early unless there is a bug about it in this early stage
> reported (on same archs).

OK, please ignore this patch, since it was not created in response to a bug report.

> Thanks
> Lai.-
> 
> On Sun, Nov 1, 2020 at 8:21 PM Adrian Huang <adrianhuang0701@...il.com>
> wrote:
> >
> > From: Adrian Huang <ahuang12@...ovo.com>
> >
> > The member 'node' of worker_pool struct (per_cpu worker_pool) is
> > assigned in workqueue_init_early() and workqueue_init().
> > Commit 2186d9f940b6 ("workqueue: move wq_numa_init() to
> > workqueue_init()") fixes an issue by moving wq_numa_init() to
> > workqueue_init() in order to get the valid 'cpu to node' mapping. So,
> > remove the redundant assignment in workqueue_init_early().
> >
> > Signed-off-by: Adrian Huang <ahuang12@...ovo.com>
> > ---
> >  kernel/workqueue.c | 1 -
> >  1 file changed, 1 deletion(-)
> >
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 437935e7a199..cf8c0df2410e 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -5937,7 +5937,6 @@ void __init workqueue_init_early(void)
> >                         pool->cpu = cpu;
> >                         cpumask_copy(pool->attrs->cpumask, cpumask_of(cpu));
> >                         pool->attrs->nice = std_nice[i++];
> > -                       pool->node = cpu_to_node(cpu);
> >
> >                         /* alloc pool ID */
> >                         mutex_lock(&wq_pool_mutex);
> > --
> > 2.17.1
> >
