Date:   Fri, 21 Apr 2023 06:19:03 +0000
From:   Sunil Kovvuri Goutham <sgoutham@...vell.com>
To:     Tejun Heo <tj@...nel.org>,
        "jiangshanlai@...il.com" <jiangshanlai@...il.com>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "kernel-team@...a.com" <kernel-team@...a.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [EXT] [PATCH 06/22] net: thunderx: Use alloc_ordered_workqueue()
 to create ordered workqueues



> -----Original Message-----
> From: Tejun Heo <htejun@...il.com> On Behalf Of Tejun Heo
> Sent: Friday, April 21, 2023 8:21 AM
> To: jiangshanlai@...il.com
> Cc: linux-kernel@...r.kernel.org; kernel-team@...a.com; Tejun Heo
> <tj@...nel.org>; Sunil Kovvuri Goutham <sgoutham@...vell.com>; David S.
> Miller <davem@...emloft.net>; Eric Dumazet <edumazet@...gle.com>; Jakub
> Kicinski <kuba@...nel.org>; Paolo Abeni <pabeni@...hat.com>; linux-arm-
> kernel@...ts.infradead.org; netdev@...r.kernel.org
> Subject: [EXT] [PATCH 06/22] net: thunderx: Use alloc_ordered_workqueue() to
> create ordered workqueues
> 
> BACKGROUND
> ==========
> 
> When multiple work items are queued to a workqueue, their execution order
> doesn't match the queueing order. They may be executed in any order, and
> concurrently. When fully serialized execution - one by one in the queueing
> order - is needed, an ordered workqueue, which can be created with
> alloc_ordered_workqueue(), should be used.
> 
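
For readers less familiar with the workqueue API, here is a minimal
illustrative sketch of creating and using an ordered workqueue; the names
my_wq, my_work_a, my_work_b and my_handler are hypothetical and not taken
from this patch:

  #include <linux/workqueue.h>

  static void my_handler(struct work_struct *work)
  {
  	/* On an ordered workqueue, items run one at a time, in queueing order. */
  }

  static DECLARE_WORK(my_work_a, my_handler);
  static DECLARE_WORK(my_work_b, my_handler);

  static int my_init(void)
  {
  	struct workqueue_struct *my_wq;

  	my_wq = alloc_ordered_workqueue("my_wq", WQ_MEM_RECLAIM);
  	if (!my_wq)
  		return -ENOMEM;

  	/* my_work_a is guaranteed to finish before my_work_b starts. */
  	queue_work(my_wq, &my_work_a);
  	queue_work(my_wq, &my_work_b);

  	flush_workqueue(my_wq);		/* wait for both items to complete */
  	destroy_workqueue(my_wq);
  	return 0;
  }
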
> However, alloc_ordered_workqueue() was a later addition. Before it, an
> ordered workqueue could be obtained by creating an UNBOUND workqueue with
> @max_active==1. This originally was an implementation side-effect which was
> broken by 4c16bd327c74 ("workqueue: implement NUMA affinity for unbound
> workqueues"). Because there were users that depended on the ordered
> execution, 5c0338c68706 ("workqueue: restore WQ_UNBOUND/max_active==1 to be
> ordered") made the workqueue allocation path implicitly promote UNBOUND
> workqueues w/ @max_active==1 to ordered workqueues.
> 
> While this has worked okay, overloading the UNBOUND allocation interface
> this way creates other issues. It's difficult to tell whether a given
> workqueue actually needs to be ordered, and users that legitimately want a
> minimum concurrency level wq unexpectedly get an ordered one instead. With
> planned UNBOUND workqueue updates to improve execution locality and the
> growing prevalence of chiplet designs which can benefit from such
> improvements, this isn't a state we want to be in forever.
> 
> This patch series audits all callsites that create an UNBOUND workqueue w/
> @max_active==1 and converts them to alloc_ordered_workqueue() as
> necessary.
> 
> WHAT TO LOOK FOR
> ================
> 
> The conversions are from
> 
>   alloc_workqueue(WQ_UNBOUND | flags, 1, args..)
> 
> to
> 
>   alloc_ordered_workqueue(flags, args...)
> 
> which don't cause any functional changes. If you know that fully ordered
> execution is not necessary, please let me know. I'll drop the conversion and
> instead add a comment noting the fact to reduce confusion while the
> conversion is in progress.
> 
> If you aren't fully sure, it's completely fine to let the conversion through. The
> behavior will stay exactly the same and we can always reconsider later.
> 
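
To make the pattern concrete, a typical conversion in this series looks
roughly like the following (the "foo_wq" name is a generic placeholder, not a
specific callsite):

  -	wq = alloc_workqueue("foo_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
  +	wq = alloc_ordered_workqueue("foo_wq", WQ_MEM_RECLAIM);

The WQ_UNBOUND flag and the @max_active argument of 1 are supplied internally
by alloc_ordered_workqueue(), and any remaining flags such as WQ_MEM_RECLAIM
are passed through unchanged.
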
> As there are follow-up workqueue core changes, I'd really appreciate it if
> the patch could be routed through the workqueue tree w/ your acks. Thanks.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Cc: Sunil Goutham <sgoutham@...vell.com>
> Cc: "David S. Miller" <davem@...emloft.net>
> Cc: Eric Dumazet <edumazet@...gle.com>
> Cc: Jakub Kicinski <kuba@...nel.org>
> Cc: Paolo Abeni <pabeni@...hat.com>
> Cc: linux-arm-kernel@...ts.infradead.org
> Cc: netdev@...r.kernel.org
> ---
>  drivers/net/ethernet/cavium/thunder/thunder_bgx.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
> index 7eb2ddbe9bad..a317feb8decb 100644
> --- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
> +++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
> @@ -1126,8 +1126,7 @@ static int bgx_lmac_enable(struct bgx *bgx, u8 lmacid)
>  	}
> 
>  poll:
> -	lmac->check_link = alloc_workqueue("check_link", WQ_UNBOUND |
> -					   WQ_MEM_RECLAIM, 1);
> +	lmac->check_link = alloc_ordered_workqueue("check_link", WQ_MEM_RECLAIM);
>  	if (!lmac->check_link)
>  		return -ENOMEM;
>  	INIT_DELAYED_WORK(&lmac->dwork, bgx_poll_for_link);
> --
> 2.40.0

Reviewed-by: Sunil Goutham <sgoutham@...vell.com>
