Date: Sat, 1 Jun 2024 22:41:49 -0400
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Jung-JaeJoon <rgbi3307@...il.com>
Cc: maple-tree@...ts.infradead.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] maple_tree: add mas_node_count() before going to
 slow_path in mas_wr_modify()

* Jung-JaeJoon <rgbi3307@...il.com> [240531 22:55]:
> From: Jung-JaeJoon <rgbi3307@...il.com>
> 
> If there are not enough nodes, mas_node_count() sets an error state via mas_set_err()
> and returns control flow to the beginning.
> 
> On the return path, mas_nomem() sees the error status, allocates new nodes,
> and execution resumes.
> 
> In particular, if this happens in mas_split() on the slow_path taken from
> mas_wr_modify(), unnecessary work is repeated, slowing things down as in the
> flow below:
> 
> _begin:
> mas_wr_modify() --> if (new_end >= mt_slots[wr_mas->type]) --> goto slow_path
> slow_path:
>     --> mas_wr_bnode() --> mas_store_b_node() --> mas_commit_b_node() --> mas_split()
>     --> mas_node_count() fails --> return to _begin
> 
> But in the above flow, if mas_node_count() is executed before entering slow_path,
> the missing nodes are detected and allocated without repeatedly entering slow_path,
> which improves execution efficiency.

Thank you for your patch, but I have to NACK this change.

You are trying to optimise the work done when we are out of memory,
which is a very rare state.  How did you check this works?

If we run out of memory, the code will kick back to mas_nomem() and
may start running in reclaim to free enough memory for the allocations.
There is nothing we can do to make a meaningful change in the speed of
execution at this point. IOW, the duplicate work is the least of our
problems.
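
For context, the retry that the patch is trying to short-circuit looks
roughly like this from a top-level store path (a simplified sketch
modelled on mas_store_gfp(); not verbatim upstream source):

int mas_store_gfp(struct ma_state *mas, void *entry, gfp_t gfp)
{
	MA_WR_STATE(wr_mas, mas, entry);

retry:
	/* May fail inside, e.g. in mas_split(), via mas_set_err(mas, -ENOMEM) */
	mas_wr_store_entry(&wr_mas);

	/*
	 * On -ENOMEM, mas_nomem() allocates the requested nodes (possibly
	 * entering reclaim) and the whole store is simply retried.
	 */
	if (unlikely(mas_nomem(mas, gfp)))
		goto retry;

	if (unlikely(mas_is_err(mas)))
		return xa_err(mas->node);

	return 0;
}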

This change also separates the allocation from the reason we are
allocating, and that reason isn't apparent at the new call site.  We
could add a comment explaining why we are doing this, but the difference
in execution speed in a low-memory, probably reclaim-retry, situation is
not worth this complication.

There is also a feature being discussed on the mailing list, called
"store type", that changes how this works so that preallocations avoid
duplicate work; it is actively being worked on (as noted in the email to
the list) [1].  The key difference is that the store type feature will
let us avoid unnecessary work that currently happens on every
preallocation, not just in the rare out-of-memory case.

[1] http://lists.infradead.org/pipermail/maple-tree/2023-December/003098.html
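
For reference, the caller-side preallocation pattern that work targets
looks roughly like this (a sketch of the mas_preallocate() /
mas_store_prealloc() usage; exact signatures vary between kernel
versions, and tree/index/last/entry stand in for the caller's values):

/* Illustrative only; locking and error cleanup omitted. */
MA_STATE(mas, &tree, index, last);

/* Allocate everything the store could possibly need up front. */
if (mas_preallocate(&mas, entry, GFP_KERNEL))
	return -ENOMEM;

/* Cannot run out of nodes now, so the store never unwinds and retries. */
mas_store_prealloc(&mas, entry);

The store type work referenced above is about trimming the unnecessary
work these preallocations currently do on every store.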

Thanks,
Liam

> 
> Signed-off-by: JaeJoon Jung <rgbi3307@...il.com>
> ---
>  lib/maple_tree.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 2d7d27e6ae3c..8ffabd73619f 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4176,8 +4176,13 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
>  	 * path.
>  	 */
>  	new_end = mas_wr_new_end(wr_mas);
> -	if (new_end >= mt_slots[wr_mas->type])
> +	if (new_end >= mt_slots[wr_mas->type]) {
> +		mas->depth = mas_mt_height(mas);
> +		mas_node_count(mas, 1 + mas->depth * 2);
> +		if (mas_is_err(mas))
> +			return;
>  		goto slow_path;
> +	}
>  
>  	/* Attempt to append */
>  	if (mas_wr_append(wr_mas, new_end))
> -- 
> 2.17.1
> 
