Date:   Sat, 27 Jul 2019 11:16:08 +0100
From:   Chris Down <chris@...isdown.name>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Nathan Chancellor <natechancellor@...il.com>,
        Randy Dunlap <rdunlap@...radead.org>, broonie@...nel.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-next@...r.kernel.org, mhocko@...e.cz,
        mm-commits@...r.kernel.org, sfr@...b.auug.org.au
Subject: Re: mmotm 2019-07-24-21-39 uploaded (mm/memcontrol)

u64 division: truly the gift that keeps on giving. Thanks Andrew for following 
up on these.

Andrew Morton writes:
>Ah.
>
>It's rather unclear why that u64 cast is there anyway.  We're dealing
>with ulongs all over this code.  The below will suffice.

This place in particular uses u64 to make sure we don't overflow when left 
shifting, since the numbers can get pretty big (which is somewhat unavoidable, 
given the high precision needed when calculating the penalty jiffies). It's 
okay for the output after division to be an unsigned long; only the 
intermediate steps need the extra precision.
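
To make that concrete, here's a minimal userspace sketch of the fixed-point 
idea (made-up numbers, and taking MEMCG_DELAY_PRECISION_SHIFT to be 20 for 
illustration; this is not the kernel code itself):

#include <stdint.h>
#include <stdio.h>

#define PRECISION_SHIFT 20	/* stands in for MEMCG_DELAY_PRECISION_SHIFT */

int main(void)
{
	uint64_t usage = 1050, high = 1000;	/* pages; made-up numbers */

	/*
	 * Plain integer division would give 50 / 1000 == 0 and lose the
	 * 5% overage entirely; scaling up first keeps the fraction.
	 */
	uint64_t overage = ((usage - high) << PRECISION_SHIFT) / high;

	/* overage is now ~0.05 in 20-bit fixed point (52428). */
	printf("overage = %llu (~%.3f)\n", (unsigned long long)overage,
	       (double)overage / (1 << PRECISION_SHIFT));
	return 0;
}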

>Chris, please take a look?
>
>--- a/mm/memcontrol.c~mm-throttle-allocators-when-failing-reclaim-over-memoryhigh-fix-fix-fix
>+++ a/mm/memcontrol.c
>@@ -2415,7 +2415,7 @@ void mem_cgroup_handle_over_high(void)
> 	clamped_high = max(high, 1UL);
>
> 	overage = (u64)(usage - high) << MEMCG_DELAY_PRECISION_SHIFT;
>-	do_div(overage, clamped_high);
>+	overage /= clamped_high;

I don't think this will work: left shifting by MEMCG_DELAY_PRECISION_SHIFT 
can make the number bigger than ULONG_MAX on 32-bit, which may cause 
wraparound, so we need to retain the u64 until we divide.
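
Concretely, with uint32_t standing in for a 32-bit unsigned long (again a 
userspace sketch with made-up values, not the kernel code):

#include <stdint.h>
#include <stdio.h>

#define PRECISION_SHIFT 20	/* stands in for MEMCG_DELAY_PRECISION_SHIFT */

int main(void)
{
	uint32_t delta = 1 << 13;	/* usage - high = 8192 pages, made up */

	uint32_t narrow = delta << PRECISION_SHIFT;		/* 2^33 wraps to 0 */
	uint64_t wide = (uint64_t)delta << PRECISION_SHIFT;	/* 2^33 survives */

	printf("without the u64 cast: %u\n", narrow);
	printf("with the u64 cast:    %llu\n", (unsigned long long)wide);
	return 0;
}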

Maybe div_u64 will satisfy both ARM and i386? i.e.:

diff --git mm/memcontrol.c mm/memcontrol.c
index 5c7b9facb0eb..e12a47e96154 100644
--- mm/memcontrol.c
+++ mm/memcontrol.c
@@ -2419,8 +2419,8 @@ void mem_cgroup_handle_over_high(void)
         */
        clamped_high = max(high, 1UL);
 
-       overage = (u64)(usage - high) << MEMCG_DELAY_PRECISION_SHIFT;
-       do_div(overage, clamped_high);
+       overage = div_u64((u64)(usage - high) << MEMCG_DELAY_PRECISION_SHIFT,
+                         clamped_high);
 
        penalty_jiffies = ((u64)overage * overage * HZ)
                >> (MEMCG_DELAY_PRECISION_SHIFT + MEMCG_DELAY_SCALING_SHIFT);
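
For what it's worth, a rough userspace model of what div_u64 buys us (the 
real definitions live in include/linux/math64.h and asm-generic/div64.h; 
sketch_div_u64 is a made-up name):

#include <stdint.h>

static inline uint64_t sketch_div_u64(uint64_t dividend, uint32_t divisor)
{
	/*
	 * Plain '/' is fine in userspace, which links libgcc and so has
	 * the 64-bit division helper that a 32-bit kernel doesn't provide.
	 */
	return dividend / divisor;
}

div_u64() returns the quotient and leaves its arguments alone, whereas 
do_div() is a macro that divides its first argument in place and evaluates 
to the remainder, which is why it's awkward inside a larger expression.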
