Date:   Sat, 24 Jun 2017 21:26:14 +0800
From:   Wei Yang <richard.weiyang@...il.com>
To:     Rasmus Villemoes <linux@...musvillemoes.dk>
Cc:     Michal Hocko <mhocko@...e.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Hillf Danton <hillf.zj@...baba-inc.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
        Vinayak Menon <vinmenon@...eaurora.org>,
        Xishi Qiu <qiuxishi@...wei.com>,
        Hao Lee <haolee.swjtu@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/page_alloc.c: eliminate unsigned confusion in
 __rmqueue_fallback

On Wed, Jun 21, 2017 at 08:55:28PM +0200, Rasmus Villemoes wrote:
>Since current_order starts as MAX_ORDER-1 and is then only
>decremented, the second half of the loop condition seems
>superfluous. However, if order is 0, we may decrement current_order
>past 0, making it UINT_MAX. This is obviously too subtle ([1], [2]).
>
>Since we need to add some comment anyway, change the two variables to
>signed, making the counting-down for loop look more familiar, and
>apparently also making gcc generate slightly smaller code.
>
>[1] https://lkml.org/lkml/2016/6/20/493
>[2] https://lkml.org/lkml/2017/6/19/345
>
>Signed-off-by: Rasmus Villemoes <linux@...musvillemoes.dk>
>---
>Michal, something like this, perhaps?
>
>mm/page_alloc.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index 2302f250d6b1..e656f4da9772 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -2204,19 +2204,23 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  * list of requested migratetype, possibly along with other pages from the same
>  * block, depending on fragmentation avoidance heuristics. Returns true if
>  * fallback was found so that __rmqueue_smallest() can grab it.
>+ *
>+ * The use of signed ints for order and current_order is a deliberate
>+ * deviation from the rest of this file, to make the for loop
>+ * condition simpler.
>  */
> static inline bool
>-__rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>+__rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
> {
> 	struct free_area *area;
>-	unsigned int current_order;
>+	int current_order;
> 	struct page *page;
> 	int fallback_mt;
> 	bool can_steal;
> 
> 	/* Find the largest possible block of pages in the other list */
> 	for (current_order = MAX_ORDER-1;
>-				current_order >= order && current_order <= MAX_ORDER-1;
>+				current_order >= order;
> 				--current_order) {
> 		area = &(zone->free_area[current_order]);
> 		fallback_mt = find_suitable_fallback(area, current_order,
>-- 
>2.11.0
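
[For reference, a minimal userspace sketch of the wraparound the commit
message describes; this is not kernel code, and the MAX_ORDER value is
just an illustrative stand-in for the kernel's definition.]

#include <stdio.h>

#define MAX_ORDER 11	/* illustrative stand-in, not the kernel's value */

int main(void)
{
	unsigned int order = 0;
	unsigned int current_order;
	int rounds = 0;

	/*
	 * Old form: with order == 0, "current_order >= order" is always
	 * true for an unsigned counter, so decrementing past 0 wraps to
	 * UINT_MAX; only the "current_order <= MAX_ORDER-1" check stops
	 * the loop after the wraparound.
	 */
	for (current_order = MAX_ORDER - 1;
	     current_order >= order && current_order <= MAX_ORDER - 1;
	     --current_order)
		rounds++;
	printf("unsigned loop: %d rounds\n", rounds);		/* 11 */

	/*
	 * New form from the patch: a signed counter simply goes to -1,
	 * so the plain "current >= order" condition terminates the loop
	 * and no extra guard is needed.
	 */
	int current, signed_rounds = 0;
	for (current = MAX_ORDER - 1; current >= (int)order; --current)
		signed_rounds++;
	printf("signed loop: %d rounds\n", signed_rounds);	/* 11 */

	return 0;
}

[Both loops visit orders 10 down to 0; the difference is only in what
keeps the unsigned variant from spinning forever once current_order
wraps.]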

Looks nice. I wonder why I didn't come up with this change myself.

Acked-by: Wei Yang <weiyang@...il.com>

-- 
Wei Yang
Help you, Help me
