Date:   Wed, 15 Feb 2017 17:11:38 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     linux-mm@...ck.org, Johannes Weiner <hannes@...xchg.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        David Rientjes <rientjes@...gle.com>,
        linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH v2 00/10] try to reduce fragmenting fallbacks

On 02/15/2017 03:29 PM, Vlastimil Babka wrote:
> Results for patch 4 ("count movable pages when stealing from pageblock")
> are really puzzling me, as it increases the number of fragmenting events
> for reclaimable allocations, implying "reclaimable placed with (i.e.
> falling back to) unmovable" (which is not listed separately above, but
> follows logically from "reclaimable placed with movable" not changing
> that much). I really wonder why that is. The patch effectively only
> changes the decision to change the migratetype of a pageblock; it
> doesn't affect the actual stealing decision (which is always allowed for
> RECLAIMABLE anyway, see can_steal_fallback()). Moreover, since we can't
> distinguish UNMOVABLE from RECLAIMABLE when counting, good_pages is 0,
> and thus even the decision to change the pageblock migratetype shouldn't
> be changed by the patch for this case. I must recheck the implementation...
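
(For context, the check the quoted text refers to looks roughly like the
following. This is only a simplified sketch of can_steal_fallback() from
mm/page_alloc.c of that era, not a verbatim excerpt.)

static bool can_steal_fallback(unsigned int order, int start_mt)
{
	/* An allocation spanning the whole pageblock can always take it. */
	if (order >= pageblock_order)
		return true;

	/*
	 * Stealing is permitted for large-enough orders, and unconditionally
	 * for UNMOVABLE and RECLAIMABLE requests, which is why the patch
	 * can't change the stealing decision for RECLAIMABLE at all.
	 */
	if (order >= pageblock_order / 2 ||
	    start_mt == MIGRATE_RECLAIMABLE ||
	    start_mt == MIGRATE_UNMOVABLE ||
	    page_group_by_mobility_disabled)
		return true;

	return false;
}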

Ah, there it is... not enough LISP

-       if (pages >= (1 << (pageblock_order-1)) ||
+       /* Claim the whole block if over half of it is free or good type */
+       if (free_pages + good_pages >= (1 << (pageblock_order-1)) ||
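
For reference, that condition sits in steal_suitable_fallback() roughly as
sketched below. This is not the exact code; it only uses the patch's naming
(free_pages, good_pages, and an extended move_freepages_block() that also
reports good pages) to show where the threshold applies:

static void steal_suitable_fallback(struct zone *zone, struct page *page,
					int start_type)
{
	int good_pages = 0;
	int free_pages;

	/* Move this pageblock's free pages to start_type's free lists. */
	free_pages = move_freepages_block(zone, page, start_type, &good_pages);

	/*
	 * Claim the whole pageblock (i.e. change its migratetype) only if at
	 * least half of it is free or already of a compatible ("good") type.
	 */
	if (free_pages + good_pages >= (1 << (pageblock_order-1)) ||
			page_group_by_mobility_disabled)
		set_pageblock_migratetype(page, start_type);
}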
