Message-ID: <20161026044125.GC2901@js1304-P5Q-DELUXE>
Date:   Wed, 26 Oct 2016 13:41:25 +0900
From:   Joonsoo Kim <iamjoonsoo.kim@....com>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Mel Gorman <mgorman@...hsingularity.net>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/5] mm/page_alloc: use smallest fallback page first
 in movable allocation

On Fri, Oct 14, 2016 at 12:52:26PM +0200, Vlastimil Babka wrote:
> On 10/14/2016 03:26 AM, Joonsoo Kim wrote:
> >On Thu, Oct 13, 2016 at 11:12:10AM +0200, Vlastimil Babka wrote:
> >>On 10/13/2016 10:08 AM, js1304@...il.com wrote:
> >>>From: Joonsoo Kim <iamjoonsoo.kim@....com>
> >>>
> >>>When we try to find a freepage in the fallback buddy list, we always
> >>>search for the largest one. This helps against fragmentation when we
> >>>process an unmovable/reclaimable allocation request, because such a
> >>>request could cause permanent fragmentation on a movable pageblock,
> >>>and spreading such allocations out would cause even more
> >>>fragmentation. A movable allocation request is rather different: it
> >>>will simply be freed or migrated later, so it doesn't contribute to
> >>>fragmentation on the other pageblock. In this case, it is better not
> >>>to break the precious highest-order freepage, so we should search for
> >>>the smallest suitable freepage first.
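As a minimal userspace sketch of the search-direction idea above (not
the actual mm/page_alloc.c code; MAX_ORDER and the free-list
representation here are simplified assumptions):

/* Userspace sketch of the idea, not the mm/page_alloc.c implementation.
 * free_count[o] stands in for the length of a fallback migratetype's
 * free_list at order o. */
#include <stdio.h>

#define MAX_ORDER 11	/* typical kernel default, assumed here */

/* Current behaviour: take the largest fallback freepage available. */
static int find_fallback_largest(const int free_count[MAX_ORDER], int order)
{
	for (int o = MAX_ORDER - 1; o >= order; o--)
		if (free_count[o])
			return o;
	return -1;
}

/* Proposed for movable requests: take the smallest freepage that is big
 * enough, leaving high-order freepages in the fallback pageblock intact. */
static int find_fallback_smallest(const int free_count[MAX_ORDER], int order)
{
	for (int o = order; o < MAX_ORDER; o++)
		if (free_count[o])
			return o;
	return -1;
}

int main(void)
{
	/* e.g. one order-3 and one order-9 freepage in the fallback list */
	int free_count[MAX_ORDER] = { [3] = 1, [9] = 1 };

	printf("largest-first picks order %d\n",
	       find_fallback_largest(free_count, 2));	/* prints 9 */
	printf("smallest-first picks order %d\n",
	       find_fallback_smallest(free_count, 2));	/* prints 3 */
	return 0;
}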
> >>
> >>I've also pondered this, but then found a lower hanging fruit that
> >>should be hopefully clear win and mitigate most cases of breaking
> >>high-order pages unnecessarily:
> >>
> >>http://marc.info/?l=linux-mm&m=147582914330198&w=2
> >
> >Yes, I agree with that change. It's similar to a patch I tried
> >before.
> >
> >"mm/page_alloc: don't break highest order freepage if steal"
> >http://marc.info/?l=linux-mm&m=143011930520417&w=2
> 
> Ah, indeed, I forgot about it and had to rediscover :)
> 
> >
> >>
> >>So I would try that first, and then test your patch on top? With your
> >>patch there's a risk that we make it harder for unmovable/reclaimable
> >>pageblocks to become movable again (we start with the smallest page,
> >>which means there's a lower chance that move_freepages_block() will
> >>convert more than half of the block).
> >
> >Indeed, but with your "count movable pages when stealing" change, the
> >risk would disappear. :)
> 
> Hmm, but that counting is only triggered when we attempt to steal a
> whole pageblock. For a movable allocation, can_steal_fallback() allows
> that only for (order >= pageblock_order / 2), and since your patch
> makes "order" as small as possible for movable allocations, the
> chances are lower?

Chances are lower than they are currently, but we will eventually try
to steal such an (order >= pageblock_order / 2) freepage from the
unmovable pageblock, and your logic will then change the pageblock
migratetype from unmovable to movable.
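
For reference, the threshold in question lives in can_steal_fallback().
A simplified, compilable paraphrase (from memory of kernels around that
time; pageblock_order is reduced to a constant stand-in and the
page_group_by_mobility_disabled case is omitted) looks roughly like
this:

/* Simplified paraphrase of the can_steal_fallback() heuristic being
 * discussed, not verbatim kernel code. */
#include <stdbool.h>

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

static const unsigned int pageblock_order = 9;	/* 2MB pageblock on x86-64 */

static bool can_steal_fallback(unsigned int order, int start_mt)
{
	/* A request covering a whole pageblock can always claim it. */
	if (order >= pageblock_order)
		return true;

	/*
	 * Whole-pageblock stealing is allowed for unmovable/reclaimable
	 * requests at any order, but for a movable request only when the
	 * order is at least half a pageblock -- exactly the case that
	 * smallest-first search makes less likely to be reached.
	 */
	if (order >= pageblock_order / 2 ||
	    start_mt == MIGRATE_RECLAIMABLE ||
	    start_mt == MIGRATE_UNMOVABLE)
		return true;

	return false;
}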

Thanks.
