lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite for Android: free password hash cracker in your pocket
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Date:   Wed, 17 Jun 2020 12:03:42 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Jaewon Kim <jaewon31.kim@...il.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        김재원 <jaewon31.kim@...sung.com>,
        "mgorman@...hsingularity.net" <mgorman@...hsingularity.net>,
        "minchan@...nel.org" <minchan@...nel.org>,
        "mgorman@...e.de" <mgorman@...e.de>,
        "hannes@...xchg.org" <hannes@...xchg.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        이용택 <ytk.lee@...sung.com>,
        김철민 <cmlaika.kim@...sung.com>
Subject: Re: (2) [PATCH v2] page_alloc: consider highatomic reserve in
 wmartermark fast

On 06/17/20 at 12:46am, Jaewon Kim wrote:
...
> > > >>> i.e)
> > > >>> In following situation, watermark check fails (9MB - 8MB < 4MB) though there are
> > > >>> enough free (9MB - 4MB > 4MB). If this is really matter, we need to
> > > >>> count highatomic
> > > >>> free accurately.
> > > >>
> > > >> Seems to make sense. We only use zone->nr_reserved_highatomic to account
> > > >> the reserved highatomic, don't track how much have been used for
> > > >> highatomic allocation. But not sure if this will happen often, or just a
> > > >> very rare case, whether taking that into account will impact anything.
> > > >
> > > >Unfortunately there's a problem when trying to account free pages of a migrate
> > > >type exactly, as e.g. during reserve_highatomic_pageblock(), some pages might be
> > > >in pcplist of other cpu with other migratetype, and once they are freed, the
> > > >buddy merging will merge the different migratetypes and distort the accounting.
> >
> > Yeah, the migratetype could have been cached in page->index before we
> > finish the reserve_highatomic_pageblock(). Seems we take a very coarse
> > grained way to do the highatomic reservation and accounting. When I went
> > through the related code, found out we call
> > reserve_highatomic_pageblock() if below condition is met. So what if
> > order is 1, and all other pages on the page block have been used? Do we
> > possibly have this kind of extreme case?
> 
> If I correctly understand your question, yes I think so.
> If a hight-order free page was allocated on ALLOC_HARDER context, and the page
> was the last order-1, then zone->nr_reserved_highatomic will be increased by
> pageblock_nr_pages even though there was actually no free page moved to the
> highatomic free page list.

Exactly.

> 
> The highatomic logic, I think, was originally designed to reserve in
> that coarse grained
> way to mitigate highatomic allocation failure.
> 
> >
> > From Jaewon's captured information, we can see that the available free
> > highatomic is even less than half the zone->nr_reserved_highatomic.
> > Should we at least limit the reservation to the case with a bigger
> > order. E.g 1/2 of pageblock_nr_pages? Please correct me if I am wrong or
> > miss anyting.
> >
> 
> I do not know well, but I think high-order lower than 1/2 of pageblock_nr_pages
> also should be considered. i.e) a system using huge order-3 atomic allocation
> in a short time.
> 
> > "reserved_highatomic:32768KB and actually 14232kB free."
> 
> I think this unwanted case might happen by freed pages. The pages allocated
> for non-high-atomic also would be freed back into highatomic free
> list. But I guess
> there was sudden huge use of highatomic and partially freed.

OK, thanks for sharing. So we can leave with it, may do some improvement
if any issue is reported in the future.

> 
> >
> > get_page_from_freelist
> > {
> >         ...
> >                         if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
> >                                 reserve_highatomic_pageblock(page, zone, order);
> >         ...
> > }
> > > >Fixing this for all migratetypes would have performance overhead, so we only do
> > > >that for MIGRATE_ISOLATE which is not that frequent (and it took a while to
> > > >eliminate all corner cases), and CMA which doesn't change pageblocks dynamically.
> > >
> > > AFAIK we do not account free cma in pcp either. But yes accurate check could be
> > > overhead. For example, __mod_zone_freepage_state should account highatomic free
> > > as cma free. And we may see some incorrect accounting issue.
> > >
> > > >
> > > >So either we live with the possible overreclaim due to inaccurate counting per
> > > >your example above, or we instead let order-0 atomic allocations use highatomic
> > > >reserves.
> > > >
> > >
> > > Additionally regarding existing Mel's comment, let me remove some of it if you
> > > don't mind.
> > >
> > >         /*
> > >          * Fast check for order-0 only. If this fails then the reserves
> > > -        * need to be calculated. There is a corner case where the check
> > > -        * passes but only the high-order atomic reserve are free. If
> > > -        * the caller is !atomic then it'll uselessly search the free
> > > -        * list. That corner case is then slower but it is harmless.
> > > +        * need to be calculated.
> > >          */
> > >
> > > I will prepare v3 patch.
> > > Thank you again.
> > >
> >
> 

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ