Message-ID: <20200622094020epcms1p639cc33933fbb7a9d578adb16a6ea0734@epcms1p6>
Date:   Mon, 22 Jun 2020 18:40:20 +0900
From:   김재원 <jaewon31.kim@...sung.com>
To:     Michal Hocko <mhocko@...nel.org>,
        김재원 <jaewon31.kim@...sung.com>
CC:     "vbabka@...e.cz" <vbabka@...e.cz>,
        "bhe@...hat.com" <bhe@...hat.com>,
        "mgorman@...hsingularity.net" <mgorman@...hsingularity.net>,
        "minchan@...nel.org" <minchan@...nel.org>,
        "mgorman@...e.de" <mgorman@...e.de>,
        "hannes@...xchg.org" <hannes@...xchg.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "jaewon31.kim@...il.com" <jaewon31.kim@...il.com>,
        이용택 <ytk.lee@...sung.com>,
        김철민 <cmlaika.kim@...sung.com>
Subject: RE: [PATCH v4] page_alloc: consider highatomic reserve in watermark
 fast

>On Sat 20-06-20 08:59:58, Jaewon Kim wrote:
>[...]
>> @@ -3502,19 +3525,12 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
>>          const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
>>  
>>          /* free_pages may go negative - that's OK */
>> -        free_pages -= (1 << order) - 1;
>> +        free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);
>>  
>>          if (alloc_flags & ALLOC_HIGH)
>>                  min -= min / 2;
>>  
>> -        /*
>> -         * If the caller does not have rights to ALLOC_HARDER then subtract
>> -         * the high-atomic reserves. This will over-estimate the size of the
>> -         * atomic reserve but it avoids a search.
>> -         */
>> -        if (likely(!alloc_harder)) {
>> -                free_pages -= z->nr_reserved_highatomic;
>> -        } else {
>> +        if (unlikely(alloc_harder)) {
>>                  /*
>>                   * OOM victims can try even harder than normal ALLOC_HARDER
>>                   * users on the grounds that it's definitely going to be in
>[...]
>> @@ -3582,25 +3591,22 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
>>                                  unsigned long mark, int highest_zoneidx,
>>                                  unsigned int alloc_flags)
>>  {
>> -        long free_pages = zone_page_state(z, NR_FREE_PAGES);
>> -        long cma_pages = 0;
>> +        long free_pages;
>> +        long unusable_free;
>>  
>> -#ifdef CONFIG_CMA
>> -        /* If allocation can't use CMA areas don't use free CMA pages */
>> -        if (!(alloc_flags & ALLOC_CMA))
>> -                cma_pages = zone_page_state(z, NR_FREE_CMA_PAGES);
>> -#endif
>> +        free_pages = zone_page_state(z, NR_FREE_PAGES);
>> +        unusable_free = __zone_watermark_unusable_free(z, order, alloc_flags);
>>  
>>          /*
>>           * Fast check for order-0 only. If this fails then the reserves
>> -         * need to be calculated. There is a corner case where the check
>> -         * passes but only the high-order atomic reserve are free. If
>> -         * the caller is !atomic then it'll uselessly search the free
>> -         * list. That corner case is then slower but it is harmless.
>> +         * need to be calculated.
>>           */
>> -        if (!order && (free_pages - cma_pages) >
>> -                                mark + z->lowmem_reserve[highest_zoneidx])
>> -                return true;
>> +        if (!order) {
>> +                long fast_free = free_pages - unusable_free;
>> +
>> +                if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
>> +                        return true;
>> +        }
> 
>There is no user of unusable_free for order > 0. With your current code
>__zone_watermark_unusable_free would be called twice for high-order
>allocations unless the compiler tries to be clever.

Yes, you're right.
The following call could be made only for order-0:
 unusable_free = __zone_watermark_unusable_free(z, order, alloc_flags);
Let me fix it in v5.
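
For reference, a rough sketch of the v5 shape, assuming the v4 helper
keeps its signature (it returns (1 << order) - 1 plus whatever highatomic
and CMA reserves the caller cannot use). This is only the outline of the
change, not the final patch:

static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
                                unsigned long mark, int highest_zoneidx,
                                unsigned int alloc_flags)
{
        long free_pages = zone_page_state(z, NR_FREE_PAGES);

        /*
         * Fast check for order-0 only. If this fails then the reserves
         * need to be calculated. Computing the unusable free pages inside
         * the branch means high-order requests no longer pay for a call
         * whose result they never use; __zone_watermark_ok() does its own
         * subtraction anyway.
         */
        if (!order) {
                long usable_free = free_pages -
                        __zone_watermark_unusable_free(z, 0, alloc_flags);

                if (usable_free > mark + z->lowmem_reserve[highest_zoneidx])
                        return true;
        }

        return __zone_watermark_ok(z, order, mark, highest_zoneidx,
                                        alloc_flags, free_pages);
}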

> 
>But more importantly, I have a hard time following why we need both
>zone_watermark_fast and zone_watermark_ok now. They should be
>essentially the same for anything but order == 0. For order 0 the
>only difference between the two is that zone_watermark_ok checks for
>ALLOC_HIGH and ALLOC_HARDER/ALLOC_OOM. So what exactly is fast about
>the former, and why do we need it these days?
> 

I think the author, Mel, may answer better. But I think zone_watermark_fast
may be fast because 1) it skips the extra watermark adjustments and checks
and 2) it is inline rather than a function call. According to the
description of commit 48ee5f3696f6, it seems to bring about a 4%
improvement.
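
For context, the extra work the fast path skips for order-0 is roughly the
following part of __zone_watermark_ok() (paraphrased from the kernel around
this version; the exact comments differ):

        if (alloc_flags & ALLOC_HIGH)
                min -= min / 2;

        if (unlikely(alloc_harder)) {
                /* OOM victims may dip further into the reserves */
                if (alloc_flags & ALLOC_OOM)
                        min -= min / 2;
                else
                        min -= min / 4;
        }

        /* an order-0 request is done after the min/lowmem check ... */
        if (free_pages <= min + z->lowmem_reserve[highest_zoneidx])
                return false;

        if (!order)
                return true;

        /* ... but a high-order one must also scan the per-order free lists */

So besides being inline, the fast path avoids the min adjustments entirely
whenever the order-0 free count is comfortably above the watermark.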

>> 
>>          return __zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags,
>>                                          free_pages);
>> -- 
>> 2.17.1
>> 
> 
>-- 
>Michal Hocko
>SUSE Labs
> 
