Message-ID: <6c69c2d1-8889-aa63-f28e-4cd33a5fd854@suse.cz>
Date:   Thu, 5 Sep 2019 15:59:13 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Sangwoo <sangwoo2.park@....com>, hannes@...xchg.org,
        arunks@...eaurora.org, guro@...com, richard.weiyang@...il.com,
        glider@...gle.com, jannh@...gle.com, dan.j.williams@...el.com,
        akpm@...ux-foundation.org, alexander.h.duyck@...ux.intel.com,
        rppt@...ux.vnet.ibm.com, gregkh@...uxfoundation.org,
        janne.huttunen@...ia.com, pasha.tatashin@...een.com,
        Michal Hocko <mhocko@...e.com>, osalvador@...e.de,
        mgorman@...hsingularity.net, khlebnikov@...dex-team.ru
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: Add nr_free_highatomic to fix incorrect watermark
 routine

On 8/30/19 11:25 AM, Sangwoo wrote:
> The highatomic migrate reserve can grow to 1% of total memory, and it
> is used only for high-order (> 0) allocations. So the full reserve
> size is subtracted during the watermark check if the allocation type
> isn't alloc_harder.
> 
> This has a problem. Pages already allocated from the highatomic
> reserve are no longer counted in NR_FREE_PAGES, so subtracting the
> entire reserve size subtracts the allocated highatomic pages a second
> time. This can cause allocation failures even though there are enough
> free pages.

This is known; the comment in __zone_watermark_ok() says "This will
over-estimate the size of the atomic reserve but it avoids a search."
It was discussed during review and wasn't considered a big issue,
because highatomic pageblocks are unreserved on demand before OOM
happens.
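
Roughly, the check in question looks like this (a simplified sketch of
__zone_watermark_ok() from mm/page_alloc.c, paraphrased rather than
quoted verbatim):

	/*
	 * Sketch: NR_FREE_PAGES already excludes pages handed out from
	 * the highatomic reserve, so subtracting the full reserved size
	 * counts the allocated portion twice. E.g. with a 2MB reserve
	 * of which 1MB is allocated, free_pages ends up 1MB lower than
	 * the true non-reserved free memory.
	 */
	long free_pages = zone_page_state(z, NR_FREE_PAGES);

	if (likely(!alloc_harder))
		free_pages -= z->nr_reserved_highatomic;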

> @@ -919,6 +923,9 @@ static inline void __free_one_page(struct page *page,
>  	VM_BUG_ON(migratetype == -1);
>  	if (likely(!is_migrate_isolate(migratetype)))
>  		__mod_zone_freepage_state(zone, 1 << order, migratetype);
> +	if (is_migrate_highatomic(migratetype) ||
> +		is_migrate_highatomic_page(page))
> +		__mod_zone_page_state(zone, NR_FREE_HIGHATOMIC_PAGES, 1 << order);

I suspect the counter will eventually get imbalanced, at the least due
to merging of buddies from a highatomic pageblock and a non-highatomic
pageblock. To get it right, the accounting would have to be complicated
in a similar way to how we handle MIGRATE_ISOLATE and MIGRATE_CMA, and
that wasn't considered serious enough to warrant the extra complexity.
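
For comparison, the CMA counter stays balanced largely because a
pageblock's CMA status never changes once set, and the accounting is
centralized (sketch of __mod_zone_freepage_state() from
include/linux/vmstat.h, quoted from memory):

	static inline void __mod_zone_freepage_state(struct zone *zone,
						     int nr_pages, int migratetype)
	{
		__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
		if (is_migrate_cma(migratetype))
			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
	}

A highatomic pageblock, by contrast, can be converted back (see
unreserve_highatomic_pageblock()) while its pages sit on the free
lists, so an increment made under one migratetype can later be paired
with a decrement under another.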
