Date:   Mon, 24 Aug 2020 22:10:49 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     js1304@...il.com
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Michal Hocko <mhocko@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
        kernel-team@....com, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH for v5.9] mm/page_alloc: handle a missing case for
 memalloc_nocma_{save/restore} APIs

On Tue, 25 Aug 2020 13:59:42 +0900 js1304@...il.com wrote:

> From: Joonsoo Kim <iamjoonsoo.kim@....com>
> 
> The memalloc_nocma_{save/restore} APIs can be used to skip page
> allocation from the CMA area, but there is a missing case in which a
> page from the CMA area can still be allocated even when the APIs are
> used. This patch handles that case to fix the potential issue.
> 
> The missing case is an allocation from the pcplist. The MIGRATE_MOVABLE
> pcplist could contain pages from the CMA area, so we need to skip them
> when ALLOC_CMA isn't specified.
> 
> This patch implements the behaviour by checking the page allocated from
> the pcplist rather than skipping the pcplist entirely. Skipping the
> pcplist entirely would cause a mismatch between the watermark check and
> the actual page allocation, and it would require breaking the current
> code layering in which order-0 pages are always handled by the pcplist.
> I'd prefer to avoid that, so this patch uses a different way to skip
> CMA page allocation from the pcplist.
> 
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3341,6 +3341,22 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
>  	pcp = &this_cpu_ptr(zone->pageset)->pcp;
>  	list = &pcp->lists[migratetype];
>  	page = __rmqueue_pcplist(zone,  migratetype, alloc_flags, pcp, list);
> +#ifdef CONFIG_CMA
> +	if (page) {
> +		int mt = get_pcppage_migratetype(page);
> +
> +		/*
> +		 * The pcp could contain pages from the CMA area, which we need
> +		 * to skip when !ALLOC_CMA. Flush the whole pcplist and retry.
> +		 */
> +		if (is_migrate_cma(mt) && !(alloc_flags & ALLOC_CMA)) {
> +			list_add(&page->lru, &pcp->lists[migratetype]);
> +			pcp->count++;
> +			free_pcppages_bulk(zone, pcp->count, pcp);
> +			page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
> +		}
> +	}
> +#endif
>  	if (page) {
>  		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
>  		zone_statistics(preferred_zone, zone);

That's a bunch more code on a very hot path to serve an obscure feature
which has a single obscure callsite.

Can we instead put the burden on that callsite rather than upon
everyone?  For (dumb) example, teach __gup_longterm_locked() to put the
page back if it's CMA and go get another one?
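
A minimal sketch of that idea, purely for illustration: the helper name is
invented, the surrounding __gup_longterm_locked() plumbing is assumed rather
than taken from mm/gup.c, and put_page() stands in for whatever release the
real pin path would use.

/*
 * Illustrative sketch only, not kernel code: after the longterm GUP has
 * pinned the pages, reject the whole batch if any page sits on a CMA
 * pageblock, so the caller can drop the pins and retry instead of
 * filtering CMA pages in the rmqueue_pcplist() fast path.
 */
static long reject_cma_pages(struct page **pages, long nr_pinned)
{
	long i;

	for (i = 0; i < nr_pinned; i++) {
		if (is_migrate_cma_page(pages[i])) {
			/* Drop every page pinned so far; the caller retries. */
			while (nr_pinned > 0)
				put_page(pages[--nr_pinned]);
			return -EAGAIN;
		}
	}
	return nr_pinned;
}

That way only the longterm-pin path pays for the CMA check, instead of every
order-0 allocation going through the pcplist.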

