Date: Tue, 25 Aug 2020 13:59:42 +0900
From: js1304@...il.com
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Michal Hocko <mhocko@...nel.org>, Vlastimil Babka <vbabka@...e.cz>,
	"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>, kernel-team@....com,
	Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH for v5.9] mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs

From: Joonsoo Kim <iamjoonsoo.kim@....com>

The memalloc_nocma_{save/restore} APIs can be used to skip page allocation
from the CMA area, but one case is missed, so a page from the CMA area can
still be allocated even while the APIs are in use. This patch handles that
case to fix the potential issue.

The missing case is allocation from the pcplist. A MIGRATE_MOVABLE pcplist
can hold pages from the CMA area, so those pages must be skipped when
ALLOC_CMA is not specified. This patch implements that behaviour by
checking the page returned from the pcplist rather than skipping pcplist
allocation entirely. Skipping the pcplist entirely would cause a mismatch
between the watermark check and the actual page allocation, and it would
require breaking the current code layering in which an order-0 page is
always handled by the pcplist. I'd prefer to avoid that, so this patch
uses a different way to skip CMA page allocation from the pcplist.
Fixes: 8510e69c8efe ("mm/page_alloc: fix memalloc_nocma_{save/restore} APIs")
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
 mm/page_alloc.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e2bab4..c4abf58 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3341,6 +3341,22 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
 	list = &pcp->lists[migratetype];
 	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
+#ifdef CONFIG_CMA
+	if (page) {
+		int mt = get_pcppage_migratetype(page);
+
+		/*
+		 * pcp could have the pages on CMA area and we need to skip it
+		 * when !ALLOC_CMA. Free all pcplist and retry allocation.
+		 */
+		if (is_migrate_cma(mt) && !(alloc_flags & ALLOC_CMA)) {
+			list_add(&page->lru, &pcp->lists[migratetype]);
+			pcp->count++;
+			free_pcppages_bulk(zone, pcp->count, pcp);
+			page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
+		}
+	}
+#endif
 	if (page) {
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
 		zone_statistics(preferred_zone, zone);
-- 
2.7.4