Date: Wed, 4 Jan 2017 21:47:44 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	stable@...r.kernel.org,
	Mel Gorman <mgorman@...e.de>,
	Vlastimil Babka <vbabka@...e.cz>,
	Michal Hocko <mhocko@...e.com>,
	Hillf Danton <hillf.zj@...baba-inc.com>,
	Christoph Lameter <cl@...ux.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 4.8 59/85] mm, page_alloc: keep pcp count and list contents in sync if struct page is corrupted

4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mel Gorman <mgorman@...hsingularity.net>

commit a6de734bc002fe2027ccc074fbbd87d72957b7a4 upstream.

Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
defer debugging checks of pages allocated from the PCP") will allow the
per-cpu list counter to be out of sync with the per-cpu list contents
if a struct page is corrupted.

The consequence is an infinite loop if the per-cpu lists get fully
drained by free_pcppages_bulk because all the lists are empty but the
count is positive.  The infinite loop occurs here

	do {
		batch_free++;
		if (++migratetype == MIGRATE_PCPTYPES)
			migratetype = 0;
		list = &pcp->lists[migratetype];
	} while (list_empty(list));

What the user sees is a bad page warning followed by a soft lockup with
interrupts disabled in free_pcppages_bulk().

This patch keeps the accounting in sync.

Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
Link: http://lkml.kernel.org/r/20161202112951.23346-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@...e.de>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Acked-by: Michal Hocko <mhocko@...e.com>
Acked-by: Hillf Danton <hillf.zj@...baba-inc.com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 mm/page_alloc.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2173,7 +2173,7 @@ static int rmqueue_bulk(struct zone *zon
 			unsigned long count, struct list_head *list,
 			int migratetype, bool cold)
 {
-	int i;
+	int i, alloced = 0;
 
 	spin_lock(&zone->lock);
 	for (i = 0; i < count; ++i) {
@@ -2198,13 +2198,21 @@ static int rmqueue_bulk(struct zone *zon
 		else
 			list_add_tail(&page->lru, list);
 		list = &page->lru;
+		alloced++;
 		if (is_migrate_cma(get_pcppage_migratetype(page)))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
 					      -(1 << order));
 	}
+
+	/*
+	 * i pages were removed from the buddy list even if some leak due
+	 * to check_pcp_refill failing so adjust NR_FREE_PAGES based
+	 * on i. Do not confuse with 'alloced' which is the number of
+	 * pages added to the pcp list.
+	 */
 	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
 	spin_unlock(&zone->lock);
-	return i;
+	return alloced;
 }
 
 #ifdef CONFIG_NUMA
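
[Editor's illustration, not part of the patch: a minimal userspace model of
the livelock described in the changelog.  The struct pcp_model and the
drain_terminates() helper are invented simplifications; only the rotation
over MIGRATE_PCPTYPES mirrors the kernel's do/while loop, and a spin guard
is added so the demonstration can report the livelock instead of hanging.]

	/*
	 * Userspace sketch (not kernel code): when the claimed count says
	 * pages remain but every per-migratetype list is empty, the
	 * migratetype rotation never finds a non-empty list.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define MIGRATE_PCPTYPES 3

	struct pcp_model {
		int count;                      /* claimed number of queued pages */
		int list_len[MIGRATE_PCPTYPES]; /* actual length of each list */
	};

	/* Returns true if draining terminates, false if it would spin forever. */
	static bool drain_terminates(struct pcp_model *pcp, int to_free)
	{
		int migratetype = 0;

		while (to_free > 0 && pcp->count > 0) {
			int spins = 0;

			/*
			 * Mirrors the rotation quoted in the changelog; the
			 * spin guard is added purely for the demonstration.
			 */
			while (pcp->list_len[migratetype] == 0) {
				migratetype = (migratetype + 1) % MIGRATE_PCPTYPES;
				if (++spins > MIGRATE_PCPTYPES)
					return false; /* all lists empty, count still positive */
			}
			pcp->list_len[migratetype]--;
			pcp->count--;
			to_free--;
		}
		return true;
	}

	int main(void)
	{
		/* Desync: a page was rejected by a debug check but still counted. */
		struct pcp_model bad  = { .count = 1, .list_len = { 0, 0, 0 } };
		struct pcp_model good = { .count = 0, .list_len = { 0, 0, 0 } };

		printf("desynced pcp drains: %s\n",
		       drain_terminates(&bad, 1) ? "yes" : "no (livelock)");
		printf("in-sync pcp drains:  %s\n",
		       drain_terminates(&good, 1) ? "yes" : "no (livelock)");
		return 0;
	}

[The patch avoids the desync at the source: rmqueue_bulk() returns
'alloced' (pages actually added to the pcp list) rather than 'i' (pages
removed from the buddy list), so the pcp count can never exceed the list
contents even when check_pcp_refill rejects a corrupted page.]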