Message-ID: <0a5ccb09-6f51-ca65-3c19-4c6371dbb9ba@suse.cz>
Date:   Fri, 2 Dec 2016 12:53:33 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Mel Gorman <mgorman@...hsingularity.net>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Christoph Lameter <cl@...ux.com>, Michal Hocko <mhocko@...e.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm, page_alloc: Keep pcp count and list contents in sync if struct page is corrupted

On 12/02/2016 12:29 PM, Mel Gorman wrote:
> Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
> defer debugging checks of pages allocated from the PCP") will allow the
> per-cpu list counter to be out of sync with the per-cpu list contents
> if a struct page is corrupted.
>
> The consequence is an infinite loop if the per-cpu lists get fully drained
> by free_pcppages_bulk(): every list is empty, yet the count is still
> positive, so the search below for a non-empty list never terminates
>
>                 do {
>                         batch_free++;
>                         if (++migratetype == MIGRATE_PCPTYPES)
>                                 migratetype = 0;
>                         list = &pcp->lists[migratetype];
>                 } while (list_empty(list));
>
> From a user perspective, it's a bad page warning followed by a soft lockup
> with interrupts disabled in free_pcppages_bulk().
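>
> To see why this hangs, here is a minimal standalone sketch of the failure
> mode (a simplified model, not kernel code; toy_pcp and the bail-out
> counter are invented for illustration):
>
> #include <stdbool.h>
> #include <stdio.h>
>
> #define MIGRATE_PCPTYPES 3
>
> struct toy_pcp {
> 	int count;			/* believed number of pcp pages */
> 	bool empty[MIGRATE_PCPTYPES];	/* stand-in for list_empty() */
> };
>
> int main(void)
> {
> 	/* Corruption left count positive while every list is empty */
> 	struct toy_pcp pcp = { .count = 1, .empty = { true, true, true } };
> 	int migratetype = 0, spins = 0;
>
> 	do {
> 		if (++migratetype == MIGRATE_PCPTYPES)
> 			migratetype = 0;
> 		if (++spins > 10) {	/* bound the demo; the kernel spins forever */
> 			printf("stuck: count=%d but all lists empty\n", pcp.count);
> 			return 0;
> 		}
> 	} while (pcp.empty[migratetype]);
>
> 	return 0;
> }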
>
> This patch keeps the accounting in sync by returning the number of pages
> actually added to the per-cpu list.
>
> Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
> Signed-off-by: Mel Gorman <mgorman@...e.de>
> cc: stable@...r.kernel.org [4.7+]

Acked-by: Vlastimil Babka <vbabka@...e.cz>

> ---
>  mm/page_alloc.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6de9440e3ae2..34ada718ef47 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2192,7 +2192,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  			unsigned long count, struct list_head *list,
>  			int migratetype, bool cold)
>  {
> -	int i;
> +	int i, alloced = 0;
>
>  	spin_lock(&zone->lock);
>  	for (i = 0; i < count; ++i) {
> @@ -2217,13 +2217,21 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  		else
>  			list_add_tail(&page->lru, list);
>  		list = &page->lru;
> +		alloced++;
>  		if (is_migrate_cma(get_pcppage_migratetype(page)))
>  			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
>  					      -(1 << order));
>  	}
> +
> +	/*
> +	 * i pages were removed from the buddy list even if some leak due
> +	 * to check_pcp_refill failing, so adjust NR_FREE_PAGES based
> +	 * on i. Do not confuse this with 'alloced', which is the number
> +	 * of pages added to the pcp list.
> +	 */
>  	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
>  	spin_unlock(&zone->lock);
> -	return i;
> +	return alloced;
>  }
>
>  #ifdef CONFIG_NUMA
>
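
For reference, the call site adds the return value to pcp->count, which is
why returning 'alloced' (pages actually placed on the list) rather than 'i'
(pages removed from the buddy list) restores the invariant. A simplified
sketch of the caller as it looked in this era (paraphrased from
buffered_rmqueue(), not a verbatim quote):

	if (list_empty(list)) {
		/* Return value must match pages actually added to 'list' */
		pcp->count += rmqueue_bulk(zone, 0, pcp->batch, list,
					   migratetype, cold);
		if (unlikely(list_empty(list)))
			goto failed;
	}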
