Message-Id: <20220509170632.fec2f56ad9f640329330b9de@linux-foundation.org>
Date:   Mon, 9 May 2022 17:06:32 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Sultan Alsawaf <sultan@...neltoast.com>
Cc:     stable@...r.kernel.org, Minchan Kim <minchan@...nel.org>,
        Nitin Gupta <ngupta@...are.org>,
        Sergey Senozhatsky <senozhatsky@...omium.org>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free
 and page migration

On Sun,  8 May 2022 19:47:02 -0700 Sultan Alsawaf <sultan@...neltoast.com> wrote:

> From: Sultan Alsawaf <sultan@...neltoast.com>
> 
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage and
> unsafely dereference page_private(), it can unsafely dereference a torn
> pointer to the next page (since there's a data race), and it can observe a
> spurious NULL pointer to the next page and thus not lock all of the
> zspage's pages (since a single page migration will reconstruct the entire
> page list, and create_page_chain() unconditionally zeroes out each list
> pointer in the process).
> 
> Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
> with page migration.
> 
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
>   */
>  static void lock_zspage(struct zspage *zspage)
>  {
> -	struct page *page = get_first_page(zspage);
> +	struct page *curr_page, *page;
>  
> -	do {
> -		lock_page(page);
> -	} while ((page = get_next_page(page)) != NULL);
> +	/*
> +	 * Pages we haven't locked yet can be migrated off the list while we're
> +	 * trying to lock them, so we need to be careful and only attempt to
> +	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
> +	 * may no longer belong to the zspage. This means that we may wait for
> +	 * the wrong page to unlock, so we must take a reference to the page
> +	 * prior to waiting for it to unlock outside migrate_read_lock().
> +	 */
> +	while (1) {
> +		migrate_read_lock(zspage);
> +		page = get_first_page(zspage);
> +		if (trylock_page(page))
> +			break;
> +		get_page(page);
> +		migrate_read_unlock(zspage);
> +		wait_on_page_locked(page);

Why not simply lock_page() here?  The get_page() alone won't protect
against all the dire consequences you have identified, will it?
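
Something along these lines, perhaps (untested, just a sketch of what
I mean, keeping the same retry-from-the-head structure as your patch):

	while (1) {
		migrate_read_lock(zspage);
		page = get_first_page(zspage);
		if (trylock_page(page))
			break;
		/*
		 * Pin the page and drop the migrate lock before
		 * sleeping in lock_page().
		 */
		get_page(page);
		migrate_read_unlock(zspage);
		lock_page(page);
		/*
		 * The page may have migrated off the zspage while we
		 * slept, so drop the lock and retry from the head.
		 */
		unlock_page(page);
		put_page(page);
	}

Though I may be missing why the bare wait is preferable.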

> +		put_page(page);
> +	}
> +
> +	curr_page = page;
> +	while ((page = get_next_page(curr_page))) {
> +		if (trylock_page(page)) {
> +			curr_page = page;
> +		} else {
> +			get_page(page);
> +			migrate_read_unlock(zspage);
> +			wait_on_page_locked(page);

ditto.

> +			put_page(page);
> +			migrate_read_lock(zspage);
> +		}
> +	}
> +	migrate_read_unlock(zspage);
>  }
>  
>  static int zs_init_fs_context(struct fs_context *fc)
