Date:   Mon, 21 Aug 2023 11:33:51 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Sidhartha Kumar <sidhartha.kumar@...cle.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        akpm@...ux-foundation.org, songmuchun@...edance.com,
        willy@...radead.org
Subject: Re: [PATCH v6] mm/filemap: remove hugetlb special casing in filemap.c

On 08/17/23 11:18, Sidhartha Kumar wrote:
> Remove special cased hugetlb handling code within the page cache by
> changing the granularity of each index to the base page size rather than
> the huge page size. Adds new wrappers for hugetlb code to interact with the
> page cache which convert to a linear index.
<snip>
> @@ -237,7 +234,7 @@ void filemap_free_folio(struct address_space *mapping, struct folio *folio)
>  	if (free_folio)
>  		free_folio(folio);
>  
> -	if (folio_test_large(folio) && !folio_test_hugetlb(folio))
> +	if (folio_test_large(folio))
>  		refs = folio_nr_pages(folio);
>  	folio_put_refs(folio, refs);
>  }
> @@ -858,14 +855,15 @@ noinline int __filemap_add_folio(struct address_space *mapping,
>  
>  	if (!huge) {
>  		int error = mem_cgroup_charge(folio, NULL, gfp);
> -		VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
>  		if (error)
>  			return error;
>  		charged = true;
> -		xas_set_order(&xas, index, folio_order(folio));
> -		nr = folio_nr_pages(folio);
>  	}

When a hugetlb page is added to the page cache, the ref count will now
be increased by folio_nr_pages() instead of 1.  So, the ref count for a
2MB hugetlb page on x86 will be increased by 512.
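
To spell out where those references come from: with the special casing
gone, the relevant lines in __filemap_add_folio() work out to roughly
the following (paraphrased for illustration, not the exact post-patch
code):

	/* nr is now folio_nr_pages() for hugetlb folios as well */
	xas_set_order(&xas, index, folio_order(folio));
	nr = folio_nr_pages(folio);
	...
	/* 512 references for a 2MB folio with 4K base pages */
	folio_ref_add(folio, nr);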

We will need a corresponding change to migrate_huge_page_move_mapping().
For migration, the ref count is checked as follows:

	xas_lock_irq(&xas);
	expected_count = 2 + folio_has_private(src);
	if (!folio_ref_freeze(src, expected_count)) {
		xas_unlock_irq(&xas);
		return -EAGAIN;
	}

So, this patch will break migration of hugetlb pages in the page
cache.
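
One way to handle that would be to compute the expected count in
migrate_huge_page_move_mapping() the same way folio_migrate_mapping()
does, via the folio_expected_refs() helper in mm/migrate.c, so the
folio_nr_pages() references taken by the page cache are accounted for.
A rough sketch (untested, just to illustrate the idea):

	xas_lock_irq(&xas);
	/*
	 * 1 ref for the caller plus folio_nr_pages() page cache refs
	 * (plus 1 if private), instead of the hard coded 2.
	 */
	expected_count = folio_expected_refs(mapping, src);
	if (!folio_ref_freeze(src, expected_count)) {
		xas_unlock_irq(&xas);
		return -EAGAIN;
	}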

Sorry for not noticing this earlier.
-- 
Mike Kravetz
