Open Source and information security mailing list archives
 
Message-ID: <4ac7912e-724a-448b-b396-960956a46b37@redhat.com>
Date: Fri, 15 Dec 2023 16:16:33 +0100
From: David Hildenbrand <david@...hat.com>
To: "Yin, Fengwei" <fengwei.yin@...el.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
 "Matthew Wilcox (Oracle)" <willy@...radead.org>,
 Hugh Dickins <hughd@...gle.com>, Ryan Roberts <ryan.roberts@....com>,
 Mike Kravetz <mike.kravetz@...cle.com>, Muchun Song <muchun.song@...ux.dev>,
 Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH v1 14/39] mm/rmap: introduce
 folio_add_anon_rmap_[pte|ptes|pmd]()

On 15.12.23 03:26, Yin, Fengwei wrote:
> 
> 
> On 12/11/2023 11:56 PM, David Hildenbrand wrote:
>> Let's mimic what we did with folio_add_file_rmap_*() so we can similarly
>> replace page_add_anon_rmap() next.
>>
>> Make the compiler always special-case on the granularity by using
>> __always_inline.
>>
>> Note that the new functions ignore the RMAP_COMPOUND flag, which we will
>> remove as soon as page_add_anon_rmap() is gone.
>>
>> Signed-off-by: David Hildenbrand <david@...hat.com>
> Reviewed-by: Yin Fengwei <fengwei.yin@...el.com>
> 
> With a small question below.
> 

Thanks!

[...]

>> +	if (flags & RMAP_EXCLUSIVE) {
>> +		switch (mode) {
>> +		case RMAP_MODE_PTE:
>> +			for (i = 0; i < nr_pages; i++)
>> +				SetPageAnonExclusive(page + i);
>> +			break;
>> +		case RMAP_MODE_PMD:
>> +			SetPageAnonExclusive(page);
>> +			break;
>> +		}
>> +	}
>> +	for (i = 0; i < nr_pages; i++) {
>> +		struct page *cur_page = page + i;
>> +
>> +		/* While PTE-mapping a THP we have a PMD and a PTE mapping. */
>> +		VM_WARN_ON_FOLIO((atomic_read(&cur_page->_mapcount) > 0 ||
>> +				  (folio_test_large(folio) &&
>> +				   folio_entire_mapcount(folio) > 1)) &&
>> +				 PageAnonExclusive(cur_page), folio);
>> +	}
> This change iterates over all pages in the PMD case. The original
> behavior didn't check all pages. Is this change intentional? Thanks.

Yes, on purpose. I first thought about also separating the code paths 
here, but realized that it makes much more sense to check each 
individual subpage that is effectively getting mapped by that PMD, 
instead of only the head page.
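
To illustrate the difference (a toy userspace sketch, not kernel code;
all names here are made up for illustration): a head-page-only sanity
check can miss an inconsistent subpage that a per-subpage loop over the
PMD-mapped range would catch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for struct page: just the two fields the check cares
 * about. Not the kernel's struct page. */
struct toy_page {
	int mapcount;		/* simplified per-page mapcount */
	bool anon_exclusive;	/* stand-in for PageAnonExclusive() */
};

/* Old-style check: only look at the head page of the PMD mapping. */
static bool head_only_check_ok(const struct toy_page *pages, size_t nr)
{
	(void)nr;
	return !(pages[0].mapcount > 0 && pages[0].anon_exclusive);
}

/* New-style check: verify every subpage effectively mapped by the PMD,
 * mirroring the loop over nr_pages in the patch. */
static bool all_subpages_check_ok(const struct toy_page *pages, size_t nr)
{
	for (size_t i = 0; i < nr; i++)
		if (pages[i].mapcount > 0 && pages[i].anon_exclusive)
			return false;
	return true;
}
```

With an inconsistency only on a tail page, head_only_check_ok() passes
while all_subpages_check_ok() flags it, which is the point of iterating
all pages in the PMD case.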

I'll add a comment to the patch description.

-- 
Cheers,

David / dhildenb

