Message-ID: <3572ae2e-2141-4a70-99da-850b2e7ade41@redhat.com>
Date: Wed, 21 Aug 2024 23:34:37 +0200
From: David Hildenbrand <david@...hat.com>
To: Barry Song <21cnbao@...il.com>, akpm@...ux-foundation.org,
 linux-mm@...ck.org
Cc: baolin.wang@...ux.alibaba.com, chrisl@...nel.org, hanchuanhua@...o.com,
 ioworker0@...il.com, kaleshsingh@...gle.com, kasong@...cent.com,
 linux-kernel@...r.kernel.org, ryan.roberts@....com, v-songbaohua@...o.com,
 ziy@...dia.com, yuanshuai@...o.com
Subject: Re: [PATCH v2 1/2] mm: collect the number of anon large folios

On 12.08.24 00:49, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
> 
> Anon large folios come from three places:
> 1. newly allocated large folios in page faults, which call
> folio_add_new_anon_rmap() for rmap;
> 2. large folios created when a larger folio is split into multiple
> lower-order large folios;
> 3. large folios newly allocated as the target of the migration of an
> existing large folio.
> 
> In all three cases above, we increase nr_anon by 1.
> 
> An anon large folio goes away either by being split or by being freed;
> in both cases, we decrease the count by 1.
> 
> Folios that have been added to the swap cache but have not yet received
> an anon mapping won't be counted. This is consistent with the AnonPages
> statistics in /proc/meminfo.

Thinking out loud, I wonder if we want to have something like that for 
any anon folios (including small ones).

Assume we longterm-pinned an anon folio and unmapped/zapped it. It would 
be quite interesting to see that these are actually anon pages still 
consuming memory. Same with memory leaks, when an anon folio doesn't get 
freed for some reason.

The whole "AnonPages" counter thingy is just confusing, it only counts 
what's currently mapped ... so we'd want something different.

But it's okay to start with large folios only; there we have a new 
interface without that legacy stuff :)

> 
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> ---
>   Documentation/admin-guide/mm/transhuge.rst |  5 +++++
>   include/linux/huge_mm.h                    | 15 +++++++++++++--
>   mm/huge_memory.c                           | 13 ++++++++++---
>   mm/migrate.c                               |  4 ++++
>   mm/page_alloc.c                            |  5 ++++-
>   mm/rmap.c                                  |  1 +
>   6 files changed, 37 insertions(+), 6 deletions(-)
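
For context, the mm/rmap.c hunk (a single added line, per the diffstat) is not
quoted below; presumably it bumps the new counter when a freshly faulted anon
folio gets its rmap added in folio_add_new_anon_rmap(), along these lines:

	/* sketch of the unquoted mm/rmap.c one-liner, not copied from the patch */
	mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);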
> 
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 058485daf186..9fdfb46e4560 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -527,6 +527,11 @@ split_deferred
>           it would free up some memory. Pages on split queue are going to
>           be split under memory pressure, if splitting is possible.
>   
> +nr_anon
> +       the number of anon huge pages we have in the whole system.

"transparent ..." otherwise people might confuse it with anon hugetlb 
"huge pages" ... :)

I briefly tried coming up with a better name than "nr_anon" but failed.
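
With the existing per-size mTHP stats layout, the new counter would presumably
be read from /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/nr_anon,
one value per supported anon folio order (e.g. hugepages-2048kB for PMD-sized
folios on x86-64).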


[...]

> @@ -447,6 +449,8 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>   	 */
>   	newfolio->index = folio->index;
>   	newfolio->mapping = folio->mapping;
> +	if (folio_test_anon(folio) && folio_test_large(folio))
> +		mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
>   	folio_ref_add(newfolio, nr); /* add cache reference */
>   	if (folio_test_swapbacked(folio)) {
>   		__folio_set_swapbacked(newfolio);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 84a7154fde93..382c364d3efa 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1084,8 +1084,11 @@ __always_inline bool free_pages_prepare(struct page *page,
>   			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
>   		}
>   	}
> -	if (PageMappingFlags(page))
> +	if (PageMappingFlags(page)) {
> +		if (PageAnon(page) && compound)
> +			mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);

I wonder if you could even drop the "compound" check. mod_mthp_stat 
would handle order == 0 just fine. Not that I think it makes much 
difference.
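
A minimal sketch of what that relies on, assuming mod_mthp_stat() keeps the
same bounds check as the existing count_mthp_stat() helper (the body below is
an assumption, not copied from the patch):

	/* include/linux/huge_mm.h -- hypothetical shape of the new helper */
	static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
	{
		/* order-0 (small) folios are silently ignored, so no "compound" guard is needed */
		if (order <= 0 || order > PMD_ORDER)
			return;

		this_cpu_add(mthp_stats.stats[order][item], delta);
	}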


Nothing else jumped out at me.

Acked-by: David Hildenbrand <david@...hat.com>

-- 
Cheers,

David / dhildenb

