Date: Wed, 14 Feb 2024 10:12:02 +0100
From: David Hildenbrand <david@...hat.com>
To: Zi Yan <ziy@...dia.com>, "Pankaj Raghav (Samsung)"
 <kernel@...kajraghav.com>, linux-mm@...ck.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
 Yang Shi <shy828301@...il.com>, Yu Zhao <yuzhao@...gle.com>,
 "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
 Ryan Roberts <ryan.roberts@....com>, Michal Koutný
 <mkoutny@...e.com>, Roman Gushchin <roman.gushchin@...ux.dev>,
 Zach O'Keefe <zokeefe@...gle.com>, Hugh Dickins <hughd@...gle.com>,
 Mcgrof Chamberlain <mcgrof@...nel.org>,
 Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
 cgroups@...r.kernel.org, linux-fsdevel@...r.kernel.org,
 linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v4 1/7] mm/memcg: use order instead of nr in
 split_page_memcg()

On 13.02.24 22:55, Zi Yan wrote:
> From: Zi Yan <ziy@...dia.com>
> 
> We do not have non-power-of-two pages; using nr is error-prone when nr
> is not a power of two. Use the page order instead.
> 
> Signed-off-by: Zi Yan <ziy@...dia.com>
> ---
>   include/linux/memcontrol.h | 4 ++--
>   mm/huge_memory.c           | 3 ++-
>   mm/memcontrol.c            | 3 ++-
>   mm/page_alloc.c            | 4 ++--
>   4 files changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 4e4caeaea404..173bbb53c1ec 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1163,7 +1163,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
>   	rcu_read_unlock();
>   }
>   
> -void split_page_memcg(struct page *head, unsigned int nr);
> +void split_page_memcg(struct page *head, int order);
>   
>   unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
>   						gfp_t gfp_mask,
> @@ -1621,7 +1621,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
>   {
>   }
>   
> -static inline void split_page_memcg(struct page *head, unsigned int nr)
> +static inline void split_page_memcg(struct page *head, int order)
>   {
>   }
>   
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 016e20bd813e..0cd5fba0923c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2877,9 +2877,10 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>   	unsigned long offset = 0;
>   	unsigned int nr = thp_nr_pages(head);
>   	int i, nr_dropped = 0;
> +	int order = folio_order(folio);

You could calculate "nr" from "order" here, removing the use of 
thp_nr_pages().

>   
>   	/* complete memcg works before add pages to LRU */
> -	split_page_memcg(head, nr);
> +	split_page_memcg(head, order);
>   
>   	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {

Acked-by: David Hildenbrand <david@...hat.com>

-- 
Cheers,

David / dhildenb

