Message-ID: <6b0447d0-30c7-4432-a4f3-97e2d27e9e3b@gmail.com>
Date:   Tue, 12 Sep 2023 09:49:04 +0800
From:   Vern Hao <haoxing990@...il.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     mhocko@...nel.org, roman.gushchin@...ux.dev, shakeelb@...gle.com,
        akpm@...ux-foundation.org, cgroups@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: memcg: add THP swap out info for anonymous reclaim


On 2023/9/12 00:08, Johannes Weiner wrote:
> On Sat, Sep 09, 2023 at 11:52:41PM +0800, Xin Hao wrote:
>> At present, we support a per-memcg reclaim strategy, but we do not
>> know how many transparent huge pages are being reclaimed. As we know,
>> transparent huge pages need to be split before they can be reclaimed,
>> which can become a performance bottleneck. For example, when two
>> memcgs (A & B) are reclaiming anonymous pages at the same time and
>> memcg 'A' is reclaiming a large number of transparent huge pages, we
>> can more easily determine that the bottleneck is caused by memcg 'A'.
>> Therefore, to make such problems easier to analyze, add per-memcg THP
>> swap-out counters.
>>
>> Signed-off-by: Xin Hao <vernhao@...cent.com>
> That sounds reasonable. A few comments below:
>
>> @@ -4131,6 +4133,10 @@ static const unsigned int memcg1_events[] = {
>>   	PGPGOUT,
>>   	PGFAULT,
>>   	PGMAJFAULT,
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +	THP_SWPOUT,
>> +	THP_SWPOUT_FALLBACK,
>> +#endif
>>   };
> Cgroup1 is maintenance-only, please drop this hunk.
Will remove it in the next version, thanks.
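
For the cgroup v2 side (not visible in the quoted hunks), my understanding
is that the two events would instead go into the event table walked when
formatting memory.stat. A rough sketch, assuming that table is
memcg_vm_event_stat[] in mm/memcontrol.c, not the exact hunk from the patch:

/* mm/memcontrol.c -- sketch only; assumes memcg_vm_event_stat[] is the
 * table used for the cgroup v2 memory.stat output */
static const unsigned int memcg_vm_event_stat[] = {
	/* existing entries elided */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	THP_SWPOUT,		/* "thp_swpout" in memory.stat */
	THP_SWPOUT_FALLBACK,	/* "thp_swpout_fallback" in memory.stat */
#endif
};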
>
>>   static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
>> diff --git a/mm/page_io.c b/mm/page_io.c
>> index fe4c21af23f2..008ada2e024a 100644
>> --- a/mm/page_io.c
>> +++ b/mm/page_io.c
>> @@ -208,8 +208,10 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
>>   static inline void count_swpout_vm_event(struct folio *folio)
>>   {
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -	if (unlikely(folio_test_pmd_mappable(folio)))
>> +	if (unlikely(folio_test_pmd_mappable(folio))) {
>> +		count_memcg_events(folio_memcg(folio), THP_SWPOUT, 1);
> count_memcg_folio_events()
Done.
>
>>   		count_vm_event(THP_SWPOUT);
>> +	}
>>   #endif
>>   	count_vm_events(PSWPOUT, folio_nr_pages(folio));
>>   }
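
To make that concrete, this is roughly what the helper would look like with
count_memcg_folio_events() in the next version -- a sketch, not the hunk as
posted:

/* mm/page_io.c -- sketch; assumes count_memcg_folio_events(folio, idx, nr) */
static inline void count_swpout_vm_event(struct folio *folio)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	if (unlikely(folio_test_pmd_mappable(folio))) {
		/* also charge the THP swapout to the folio's memcg */
		count_memcg_folio_events(folio, THP_SWPOUT, 1);
		count_vm_event(THP_SWPOUT);
	}
#endif
	count_vm_events(PSWPOUT, folio_nr_pages(folio));
}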
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index ea57a43ebd6b..29a82b72345a 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1928,6 +1928,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>   								folio_list))
>>   						goto activate_locked;
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +					count_memcg_events(folio_memcg(folio),
>> +							   THP_SWPOUT_FALLBACK, 1);
> count_memcg_folio_events()

Done.

thanks.
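
And likewise for the fallback path in shrink_folio_list(); the surrounding
context below is approximate, only the counting calls are the point of the
sketch:

	/* mm/vmscan.c -- sketch; context lines are approximate */
	if (split_folio_to_list(folio, folio_list))
		goto activate_locked;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* charge the split fallback to the folio's memcg as well */
	count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
	count_vm_event(THP_SWPOUT_FALLBACK);
#endif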
