Message-ID: <1cc1e3d1-4646-436c-92cb-f8584406bf88@suse.cz>
Date: Tue, 9 Apr 2024 08:23:20 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Mel Gorman <mgorman@...hsingularity.net>, Zi Yan <ziy@...dia.com>,
 "Huang, Ying" <ying.huang@...el.com>, David Hildenbrand <david@...hat.com>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/10] mm: page_alloc: consolidate free page accounting

On 4/8/24 4:23 PM, Johannes Weiner wrote:
> On Mon, Apr 08, 2024 at 09:38:20AM +0200, Vlastimil Babka wrote:
>> On 4/7/24 12:19 PM, Baolin Wang wrote:
>> > On 2024/3/21 02:02, Johannes Weiner wrote:
>> >>   
>> >> +	account_freepages(page, zone, 1 << order, migratetype);
>> >> +
>> >>   	while (order < MAX_PAGE_ORDER) {
>> >> -		if (compaction_capture(capc, page, order, migratetype)) {
>> >> -			__mod_zone_freepage_state(zone, -(1 << order),
>> >> -								migratetype);
>> >> +		int buddy_mt = migratetype;
>> >> +
>> >> +		if (compaction_capture(capc, page, order, migratetype))
>> >>   			return;
>> >> -		}
>> > 
>> > IIUC, if the released page is captured by compaction, then the
>> > statistics for free pages should be correspondingly decreased;
>> > otherwise there is a slight regression in my thpcompact benchmark.
>> > 
>> > thpcompact Percentage Faults Huge
>> >                            k6.9-rc2-base        base + patch10 + 2 fixes	
>> > Percentage huge-1        78.18 (   0.00%)       71.92 (  -8.01%)
>> > Percentage huge-3        86.70 (   0.00%)       86.07 (  -0.73%)
>> > Percentage huge-5        90.26 (   0.00%)       78.02 ( -13.57%)
>> > Percentage huge-7        92.34 (   0.00%)       78.67 ( -14.81%)
>> > Percentage huge-12       91.18 (   0.00%)       81.04 ( -11.12%)
>> > Percentage huge-18       89.00 (   0.00%)       79.57 ( -10.60%)
>> > Percentage huge-24       90.52 (   0.00%)       80.07 ( -11.54%)
>> > Percentage huge-30       94.44 (   0.00%)       96.28 (   1.95%)
>> > Percentage huge-32       93.09 (   0.00%)       99.39 (   6.77%)
>> > 
>> > I added the fix below on top of your fix 2, and with it the thpcompact
>> > percentages look good. What do you think of the fix?
>> 
>> Yeah, well spotted again, thanks. "slight regression" is an understatement:
>> this affects not just "statistics" but the very important counter
>> NR_FREE_PAGES, which IIUC would eventually become larger than reality,
>> making the watermark checks pass falsely and resulting in depleted
>> reserves, etc. I'm actually wondering why we're not seeing -next failures
>> already (or maybe I just haven't noticed).
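
To make that failure mode concrete, here is a toy userspace model (not kernel
code; all names and numbers below are made up). The watermark check consults
the free-page counter, so once leaked captures inflate the counter past
reality, allocations keep passing the check even though the real free memory
is already below the watermark:

/*
 * Toy model of an inflated free-page counter defeating a watermark check.
 * Userspace C, hypothetical values; not the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

/* Simplified shape of a zone watermark check: does the *counted* free
 * memory, minus this request, stay above the watermark? */
static bool watermark_ok(long counted_free, int order, long watermark)
{
	return counted_free - (1L << order) > watermark;
}

int main(void)
{
	long actual_free  = 200;	/* pages really on the free lists */
	long counted_free = 200;	/* stand-in for NR_FREE_PAGES */
	long watermark    = 256;

	/*
	 * Three captured order-9 frees: the pages go straight to the
	 * compaction caller, so actual_free does not grow, but the buggy
	 * path still added 1 << 9 to the counter each time.
	 */
	for (int i = 0; i < 3; i++)
		counted_free += 1 << 9;

	printf("counter says %ld, reality is %ld\n", counted_free, actual_free);
	printf("check against counter: %s\n",
	       watermark_ok(counted_free, 0, watermark) ? "pass" : "fail");
	printf("check against reality: %s\n",
	       watermark_ok(actual_free, 0, watermark) ? "pass" : "fail");
	return 0;
}
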
> 
> Good catch indeed.
> 
> Trying to understand why I didn't notice this during testing, and I
> think it's because I had order-10 pageblocks in my config. There is
> this in compaction_capture():
> 
> 	if (order < pageblock_order && migratetype == MIGRATE_MOVABLE)
> 		return false;
> 
> Most compaction is for order-9 THPs on movable blocks, so I didn't get
> enough capturing in practice for that leak to be noticeable.
> 
> In earlier versions of the patches I had more aggressive capturing, but
> also did the accounting in add_page_to/del_page_from_freelist(), so
> capturing only stole what was already accounted as being off the list.
> 
> With the __add/__del and the consolidated account_freepages()
> optimization, compaction_capture() needs explicit accounting again.
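
A minimal sketch of the two accounting placements as a toy userspace model
(made-up names, not the kernel code), to show why the consolidated scheme
needs the explicit decrement on capture:

/*
 * With per-freelist accounting, a captured page was never counted as free,
 * so there was nothing to undo.  With the up-front account_freepages() in
 * the free path, the capture branch must subtract the pages again or the
 * counter leaks upward.
 */
#include <stdbool.h>
#include <stdio.h>

static long nr_free;	/* stand-in for NR_FREE_PAGES */

/* Old scheme: the list helper bumped the counter, so a page captured
 * before ever reaching a free list left the counter untouched. */
static void free_one_page_old(int nr, bool captured)
{
	if (captured)
		return;
	nr_free += nr;
}

/* New scheme before the fix: account up front, forget the capture case. */
static void free_one_page_buggy(int nr, bool captured)
{
	nr_free += nr;
	if (captured)
		return;		/* nr pages leak into the counter */
}

/* New scheme with the fix quoted below: undo the accounting on capture. */
static void free_one_page_fixed(int nr, bool captured)
{
	nr_free += nr;
	if (captured) {
		nr_free -= nr;
		return;
	}
}

int main(void)
{
	nr_free = 0; free_one_page_old(1 << 9, true);
	printf("old scheme after capture:   %ld\n", nr_free);	/* 0 */

	nr_free = 0; free_one_page_buggy(1 << 9, true);
	printf("buggy scheme after capture: %ld\n", nr_free);	/* 512 */

	nr_free = 0; free_one_page_fixed(1 << 9, true);
	printf("fixed scheme after capture: %ld\n", nr_free);	/* 0 */
	return 0;
}
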
> 
>> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> > index 8330c5c2de6b..2facf844ef84 100644
>> > --- a/mm/page_alloc.c
>> > +++ b/mm/page_alloc.c
>> > @@ -805,8 +805,10 @@ static inline void __free_one_page(struct page *page,
>> >          while (order < MAX_PAGE_ORDER) {
>> >                  int buddy_mt = migratetype;
>> > 
>> > -               if (compaction_capture(capc, page, order, migratetype))
>> > +               if (compaction_capture(capc, page, order, migratetype)) {
>> > +                       account_freepages(zone, -(1 << order), migratetype);
>> >                          return;
>> > +               }
>> > 
>> >                  buddy = find_buddy_page_pfn(page, pfn, order, &buddy_pfn);
>> >                  if (!buddy)
>> > 
>> > With my fix, the THP percentage looks better:
>> >                        k6.9-rc2-base          base + patch10 + 2 fixes + my fix
>> > Percentage huge-1        78.18 (   0.00%)       82.83 (   5.94%)
>> > Percentage huge-3        86.70 (   0.00%)       93.47 (   7.81%)
>> > Percentage huge-5        90.26 (   0.00%)       94.73 (   4.95%)
>> > Percentage huge-7        92.34 (   0.00%)       95.22 (   3.12%)
>> > Percentage huge-12       91.18 (   0.00%)       92.40 (   1.34%)
>> > Percentage huge-18       89.00 (   0.00%)       85.39 (  -4.06%)
>> > Percentage huge-24       90.52 (   0.00%)       94.70 (   4.61%)
>> > Percentage huge-30       94.44 (   0.00%)       97.00 (   2.71%)
>> > Percentage huge-32       93.09 (   0.00%)       92.87 (  -0.24%)
> 
> Acked-by: Johannes Weiner <hannes@...xchg.org>

Acked-by: Vlastimil Babka <vbabka@...e.cz>

> With fixed indentation:

Maybe Baolin could resend the two finalized fixups in a more ready-to-pick-up form?

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 70f82635f650..96815e3c22f2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -805,8 +805,10 @@ static inline void __free_one_page(struct page *page,
>  	while (order < MAX_PAGE_ORDER) {
>  		int buddy_mt = migratetype;
>  
> -		if (compaction_capture(capc, page, order, migratetype))
> +		if (compaction_capture(capc, page, order, migratetype)) {
> +			account_freepages(zone, -(1 << order), migratetype);
>  			return;
> +		}
>  
>  		buddy = find_buddy_page_pfn(page, pfn, order, &buddy_pfn);
>  		if (!buddy)

