Message-ID: <20210301221545.GV2723601@casper.infradead.org>
Date: Mon, 1 Mar 2021 22:15:45 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Zi Yan <ziy@...dia.com>
Cc: linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 03/25] mm/vmstat: Add folio stat wrappers
On Mon, Mar 01, 2021 at 04:17:39PM -0500, Zi Yan wrote:
> On 28 Jan 2021, at 2:03, Matthew Wilcox (Oracle) wrote:
> > Allow page counters to be more readily modified by callers which have
> > a folio. Name these wrappers with 'stat' instead of 'state' as requested
>
> Shouldn’t we change the stats with folio_nr_pages(folio) here? And all
> changes below. Otherwise one folio is always counted as a single page.
That's a good point.  Looking through the changes in my current folio
tree (which doesn't get as far as the thp tree did; i.e. it doesn't yet
allocate multi-page folios, so it hasn't been tested with anything larger
than a single page), the callers are ...
@@ -2698,3 +2698,3 @@ int clear_page_dirty_for_io(struct page *page)
- if (TestClearPageDirty(page)) {
- dec_lruvec_page_state(page, NR_FILE_DIRTY);
- dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
+ if (TestClearFolioDirty(folio)) {
+ dec_lruvec_folio_stat(folio, NR_FILE_DIRTY);
+ dec_zone_folio_stat(folio, NR_ZONE_WRITE_PENDING);
@@ -2432,3 +2433,3 @@ void account_page_dirtied(struct page *page, struct address_space *mapping)
- __inc_lruvec_page_state(page, NR_FILE_DIRTY);
- __inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
- __inc_node_page_state(page, NR_DIRTIED);
+ __inc_lruvec_folio_stat(folio, NR_FILE_DIRTY);
+ __inc_zone_folio_stat(folio, NR_ZONE_WRITE_PENDING);
+ __inc_node_folio_stat(folio, NR_DIRTIED);
@@ -891 +890 @@ noinline int __add_to_page_cache_locked(struct page *page,
- __inc_lruvec_page_state(page, NR_FILE_PAGES);
+ __inc_lruvec_folio_stat(folio, NR_FILE_PAGES);
@@ -2759,2 +2759,2 @@ int test_clear_page_writeback(struct page *page)
- dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
- inc_node_page_state(page, NR_WRITTEN);
+ dec_zone_folio_stat(folio, NR_ZONE_WRITE_PENDING);
+ inc_node_folio_stat(folio, NR_WRITTEN);
I think it's clear from this that I haven't found all the places
that I need to change yet ;-)
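For reference, here is a minimal sketch of the kind of wrapper being
suggested (illustrative only, not the patch itself: it assumes the lruvec
wrappers can be built on mod_lruvec_page_state() and that
folio_nr_pages() is available; the zone/node wrappers would follow the
same pattern):

	/* Sketch: charge all of the folio's pages, not just one. */
	static inline void inc_lruvec_folio_stat(struct folio *folio,
						 enum node_stat_item idx)
	{
		mod_lruvec_page_state(&folio->page, idx,
				      folio_nr_pages(folio));
	}

	static inline void dec_lruvec_folio_stat(struct folio *folio,
						 enum node_stat_item idx)
	{
		mod_lruvec_page_state(&folio->page, idx,
				      -folio_nr_pages(folio));
	}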
Looking at the places I did change in the thp tree, there are changes
like this:
@@ -860,27 +864,30 @@ noinline int __add_to_page_cache_locked(struct page *page,
- if (!huge)
- __inc_lruvec_page_state(page, NR_FILE_PAGES);
+ if (!huge) {
+ __mod_lruvec_page_state(page, NR_FILE_PAGES, nr);
+ if (nr > 1)
+ __mod_node_page_state(page_pgdat(page),
+ NR_FILE_THPS, nr);
+ }
... but I never did make some of the changes that the hunk above implies
are needed.  So the thp tree probably had all kinds of bad statistics
that I never noticed.
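Translated into the folio wrappers, that hunk might end up looking
something like this (a rough sketch only; it assumes the inc wrapper
charges folio_nr_pages() internally as in the sketch above, and the
NR_FILE_THPS test probably needs more thought for arbitrary-order
folios):

	if (!huge) {
		long nr = folio_nr_pages(folio);

		/* the wrapper accounts all nr pages of the folio */
		__inc_lruvec_folio_stat(folio, NR_FILE_PAGES);
		if (nr > 1)
			__mod_node_page_state(page_pgdat(&folio->page),
					      NR_FILE_THPS, nr);
	}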
So ... at least some of the users are definitely going to want to
cache the 'nr_pages' and use it multiple times, including calling
__mod_node_folio_stat(), but others should do what you suggested.
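For example, account_page_dirtied() is one of those: even with the
wrappers doing the folio_nr_pages() scaling for the vmstat counters, it
still needs the count for the byte and per-task accounting (sketch only,
using the hypothetical wrapper shapes from above):

	long nr = folio_nr_pages(folio);

	/* wrappers charge all nr pages internally */
	__inc_lruvec_folio_stat(folio, NR_FILE_DIRTY);
	__inc_zone_folio_stat(folio, NR_ZONE_WRITE_PENDING);
	__inc_node_folio_stat(folio, NR_DIRTIED);
	/* nr is still needed for the non-vmstat accounting */
	task_io_account_write(nr * PAGE_SIZE);
	current->nr_dirtied += nr;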
Thanks! I'll make that change.