Message-ID: <YzP+aZsR6Lov7zi6@kernel.org>
Date: Wed, 28 Sep 2022 10:57:29 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Vernon Yang <vernon2gm@...il.com>
Cc: corbet@....net, akpm@...ux-foundation.org, bobwxc@...il.cn,
hughd@...gle.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Documentation/mm: modify page_referenced to
folio_referenced
On Mon, Sep 26, 2022 at 11:20:32PM +0800, Vernon Yang wrote:
> Since commit b3ac04132c4b ("mm/rmap: Turn page_referenced() into
> folio_referenced()") renamed the page_referenced() function, update the
> documentation to use the correct name.
>
> Signed-off-by: Vernon Yang <vernon2gm@...il.com>
Reviewed-by: Mike Rapoport <rppt@...ux.ibm.com>
> ---
> Documentation/mm/unevictable-lru.rst | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
> index b280367d6a44..4a0e158aa9ce 100644
> --- a/Documentation/mm/unevictable-lru.rst
> +++ b/Documentation/mm/unevictable-lru.rst
> @@ -197,7 +197,7 @@ unevictable list for the memory cgroup and node being scanned.
> There may be situations where a page is mapped into a VM_LOCKED VMA, but the
> page is not marked as PG_mlocked. Such pages will make it all the way to
> shrink_active_list() or shrink_page_list() where they will be detected when
> -vmscan walks the reverse map in page_referenced() or try_to_unmap(). The page
> +vmscan walks the reverse map in folio_referenced() or try_to_unmap(). The page
> is culled to the unevictable list when it is released by the shrinker.
>
> To "cull" an unevictable page, vmscan simply puts the page back on the LRU list
> @@ -267,7 +267,7 @@ the LRU. Such pages can be "noticed" by memory management in several places:
> (4) in the fault path and when a VM_LOCKED stack segment is expanded; or
>
> (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
> - reclaim a page in a VM_LOCKED VMA by page_referenced() or try_to_unmap().
> + reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().
>
> mlocked pages become unlocked and rescued from the unevictable list when:
>
> @@ -547,7 +547,7 @@ vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
> unevictable pages found on the inactive lists to the appropriate memory cgroup
> and node unevictable list.
>
> -rmap's page_referenced_one(), called via vmscan's shrink_active_list() or
> +rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
> shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
> check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_page()
> to correct them. Such pages are culled to the unevictable list when released
> --
> 2.25.1
>
--
Sincerely yours,
Mike.