Date:   Fri, 22 May 2020 09:33:35 -0400
From:   Qian Cai <cai@....pw>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     linux-mm@...ck.org, Rik van Riel <riel@...riel.com>,
        Minchan Kim <minchan.kim@...il.com>,
        Michal Hocko <mhocko@...e.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH 09/14] mm: deactivations shouldn't bias the LRU balance

On Wed, May 20, 2020 at 07:25:20PM -0400, Johannes Weiner wrote:
> Operations like MADV_FREE, FADV_DONTNEED etc. currently move any
> affected active pages to the inactive list to accelerate their reclaim
> (good) but also steer page reclaim toward that LRU type, or away from
> the other (bad).
> 
> The reason why this is undesirable is that such operations are not
> part of the regular page aging cycle, and rather a fluke that doesn't
> say much about the remaining pages on that list; they might all be in
> heavy use, and once the chunk of easy victims has been purged, the VM
> continues to apply elevated pressure on those remaining hot pages. The
> other LRU, meanwhile, might have easily reclaimable pages, and there
> was never a need to steer away from it in the first place.
> 
> As the previous patch outlined, we should focus on recording actually
> observed cost to steer the balance rather than speculating about the
> potential value of one LRU list over the other. In that spirit, leave
> explicitly deactivated pages to the LRU algorithm to pick up, and let
> rotations decide which list is the easiest to reclaim.
> 
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> Acked-by: Minchan Kim <minchan@...nel.org>
> Acked-by: Michal Hocko <mhocko@...e.com>
> ---
>  mm/swap.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index 5d62c5a0c651..d7912bfb597f 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -515,14 +515,12 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
>  
>  	if (active)
>  		__count_vm_event(PGDEACTIVATE);
> -	lru_note_cost(lruvec, !file, hpage_nr_pages(page));
>  }
>
[]

mm/swap.c: In function 'lru_deactivate_file_fn':
mm/swap.c:504:11: warning: variable 'file' set but not used
[-Wunused-but-set-variable]
  int lru, file;
           ^~~~  

This?

diff --git a/mm/swap.c b/mm/swap.c
index fedf5847dfdb..9c38c1b545af 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -501,7 +501,7 @@ void lru_cache_add_active_or_unevictable(struct page *page,
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 			      void *arg)
 {
-	int lru, file;
+	int lru;
 	bool active;
 
 	if (!PageLRU(page))
@@ -515,7 +515,6 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		return;
 
 	active = PageActive(page);
-	file = page_is_file_lru(page);
 	lru = page_lru_base_type(page);
 
 	del_page_from_lru_list(page, lruvec, lru + active);
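
For readers less familiar with this warning class, here is a minimal
standalone sketch (hypothetical code, not taken from mm/swap.c) that
triggers the same -Wunused-but-set-variable diagnostic under gcc with
-Wall -Wextra; removing the dead variable and its assignment, as the
diff above does for 'file', silences it:

/*
 * classify() is a made-up example: 'file' is assigned but its value is
 * never read afterwards, which is exactly what the compiler reports
 * above for lru_deactivate_file_fn().
 */
static int classify(int flags)
{
	int lru, file;

	file = flags & 1;	/* set but never used -> warning */
	lru = flags >> 1;	/* used below, no warning */

	return lru;
}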
