Message-ID: <20200621144621.GF6571@kernel.org>
Date:   Sun, 21 Jun 2020 17:46:21 +0300
From:   Mike Rapoport <rppt@...nel.org>
To:     David Hildenbrand <david@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...e.com>,
        Huang Ying <ying.huang@...el.com>,
        Minchan Kim <minchan@...nel.org>,
        Wei Yang <richard.weiyang@...il.com>
Subject: Re: [PATCH v1 1/2] mm: drop vm_total_pages

On Fri, Jun 19, 2020 at 03:24:09PM +0200, David Hildenbrand wrote:
> The global variable "vm_total_pages" is a relict from older days. There
> is only a single user that reads the variable - build_all_zonelists() -
> and the first thing it does is updating it. Use a local variable in
> build_all_zonelists() instead and drop the local variable.

Nit: "drop the local variable" above should read "drop the global variable".

> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Huang Ying <ying.huang@...el.com>
> Cc: Minchan Kim <minchan@...nel.org>
> Cc: Wei Yang <richard.weiyang@...il.com>
> Signed-off-by: David Hildenbrand <david@...hat.com>

Except for the nit above,

Reviewed-by: Mike Rapoport <rppt@...ux.ibm.com>
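
For readers following along, the function ends up looking roughly like this
with the local. This is a simplified sketch of the resulting code, not the
exact upstream body; it assumes the existing nr_free_pagecache_pages() call
and the mobility-grouping check stay where they are:

	/* Sketch only -- details of zonelist rebuilding trimmed. */
	void __ref build_all_zonelists(pg_data_t *pgdat)
	{
		unsigned long vm_total_pages;	/* local now, was a global in mm/vmscan.c */

		if (system_state == SYSTEM_BOOTING) {
			build_all_zonelists_init();
		} else {
			__build_all_zonelists(pgdat);
		}

		/*
		 * The former global was only ever read right here, immediately
		 * after being refreshed, so a local is sufficient and nothing
		 * else can observe a stale value.
		 */
		vm_total_pages = nr_free_pagecache_pages();
		if (vm_total_pages < (pageblock_nr_pages * MIGRATE_TYPES))
			page_group_by_mobility_disabled = 1;
		else
			page_group_by_mobility_disabled = 0;

		pr_info("Built %u zonelists, mobility grouping %s.  Total pages: %ld\n",
			nr_online_nodes,
			page_group_by_mobility_disabled ? "off" : "on",
			vm_total_pages);
	}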

> ---
>  include/linux/swap.h | 1 -
>  mm/memory_hotplug.c  | 3 ---
>  mm/page-writeback.c  | 6 ++----
>  mm/page_alloc.c      | 2 ++
>  mm/vmscan.c          | 5 -----
>  5 files changed, 4 insertions(+), 13 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 4c5974bb9ba94..124261acd5d0a 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -371,7 +371,6 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
>  extern unsigned long shrink_all_memory(unsigned long nr_pages);
>  extern int vm_swappiness;
>  extern int remove_mapping(struct address_space *mapping, struct page *page);
> -extern unsigned long vm_total_pages;
>  
>  extern unsigned long reclaim_pages(struct list_head *page_list);
>  #ifdef CONFIG_NUMA
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9b34e03e730a4..d682781cce48d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -835,8 +835,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>  	kswapd_run(nid);
>  	kcompactd_run(nid);
>  
> -	vm_total_pages = nr_free_pagecache_pages();
> -
>  	writeback_set_ratelimit();
>  
>  	memory_notify(MEM_ONLINE, &arg);
> @@ -1586,7 +1584,6 @@ static int __ref __offline_pages(unsigned long start_pfn,
>  		kcompactd_stop(node);
>  	}
>  
> -	vm_total_pages = nr_free_pagecache_pages();
>  	writeback_set_ratelimit();
>  
>  	memory_notify(MEM_OFFLINE, &arg);
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 28b3e7a675657..4e4ddd67b71e5 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2076,13 +2076,11 @@ static int page_writeback_cpu_online(unsigned int cpu)
>   * Called early on to tune the page writeback dirty limits.
>   *
>   * We used to scale dirty pages according to how total memory
> - * related to pages that could be allocated for buffers (by
> - * comparing nr_free_buffer_pages() to vm_total_pages.
> + * related to pages that could be allocated for buffers.
>   *
>   * However, that was when we used "dirty_ratio" to scale with
>   * all memory, and we don't do that any more. "dirty_ratio"
> - * is now applied to total non-HIGHPAGE memory (by subtracting
> - * totalhigh_pages from vm_total_pages), and as such we can't
> + * is now applied to total non-HIGHPAGE memory, and as such we can't
>   * get into the old insane situation any more where we had
>   * large amounts of dirty pages compared to a small amount of
>   * non-HIGHMEM memory.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0c435b2ed665c..7b0dde69748c1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5903,6 +5903,8 @@ build_all_zonelists_init(void)
>   */
>  void __ref build_all_zonelists(pg_data_t *pgdat)
>  {
> +	unsigned long vm_total_pages;
> +
>  	if (system_state == SYSTEM_BOOTING) {
>  		build_all_zonelists_init();
>  	} else {
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b6d84326bdf2d..0010859747df2 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -170,11 +170,6 @@ struct scan_control {
>   * From 0 .. 200.  Higher means more swappy.
>   */
>  int vm_swappiness = 60;
> -/*
> - * The total number of pages which are beyond the high watermark within all
> - * zones.
> - */
> -unsigned long vm_total_pages;
>  
>  static void set_task_reclaim_state(struct task_struct *task,
>  				   struct reclaim_state *rs)
> -- 
> 2.26.2
> 
> 

-- 
Sincerely yours,
Mike.
