Message-ID: <20181108083400.GQ27423@dhcp22.suse.cz>
Date: Thu, 8 Nov 2018 09:34:00 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Arun KS <arunks@...eaurora.org>
Cc: akpm@...ux-foundation.org, keescook@...omium.org,
khlebnikov@...dex-team.ru, minchan@...nel.org, vbabka@...e.cz,
osalvador@...e.de, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, getarunks@...il.com
Subject: Re: [PATCH v3 4/4] mm: Remove managed_page_count spinlock
On Thu 08-11-18 13:53:18, Arun KS wrote:
> Now totalram_pages and managed_pages are atomic variables. No need
> of managed_page_count spinlock.
As explained earlier, please add the motivation here. Feel free to reuse
the wording from http://lkml.kernel.org/r/20181107103630.GF2453@dhcp22.suse.cz
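For reference, the reason the lock can go away: once the counters are
atomic_long_t, every writer does a single atomic read-modify-write, so
concurrent updates cannot be lost. A minimal sketch of the accessor
pattern this series uses (names like _totalram_pages and
totalram_pages_add() follow the earlier patches in the series; treat
this as an illustration, not the exact kernel code):

    #include <linux/atomic.h>

    extern atomic_long_t _totalram_pages;

    /* Writer side: a single atomic read-modify-write, no lock needed. */
    static inline void totalram_pages_add(long count)
    {
            atomic_long_add(count, &_totalram_pages);
    }

    /* Reader side: a plain atomic load; a slightly stale value is fine. */
    static inline unsigned long totalram_pages(void)
    {
            return (unsigned long)atomic_long_read(&_totalram_pages);
    }

The trade-off is that managed_pages and totalram_pages are now updated
independently rather than under one lock, so a reader may briefly see
the two counters out of sync; that is acceptable because readers only
need an approximate value.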
>
> Signed-off-by: Arun KS <arunks@...eaurora.org>
> Reviewed-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
> Acked-by: Michal Hocko <mhocko@...e.com>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> include/linux/mmzone.h | 6 ------
> mm/page_alloc.c | 5 -----
> 2 files changed, 11 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index e73dc31..c71b4d9 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -428,12 +428,6 @@ struct zone {
> * Write access to present_pages at runtime should be protected by
> * mem_hotplug_begin/end(). Any reader who can't tolerate drift of
> * present_pages should use get_online_mems() to get a stable value.
> - *
> - * Read access to managed_pages should be safe because it's unsigned
> - * long. Write access to zone->managed_pages and totalram_pages are
> - * protected by managed_page_count_lock at runtime. Ideally only
> - * adjust_managed_page_count() should be used instead of directly
> - * touching zone->managed_pages and totalram_pages.
> */
> atomic_long_t managed_pages;
> unsigned long spanned_pages;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f8b64cc..26c5e14 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -122,9 +122,6 @@
> };
> EXPORT_SYMBOL(node_states);
>
> -/* Protect totalram_pages and zone->managed_pages */
> -static DEFINE_SPINLOCK(managed_page_count_lock);
> -
> atomic_long_t _totalram_pages __read_mostly;
> EXPORT_SYMBOL(_totalram_pages);
> unsigned long totalreserve_pages __read_mostly;
> @@ -7065,14 +7062,12 @@ static int __init cmdline_parse_movablecore(char *p)
>
> void adjust_managed_page_count(struct page *page, long count)
> {
> - spin_lock(&managed_page_count_lock);
> atomic_long_add(count, &page_zone(page)->managed_pages);
> totalram_pages_add(count);
> #ifdef CONFIG_HIGHMEM
> if (PageHighMem(page))
> totalhigh_pages_add(count);
> #endif
> - spin_unlock(&managed_page_count_lock);
> }
> EXPORT_SYMBOL(adjust_managed_page_count);
>
> --
> 1.9.1
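
With the lock gone, adjust_managed_page_count() is just two (or, with
highmem, three) independent atomic updates, and callers are unchanged.
As a usage illustration, here is a sketch modeled on the
free_reserved_page() pattern; release_reserved_page is a hypothetical
name and this is an approximation, not the exact kernel source:

    #include <linux/mm.h>

    /*
     * Hand a reserved page back to the buddy allocator and account
     * for it. adjust_managed_page_count() now performs plain
     * atomic_long_add()/totalram_pages_add() updates, no spinlock.
     */
    static void release_reserved_page(struct page *page)
    {
            ClearPageReserved(page);
            init_page_count(page);
            __free_page(page);
            adjust_managed_page_count(page, 1);
    }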
--
Michal Hocko
SUSE Labs