Date:   Wed, 2 Jun 2021 20:37:02 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Oscar Salvador <osalvador@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Dave Hansen <dave.hansen@...ux.intel.com>,
        Michal Hocko <mhocko@...nel.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Pavel Tatashin <pasha.tatashin@...een.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] mm,page_alloc: Use {get,put}_online_mems() to get
 stable zone's values

On 02.06.21 11:14, Oscar Salvador wrote:
> Currently, page_outside_zone_boundaries() takes zone's span_seqlock
> when reading zone_start_pfn and spanned_pages, so that those values
> are stable against memory hotplug operations.
> move_pfn_range_to_zone() and remove_pfn_range_from_zone(), which are
> the functions that can change zone's values, are serialized by
> mem_hotplug_lock via mem_hotplug_{begin,done}(), so we can just use
> {get,put}_online_mems() on the reader side.
> 
> This will allow us to remove span_seqlock entirely, as no users
> will remain after this series.
> 
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
> ---
>   mm/page_alloc.c | 14 ++++++--------
>   1 file changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index aaa1655cf682..296cb00802b4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -582,17 +582,15 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
>   static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
>   {
>   	int ret = 0;
> -	unsigned seq;
>   	unsigned long pfn = page_to_pfn(page);
>   	unsigned long sp, start_pfn;
>   
> -	do {
> -		seq = zone_span_seqbegin(zone);
> -		start_pfn = zone->zone_start_pfn;
> -		sp = zone->spanned_pages;
> -		if (!zone_spans_pfn(zone, pfn))
> -			ret = 1;
> -	} while (zone_span_seqretry(zone, seq));
> +	get_online_mems();
> +	start_pfn = zone->zone_start_pfn;
> +	sp = zone->spanned_pages;
> +	if (!zone_spans_pfn(zone, pfn))
> +		ret = 1;
> +	put_online_mems();
>   
>   	if (ret)
>   		pr_err("page 0x%lx outside node %d zone %s [ 0x%lx - 0x%lx ]\n",
> 

It's worth noting that memory offlining might hold the memory hotplug 
lock for quite some time. It's not a lightweight lock, compared to the 
seqlock we have here.

I can see that page_outside_zone_boundaries() is only called from 
bad_range(), and bad_range() only under VM_BUG_ON_PAGE(). Still, 
are you sure it's even valid to block e.g. __free_one_page() and 
others for a potentially very long time? And I think we might call 
it from atomic context, where we cannot sleep at all.

Long story short, using get_online_mems() looks wrong.

Maybe the current lightweight reader/writer protection does serve a purpose?

-- 
Thanks,

David / dhildenb
