Date:   Wed, 19 Aug 2020 15:19:53 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Wei Yang <richard.weiyang@...ux.alibaba.com>,
        Baoquan He <bhe@...hat.com>,
        Pankaj Gupta <pankaj.gupta.linux@...il.com>,
        Oscar Salvador <osalvador@...e.de>,
        Charan Teja Reddy <charante@...eaurora.org>
Subject: Re: [PATCH v1 11/11] mm/memory_hotplug: mark pageblocks
 MIGRATE_ISOLATE while onlining memory

On 19.08.20 15:16, Michal Hocko wrote:
> On Wed 19-08-20 12:11:57, David Hildenbrand wrote:
>> Currently, it can happen that pages are allocated (and freed) via the buddy
>> before we have finished basic memory onlining.
>>
>> For example, pages are exposed to the buddy and can be allocated before
>> we actually mark the sections online. Allocated pages could suddenly
>> fail pfn_to_online_page() checks. We had similar issues with pcp
>> handling, when pages are allocated+freed before we reach
>> zone_pcp_update() in online_pages() [1].
>>
>> Instead, mark all pageblocks MIGRATE_ISOLATE, such that allocations are
>> impossible. Once done with the heavy lifting, use
>> undo_isolate_page_range() to move the pages to the MIGRATE_MOVABLE
>> freelist, marking them ready for allocation. Similar to offline_pages(),
>> we have to manually adjust zone->nr_isolate_pageblock.
>>
>> [1] https://lkml.kernel.org/r/1597150703-19003-1-git-send-email-charante@codeaurora.org
>>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: Michal Hocko <mhocko@...e.com>
>> Cc: Wei Yang <richard.weiyang@...ux.alibaba.com>
>> Cc: Baoquan He <bhe@...hat.com>
>> Cc: Pankaj Gupta <pankaj.gupta.linux@...il.com>
>> Cc: Oscar Salvador <osalvador@...e.de>
>> Cc: Charan Teja Reddy <charante@...eaurora.org>
>> Signed-off-by: David Hildenbrand <david@...hat.com>
> 
> Acked-by: Michal Hocko <mhocko@...e.com>
> 
> Yes, this looks very sensible; we should have done that from the
> beginning. I just have one minor comment below:
>> @@ -816,6 +816,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>>  	if (ret)
>>  		goto failed_addition;
>>  
>> +	/*
>> +	 * Fix up the number of isolated pageblocks before marking the sections
>> +	 * online, such that undo_isolate_page_range() works correctly.
>> +	 */
>> +	spin_lock_irqsave(&zone->lock, flags);
>> +	zone->nr_isolate_pageblock += nr_pages / pageblock_nr_pages;
>> +	spin_unlock_irqrestore(&zone->lock, flags);
>> +
> 
> I am not entirely happy about this. I am wondering whether it would make
> more sense to keep the counter in sync already in memmap_init_zone. Sure,
> we add a branch to the boot-time initialization - and it always fails
> there - but the code would be cleaner and we wouldn't have to do tricks
> like this in the caller(s).

I had that in mind initially. The issue is that we would have to fix things
up again in case onlining fails, which I consider even uglier. Also:

1. It's the complete reverse of the offlining path now.
2. Pageblock flags are essentially stale unless the section is online;
my approach moves the handling to the point where nothing else will go
wrong and we are just about to mark the sections online. That looks a
little cleaner to me.

Unless there are strong opinions, I'd prefer to keep it like this.
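
To make the ordering concrete, the resulting online_pages() flow would
roughly look like this (a simplified sketch of the idea, not the exact
diff; details and signatures may differ):

	/*
	 * Associate the pfn range with the zone, with all pageblocks
	 * isolated, so the buddy cannot hand out any of these pages yet.
	 */
	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_ISOLATE);

	/*
	 * Account the isolated pageblocks before the sections are marked
	 * online, so undo_isolate_page_range() sees a consistent count.
	 */
	spin_lock_irqsave(&zone->lock, flags);
	zone->nr_isolate_pageblock += nr_pages / pageblock_nr_pages;
	spin_unlock_irqrestore(&zone->lock, flags);

	/*
	 * Mark the sections online and free the pages into the buddy; they
	 * end up on the MIGRATE_ISOLATE freelists and stay unallocatable.
	 */
	online_pages_range(pfn, nr_pages);

	/* ... zonelist rebuild, pcp setup, etc. ... */

	/*
	 * Only now move the pages to the MIGRATE_MOVABLE freelists, making
	 * them available for allocation.
	 */
	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);

So the isolation window stays open until everything else is in place.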

Thanks for the very fast review, Michal!

-- 
Thanks,

David / dhildenb
