Date:   Thu, 6 Feb 2020 10:48:16 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Wei Yang <richard.weiyang@...il.com>
Cc:     Wei Yang <richardw.yang@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Segher Boessenkool <segher@...nel.crashing.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>,
        Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH v1] mm/memory_hotplug: Easier calculation to get pages to
 next section boundary

On 02/06/20 at 02:26am, Wei Yang wrote:
> On Thu, Feb 06, 2020 at 08:37:36AM +0800, Baoquan He wrote:
> >On 02/06/20 at 08:13am, Baoquan He wrote:
> >> On 02/06/20 at 07:50am, Wei Yang wrote:
> >> > On Thu, Feb 06, 2020 at 07:19:45AM +0800, Wei Yang wrote:
> >> > >On Wed, Feb 05, 2020 at 02:52:51PM +0100, David Hildenbrand wrote:
> >> > >>Let's use a calculation that's easier to understand and calculates the
> >> > >>same result. Reusing existing macros makes this look nicer.
> >> > >>
> >> > >>We always want to have the number of pages (> 0) to the next section
> >> > >>boundary, starting from the current pfn.
> >> > >>
> >> > >>Suggested-by: Segher Boessenkool <segher@...nel.crashing.org>
> >> > >>Cc: Andrew Morton <akpm@...ux-foundation.org>
> >> > >>Cc: Michal Hocko <mhocko@...nel.org>
> >> > >>Cc: Oscar Salvador <osalvador@...e.de>
> >> > >>Cc: Baoquan He <bhe@...hat.com>
> >> > >>Cc: Wei Yang <richardw.yang@...ux.intel.com>
> >> > >>Signed-off-by: David Hildenbrand <david@...hat.com>
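
To make the intent concrete, here is a standalone userspace sketch of that
"pages to the next section boundary" calculation. PAGES_PER_SECTION,
PAGE_SECTION_MASK and SECTION_ALIGN_UP are redefined locally with an
illustrative section size, so this only approximates the kernel definitions
and is not the patch itself:

#include <assert.h>
#include <stdio.h>

/* Stand-ins for the kernel macros (the section size is illustrative only). */
#define PAGES_PER_SECTION       (1UL << 15)
#define PAGE_SECTION_MASK       (~(PAGES_PER_SECTION - 1))
#define SECTION_ALIGN_UP(pfn)   (((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)

/* Number of pages (> 0) from pfn up to the next section boundary. */
static unsigned long pages_to_next_section(unsigned long pfn)
{
        return SECTION_ALIGN_UP(pfn + 1) - pfn;
}

int main(void)
{
        unsigned long pfn;

        for (pfn = 0; pfn < 4 * PAGES_PER_SECTION; pfn++) {
                unsigned long n = pages_to_next_section(pfn);

                /* Equivalent bit trick: negate (pfn | mask) as unsigned. */
                assert(n == -(pfn | PAGE_SECTION_MASK));
                assert(n > 0 && n <= PAGES_PER_SECTION);
                assert((pfn + n) % PAGES_PER_SECTION == 0);
        }
        printf("ok\n");
        return 0;
}

Both forms return PAGES_PER_SECTION for an already aligned pfn, so the result
is always > 0, which is exactly what the commit message above relies on.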
> >> > >
> >> > >Reviewed-by: Wei Yang <richardw.yang@...ux.intel.com>
> >> > >
> >> > >BTW, I got one question about hotplug size requirement.
> >> > >
> >> > >I thought the hotplug range should be section size aligned, but taking a
> >> > >look into the current code, check_hotplug_memory_range() guards the range.
> >> 
> >> A good question. The current code requires the range to be block size
> >> aligned. I remember that in some places we assume each block comprises
> >> whole sections; I can't imagine one or some of them being only half a
> >> section filled.
> >
> >I could be wrong; a half-filled block may not cause a problem.
> >
> 
> David must be angry about us flooding the mailing list :-)

I believe he won't :-) If you like, we can talk offline.

> 
> Checking the code again, there are two memory range checks:
> 
>   * check_hotplug_memory_range(), block/section aligned
>   * check_pfn_span(), subsection aligned
> 
> The second check, check_pfn_span() in __add_pages(), enables adding a
> memory range at subsection size granularity.
> 
> This means hotplug still keeps section alignment.

memremap_pages() also calls add_pages(), but it doesn't have the
check_hotplug_memory_range() invocation; check_pfn_span() is made for
it specifically.
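
As a rough illustration of how the two checks differ, here is a simplified
userspace sketch. The sizes and helpers below (SUBSECTION_PAGES,
SECTION_PAGES, BLOCK_PAGES, hotplug_range_ok(), pfn_span_ok()) are made-up
stand-ins for this discussion, not the real mm/memory_hotplug.c code:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in geometry: a block spans whole sections, a section spans whole
 * subsections (values are illustrative only). */
#define SUBSECTION_PAGES        (1UL << 10)
#define SECTION_PAGES           (1UL << 15)
#define BLOCK_PAGES             (8 * SECTION_PAGES)

#define IS_ALIGNED(x, a)        (((x) & ((a) - 1)) == 0)

/* Roughly what check_hotplug_memory_range() enforces for regular hotplug:
 * start and size must be memory-block aligned (and therefore section
 * aligned, since a block is made up of whole sections). */
static bool hotplug_range_ok(unsigned long start_pfn, unsigned long nr_pages)
{
        return nr_pages &&
               IS_ALIGNED(start_pfn, BLOCK_PAGES) &&
               IS_ALIGNED(nr_pages, BLOCK_PAGES);
}

/* Roughly what check_pfn_span() enforces in __add_pages(): only subsection
 * alignment (with SPARSEMEM_VMEMMAP; otherwise a full section, as I
 * understand it), which is what memremap_pages()/add_pages() relies on,
 * since that path never goes through check_hotplug_memory_range(). */
static bool pfn_span_ok(unsigned long pfn, unsigned long nr_pages)
{
        return IS_ALIGNED(pfn, SUBSECTION_PAGES) &&
               IS_ALIGNED(nr_pages, SUBSECTION_PAGES);
}

int main(void)
{
        /* A block-aligned range passes both checks... */
        printf("block-aligned:    hotplug=%d pfn_span=%d\n",
               hotplug_range_ok(0, BLOCK_PAGES), pfn_span_ok(0, BLOCK_PAGES));
        /* ...a subsection-sized range only passes the weaker one. */
        printf("subsection-sized: hotplug=%d pfn_span=%d\n",
               hotplug_range_ok(0, SUBSECTION_PAGES),
               pfn_span_ok(0, SUBSECTION_PAGES));
        return 0;
}

So a range that is only subsection aligned can come in via
add_pages()/memremap_pages(), while the regular hotplug path keeps the
stricter block (and hence section) alignment.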

> 
> BTW, __add_pages() shares the same logic as __remove_pages(). Why not change
> it too? Did I miss something, or do I not have the latest source code?

Good question, and I think it is needed. It's just that David is
refactoring/cleaning up the __remove_pages() code path, and this easier
calculation was pointed out by Segher during patch review.
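
For reference, the shared loop shape being discussed is roughly the
following. This is a schematic userspace sketch, not the kernel code;
do_one_chunk() is a hypothetical stand-in for the real per-chunk work, and
the macros are the same illustrative stand-ins as in the sketch further up:

#include <stdio.h>

#define PAGES_PER_SECTION       (1UL << 15)     /* stand-in value */
#define PAGE_SECTION_MASK       (~(PAGES_PER_SECTION - 1))
#define SECTION_ALIGN_UP(pfn)   (((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)
#define min(a, b)               ((a) < (b) ? (a) : (b))

/* Hypothetical stand-in for the per-chunk work done by the real code. */
static void do_one_chunk(unsigned long pfn, unsigned long nr)
{
        printf("chunk: pfn=%lu nr=%lu\n", pfn, nr);
}

/* Schematic of the loop shape shared by __add_pages() and __remove_pages():
 * walk [pfn, pfn + nr_pages) in pieces that never cross a section boundary. */
static void walk_in_section_chunks(unsigned long pfn, unsigned long nr_pages)
{
        const unsigned long end_pfn = pfn + nr_pages;
        unsigned long cur_nr_pages;

        for (; pfn < end_pfn; pfn += cur_nr_pages) {
                /* All remaining pages up to the next section boundary. */
                cur_nr_pages = min(end_pfn - pfn,
                                   SECTION_ALIGN_UP(pfn + 1) - pfn);
                do_one_chunk(pfn, cur_nr_pages);
        }
}

int main(void)
{
        /* An unaligned start: partial first and last chunks, full sections
         * in between -- the case the boundary calculation has to handle. */
        walk_in_section_chunks(PAGES_PER_SECTION / 2, 3 * PAGES_PER_SECTION);
        return 0;
}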
