Message-ID: <c5f8e6ae-9d2c-24a6-c21a-6c6c83912b35@redhat.com>
Date:   Thu, 1 Jul 2021 16:34:13 +0200
From:   David Hildenbrand <david@...hat.com>
To:     ohoono.kwon@...sung.com,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "mhocko@...e.com" <mhocko@...e.com>
Cc:     "bhe@...hat.com" <bhe@...hat.com>,
        "rppt@...ux.ibm.com" <rppt@...ux.ibm.com>,
        "ohkwon1043@...il.com" <ohkwon1043@...il.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: sparse: pass section_nr to section_mark_present

On 01.07.21 15:55, 권오훈 wrote:
> With CONFIG_SPARSEMEM_EXTREME enabled, __section_nr() which converts
> mem_section to section_nr could be costly since it iterates all
> sections to check if the given mem_section is in its range.

It actually iterates all section roots.
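
Roughly this, for reference (simplified from mm/sparse.c; the exact 
code differs between kernel versions):

static unsigned long __section_nr(struct mem_section *ms)
{
	unsigned long root_nr;
	struct mem_section *root = NULL;

	/* Scan the section roots until we find the array containing ms. */
	for (root_nr = 0; root_nr < NR_SECTION_ROOTS; root_nr++) {
		root = __nr_to_section(root_nr * SECTIONS_PER_ROOT);
		if (!root)
			continue;

		if ((ms >= root) && (ms < (root + SECTIONS_PER_ROOT)))
			break;
	}

	VM_BUG_ON(!root);

	/* Section nr = sections in preceding roots + index in this root. */
	return (root_nr * SECTIONS_PER_ROOT) + (ms - root);
}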

> 
> On the other hand, __nr_to_section which converts section_nr to
> mem_section can be done in O(1).
> 
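
Right -- with SPARSEMEM_EXTREME that direction is basically just array 
indexing, roughly (simplified; see __nr_to_section() in 
include/linux/mmzone.h, exact code differs between versions):

static inline struct mem_section *__nr_to_section(unsigned long nr)
{
	if (!mem_section || !mem_section[SECTION_NR_TO_ROOT(nr)])
		return NULL;
	/* Two dereferences, no scanning: root array, then entry in root. */
	return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
}
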
> Let's pass section_nr instead of mem_section ptr to section_mark_present
> in order to reduce needless iterations.

I'd expect this to be mostly noise, especially since we only iterate 
section roots, and on most (smallish) machines we likely only ever 
touch the lowest section roots anyway.
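
Back-of-the-envelope: with 128 MiB sections on x86-64 and 
SECTIONS_PER_ROOT somewhere around 128-256 entries (it depends on 
sizeof(struct mem_section)), a single root already covers 16-32 GiB, so 
a smallish machine never gets past the very first root.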

Can you actually observe an improvement regarding boot times?

Anyhow, looks straightforward to me, although if it's really just noise 
we might easily reintroduce similar patterns again (see how 
find_memory_block() is used). And it might allow for a nice cleanup 
(see below).

Reviewed-by: David Hildenbrand <david@...hat.com>


Can you send 1) a patch to convert find_memory_block() as well and 2) a 
patch to rip out __section_nr() completely?
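
For 1), I'd expect something along these lines to be sufficient 
(untested sketch; helper names as in drivers/base/memory.c, and callers 
would have to pass the section number instead of the mem_section 
pointer):

struct memory_block *find_memory_block(unsigned long section_nr)
{
	/* Derive the block id directly, no __section_nr() round trip. */
	return find_memory_block_by_id(memory_block_id(section_nr));
}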

> 
> Signed-off-by: Ohhoon Kwon <ohoono.kwon@...sung.com>
> ---
>   mm/sparse.c | 9 +++++----
>   1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 55c18aff3e42..4a2700e9a65f 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -186,13 +186,14 @@ void __meminit mminit_validate_memmodel_limits(unsigned long *start_pfn,
>    * those loops early.
>    */
>   unsigned long __highest_present_section_nr;
> -static void section_mark_present(struct mem_section *ms)
> +static void section_mark_present(unsigned long section_nr)
>   {
> -	unsigned long section_nr = __section_nr(ms);
> +	struct mem_section *ms;
>   
>   	if (section_nr > __highest_present_section_nr)
>   		__highest_present_section_nr = section_nr;
>   
> +	ms = __nr_to_section(section_nr);
>   	ms->section_mem_map |= SECTION_MARKED_PRESENT;
>   }
>   
> @@ -279,7 +280,7 @@ static void __init memory_present(int nid, unsigned long start, unsigned long en
>   		if (!ms->section_mem_map) {
>   			ms->section_mem_map = sparse_encode_early_nid(nid) |
>   							SECTION_IS_ONLINE;
> -			section_mark_present(ms);
> +			section_mark_present(section);
>   		}
>   	}
>   }
> @@ -933,7 +934,7 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
>   
>   	ms = __nr_to_section(section_nr);
>   	set_section_nid(section_nr, nid);
> -	section_mark_present(ms);
> +	section_mark_present(section_nr);
>   
>   	/* Align memmap to section boundary in the subsection case */
>   	if (section_nr_to_pfn(section_nr) != start_pfn)
> 


-- 
Thanks,

David / dhildenb
