Message-ID: <20200624014737.GG3346@MiWiFi-R3L-srv>
Date:   Wed, 24 Jun 2020 09:47:37 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Dan Williams <dan.j.williams@...el.com>
Cc:     Wei Yang <richard.weiyang@...ux.alibaba.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Oscar Salvador <osalvador@...e.de>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH] mm/sparse: never partially remove memmap for early section

On 06/23/20 at 05:21pm, Dan Williams wrote:
> On Tue, Jun 23, 2020 at 2:43 AM Wei Yang
> <richard.weiyang@...ux.alibaba.com> wrote:
> >
> > For early sections, we assume their memmap will never be partially
> > removed. But the current behavior breaks this.
> 
> Where do we assume that?
> 
> The primary use case for this was mapping pmem that collides with
> System-RAM in the same 128MB section. That collision will certainly be
> depopulated on-demand depending on the state of the pmem device. So,
> I'm not understanding the problem or the benefit of this change.

I was also confused when reviewing this patch; the patch log is a little
short and simple. From the current code, with SPARSEMEM_VMEMMAP enabled,
we build the memmap for the whole memory section during boot, even though
the section may be only partially populated. We just mark the subsection
map for the present pages.
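
For reference, marking the subsection map boils down to setting the bits
that cover the present pfn range, roughly like below (a simplified sketch
of subsection_mask_set() in mm/sparse.c; the wrapper name is mine, just
for illustration):

	/*
	 * Simplified sketch (helper name is hypothetical): set the bits in
	 * this section's subsection_map that cover [pfn, pfn + nr_pages).
	 * The memmap itself was already populated for the full section at
	 * boot.
	 */
	static void mark_present_subsections(struct mem_section *ms,
					     unsigned long pfn,
					     unsigned long nr_pages)
	{
		unsigned long idx = subsection_map_index(pfn);
		unsigned long end = subsection_map_index(pfn + nr_pages - 1);

		bitmap_set(ms->usage->subsection_map, idx, end - idx + 1);
	}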

Later, if a pmem device is mapped into that partially populated boot
memory section, section_activate() just fills the relevant subsection map
and returns directly, without building the memmap for it, because the
memmap for the not-yet-present part of the section was already built at
boot (see the second snippet below). I guess this is what Wei is trying
to keep consistent for a pmem device being added, or being removed and
later added again.

Please correct me if I am wrong.

To me, fixing it looks good, but a clear doc or code comment is
necessary so that people can understand the code in less time. Leaving
it as is doesn't cause harm either; I personally tend to choose the
former.

	paging_init()
	    ->sparse_init()
	        ->sparse_init_nid()
	          {
	              ...
	              for_each_present_section_nr(pnum_begin, pnum) {
	                  ...
	                  map = __populate_section_memmap(pfn, PAGES_PER_SECTION,
	                                                  nid, NULL);
	                  ...
	              }
	          }
	    ...
	    ->zone_sizes_init()
	        ->free_area_init()
	          {
	              for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
	                  subsection_map_init(start_pfn, end_pfn - start_pfn);
	              }
	          }
		
	__add_pages()
	    ->sparse_add_section()
	        ->section_activate()
	          {
	              ...
	              fill_subsection_map();
	              if (nr_pages < PAGES_PER_SECTION && early_section(ms))   <----------*********
	                  return pfn_to_page(pfn);
	              ...
	          }
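
For completeness, the removal side looks roughly like below to me
(simplified from section_deactivate() in mm/sparse.c, details trimmed,
so please double check). For a partial removal the section does not
become empty, memmap stays NULL, and even an early section falls into
depopulate_section_memmap(), i.e. part of the boot-time memmap gets
freed, which is exactly the "partially remove memmap for early section"
case the patch subject refers to:

	section_deactivate()
	  {
	      ...
	      /* clear the bits of the subsections being removed */
	      bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
	      empty = bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION);
	      ...
	      if (section_is_early && memmap)
	          free_map_bootmem(memmap);
	      else
	          depopulate_section_memmap(pfn, nr_pages, altmap);   <----------*********
	      ...
	  }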
