Date:   Fri, 7 Feb 2020 11:36:36 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Dan Williams <dan.j.williams@...el.com>
Cc:     Wei Yang <richardw.yang@...ux.intel.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Oscar Salvador <osalvador@...e.de>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH 2/3] mm/sparsemem: get physical address to page struct
 instead of virtual address to pfn

On 02/06/20 at 07:21pm, Dan Williams wrote:
> On Thu, Feb 6, 2020 at 7:10 PM Baoquan He <bhe@...hat.com> wrote:
> >
> > Hi Dan,
> >
> > On 02/06/20 at 06:19pm, Dan Williams wrote:
> > > On Thu, Feb 6, 2020 at 3:17 PM Wei Yang <richardw.yang@...ux.intel.com> wrote:
> > > > diff --git a/mm/sparse.c b/mm/sparse.c
> > > > index b5da121bdd6e..56816f653588 100644
> > > > --- a/mm/sparse.c
> > > > +++ b/mm/sparse.c
> > > > @@ -888,7 +888,7 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
> > > >         /* Align memmap to section boundary in the subsection case */
> > > >         if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) &&
> > > >                 section_nr_to_pfn(section_nr) != start_pfn)
> > > > -               memmap = pfn_to_kaddr(section_nr_to_pfn(section_nr));
> > > > +               memmap = pfn_to_page(section_nr_to_pfn(section_nr));
> > >
> > > Yes, this looks obviously correct. This might be tripping up
> > > makedumpfile. Do you see any practical effects of this bug? The kernel
> > > mostly avoids ->section_mem_map in the vmemmap case and in the
> > > !vmemmap case section_nr_to_pfn(section_nr) should always equal
> > > start_pfn.
> >
> > The practical effect is that the memmap for the first unaligned section is lost
> > when the namespace is destroyed to hot-remove it, because we encode the memmap
> > into the ->section_mem_map of the mem_section and later read it back from that
> > mem_section to free it in section_deactivate(). In fact, in the vmemmap case we
> > don't need to encode the memmap into ->section_mem_map at all.
> 
> Right, but can you actually trigger that in the SPARSEMEM_VMEMMAP=n case?

I think not; the lost memmap should only happen in the vmemmap case.
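
To make the mixup concrete: the two helpers return completely different
things. Roughly, on x86-64 (a simplified sketch, not copied verbatim from
the headers):

	#define pfn_to_kaddr(pfn)  __va((pfn) << PAGE_SHIFT) /* void *, linear-map address of the page data */
	#define pfn_to_page(pfn)   (vmemmap + (pfn))         /* struct page *, descriptor in the vmemmap    */

So assigning the pfn_to_kaddr() result to 'memmap' encodes a data address,
rather than the start of the struct page array, into ->section_mem_map, and
the real memmap for the unaligned head of the section can no longer be
recovered at teardown time.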

> 
> > By the way, sub-section support is only valid in the vmemmap case, right?
> 
> Yes.
> 
> > It seems so from the code, but I can't find any documentation that confirms it.
> 
> check_pfn_span() enforces this requirement.

Thanks for your confirmation. Do you mind if I add some documentation
sentences somewhere to make this clear?
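
For anyone reading the archive later, my understanding of that check is
roughly the following (a paraphrased sketch of check_pfn_span(), not a
verbatim quote of mm/memory_hotplug.c):

	static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
				  const char *reason)
	{
		/*
		 * Sub-section (2M) hot-add/remove is only accepted when the
		 * memmap lives in the vmemmap; otherwise the range must be
		 * aligned to a full section.
		 */
		unsigned long min_align = IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ?
					  PAGES_PER_SUBSECTION : PAGES_PER_SECTION;

		if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align)) {
			WARN(1, "Misaligned __%s_pages start: %#lx nr_pages: %#lx\n",
			     reason, pfn, nr_pages);
			return -EINVAL;
		}
		return 0;
	}

That also matches your earlier point: in the !vmemmap case the alignment
requirement forces start_pfn to be section-aligned, so
section_nr_to_pfn(section_nr) always equals start_pfn there.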
