Message-ID: <CAGM2rearPLo+jdTkQ7rmgBFWg7W3F7KKPokAOeZtKUkiVtmodQ@mail.gmail.com>
Date:   Mon, 9 Jul 2018 10:31:29 -0400
From:   Pavel Tatashin <pasha.tatashin@...cle.com>
To:     dave.hansen@...el.com
Cc:     Steven Sistare <steven.sistare@...cle.com>,
        Daniel Jordan <daniel.m.jordan@...cle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        kirill.shutemov@...ux.intel.com, Michal Hocko <mhocko@...e.com>,
        Linux Memory Management List <linux-mm@...ck.org>,
        dan.j.williams@...el.com, jack@...e.cz, jglisse@...hat.com,
        Souptick Joarder <jrdr.linux@...il.com>, bhe@...hat.com,
        gregkh@...uxfoundation.org, Vlastimil Babka <vbabka@...e.cz>,
        Wei Yang <richard.weiyang@...il.com>, rientjes@...gle.com,
        mingo@...nel.org, osalvador@...hadventures.net
Subject: Re: [PATCH v3 1/2] mm/sparse: add sparse_init_nid()

On Thu, Jul 5, 2018 at 9:39 AM Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 07/02/2018 01:29 PM, Pavel Tatashin wrote:
> > On Mon, Jul 2, 2018 at 4:00 PM Dave Hansen <dave.hansen@...el.com> wrote:
> >>> +     unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
> >>> +     unsigned long pnum, map_index = 0;
> >>> +     void *vmemmap_buf_start;
> >>> +
> >>> +     size = ALIGN(size, PMD_SIZE) * map_count;
> >>> +     vmemmap_buf_start = __earlyonly_bootmem_alloc(nid, size,
> >>> +                                                   PMD_SIZE,
> >>> +                                                   __pa(MAX_DMA_ADDRESS));
> >>
> >> Let's not repeat the mistakes of the previous version of the code.
> >> Please explain why we are aligning this.  Also,
> >> __earlyonly_bootmem_alloc()->memblock_virt_alloc_try_nid_raw() claims to
> >> be aligning the size.  Do we also need to do it here?
> >>
> >> Yes, I know the old code did this, but this is the cost of doing a
> >> rewrite. :)
> >
> > Actually, I was thinking about this particular case when I was
> > rewriting this code. Here we align size before multiplying by
> > map_count, while memblock_virt_alloc_try_nid_raw() aligns only after
> > the multiplication. So we must have both, as they do different things.
>
> That's a good point that they do different things.
>
> But, which behavior of the two different things is the one we _want_?

We definitely want the first one:
size = ALIGN(size, PMD_SIZE) * map_count;

The alignment in memblock is not strictly needed for this case, but it
comes built into the memblock allocator anyway.
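
To make the difference concrete, here is a standalone user-space sketch
(made-up sizes, ALIGN() reimplemented for the example): aligning each
per-section map to PMD_SIZE before multiplying guarantees every map
starts on a PMD boundary, presumably so each one can be covered by huge
pages, while aligning only the final total leaves maps 2..N misaligned
and also reserves a different amount of memory:

#include <stdio.h>

/* Same rounding the kernel's ALIGN() performs for a power-of-two a. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long pmd_size  = 1UL << 21;        /* 2 MiB */
	unsigned long map_size  = (1UL << 21) + 64; /* just over one PMD */
	unsigned long map_count = 4;

	/* Align first: each of the 4 maps begins PMD-aligned. */
	unsigned long before = ALIGN(map_size, pmd_size) * map_count;

	/* Align after: only the total is rounded up; maps 2..4 start
	 * 64, 128 and 192 bytes past a PMD boundary respectively. */
	unsigned long after = ALIGN(map_size * map_count, pmd_size);

	printf("align-then-multiply: %lu bytes\n", before); /* 16 MiB */
	printf("multiply-then-align: %lu bytes\n", after);  /* 10 MiB */
	return 0;
}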

>
> >>> +     if (vmemmap_buf_start) {
> >>> +             vmemmap_buf = vmemmap_buf_start;
> >>> +             vmemmap_buf_end = vmemmap_buf_start + size;
> >>> +     }
> >>
> >> It would be nice to call out that these are globals that other code
> >> picks up.
> >
> > I do not like these globals; they should only be accessed through
> > dedicated functions, something like:
> > static struct {
> >   buffer;
> >   buffer_end;
> > } vmemmap_buffer;
> > vmemmap_buffer_init() allocate buffer
> > vmemmap_buffer_alloc()  return NULL if buffer is empty
> > vmemmap_buffer_fini()
> >
> > Call vmemmap_buffer_init()  and vmemmap_buffer_fini()  from
> > sparse_populate_node() and
> > vmemmap_buffer_alloc() from vmemmap_alloc_block_buf().
> >
> > But, it should be a separate patch. If you would like I can add it to
> > this series, or submit separately.
>
> Seems like a nice cleanup, but I don't think it needs to be done here.
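
For reference, a rough user-space model of the encapsulation proposed
above (same function names as in the suggestion; malloc() stands in for
the boot-time allocator, so this is a sketch of the shape, not of the
real code):

#include <stddef.h>
#include <stdlib.h>

/* The two current globals, folded into one private struct. */
static struct {
	char *buffer;     /* next unused byte, NULL when exhausted */
	char *buffer_end; /* one past the end of the chunk */
} vmemmap_buffer;

/* Grab one large chunk up front; when this fails, the pointers stay
 * NULL and vmemmap_buffer_alloc() makes callers fall back. */
static void vmemmap_buffer_init(size_t size)
{
	vmemmap_buffer.buffer = malloc(size);
	vmemmap_buffer.buffer_end =
		vmemmap_buffer.buffer ? vmemmap_buffer.buffer + size : NULL;
}

/* Carve the next piece out of the chunk, NULL once it is empty. */
static void *vmemmap_buffer_alloc(size_t size)
{
	char *p = vmemmap_buffer.buffer;

	if (!p || (size_t)(vmemmap_buffer.buffer_end - p) < size)
		return NULL;
	vmemmap_buffer.buffer = p + size;
	return p;
}

/* The kernel version would return the unused tail to memblock; the
 * maps already handed out live forever, so nothing else is recycled. */
static void vmemmap_buffer_fini(void)
{
	vmemmap_buffer.buffer = vmemmap_buffer.buffer_end = NULL;
}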
>
> >>> + * Return map for pnum section. sparse_populate_node() has populated memory map
> >>> + * in this node, we simply do pnum to struct page conversion.
> >>> + */
> >>> +struct page * __init sparse_populate_node_section(struct page *map_base,
> >>> +                                               unsigned long map_index,
> >>> +                                               unsigned long pnum,
> >>> +                                               int nid)
> >>> +{
> >>> +     return pfn_to_page(section_nr_to_pfn(pnum));
> >>> +}
> >>
> >> What is up with all of the unused arguments to this function?
> >
> > Because the same function is called from non-vmemmap sparse code.
>
> That's probably good to call out in the patch description if not there
> already.
>
> >>> diff --git a/mm/sparse.c b/mm/sparse.c
> >>> index d18e2697a781..c18d92b8ab9b 100644
> >>> --- a/mm/sparse.c
> >>> +++ b/mm/sparse.c
> >>> @@ -456,6 +456,43 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
> >>>                      __func__);
> >>>       }
> >>>  }
> >>> +
> >>> +static unsigned long section_map_size(void)
> >>> +{
> >>> +     return PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION);
> >>> +}
> >>
> >> Seems like if we have this, we should use it wherever possible, like
> >> sparse_populate_node().
> >
> > It is used in sparse_populate_node():
> >
> > 401 struct page * __init sparse_populate_node(unsigned long pnum_begin,
> > 406         return memblock_virt_alloc_try_nid_raw(section_map_size() * map_count,
> > 407                                                PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
> > 408                                                BOOTMEM_ALLOC_ACCESSIBLE, nid);
>
> I missed the PAGE_ALIGN() until now.  That really needs a comment
> calling out how it's not really the map size but the *allocation* size
> of a single section's map.
>
> It probably also needs a name like section_memmap_allocation_size() or
> something to differentiate it from the *used* size.
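
A sketch of what that could look like, using the name Dave floated
(comment wording is mine, open to bikeshedding):

static unsigned long section_memmap_allocation_size(void)
{
	/*
	 * One section's memmap rounded up to a whole page: this is the
	 * per-section *allocation* size, which can exceed the
	 * sizeof(struct page) * PAGES_PER_SECTION bytes actually used.
	 */
	return PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION);
}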
>
> >>> +/*
> >>> + * Try to allocate all struct pages for this node, if this fails, we will
> >>> + * be allocating one section at a time in sparse_populate_node_section().
> >>> + */
> >>> +struct page * __init sparse_populate_node(unsigned long pnum_begin,
> >>> +                                       unsigned long pnum_end,
> >>> +                                       unsigned long map_count,
> >>> +                                       int nid)
> >>> +{
> >>> +     return memblock_virt_alloc_try_nid_raw(section_map_size() * map_count,
> >>> +                                            PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
> >>> +                                            BOOTMEM_ALLOC_ACCESSIBLE, nid);
> >>> +}
> >>> +
> >>> +/*
> >>> + * Return map for pnum section. map_base is not NULL if we could allocate map
> >>> + * for this node together. Otherwise we allocate one section at a time.
> >>> + * map_index is the index of pnum in this node counting only present sections.
> >>> + */
> >>> +struct page * __init sparse_populate_node_section(struct page *map_base,
> >>> +                                               unsigned long map_index,
> >>> +                                               unsigned long pnum,
> >>> +                                               int nid)
> >>> +{
> >>> +     if (map_base) {
> >>> +             unsigned long offset = section_map_size() * map_index;
> >>> +
> >>> +             return (struct page *)((char *)map_base + offset);
> >>> +     }
> >>> +     return sparse_mem_map_populate(pnum, nid, NULL);
> >>
> >> Oh, you have a vmemmap and non-vmemmap version.
> >>
> >> BTW, can't the whole map base calculation just be replaced with:
> >>
> >>         return &map_base[PAGES_PER_SECTION * map_index];
> >
> > Unfortunately, no: map_base might be allocated in chunks larger than
> > PAGES_PER_SECTION * sizeof(struct page). See the PAGE_ALIGN() in
> > section_map_size().
>
> Good point.
>
> Oh, well, can you at least get rid of the superfluous "(char *)" cast?
> That should make the whole thing a bit less onerous.

I will see what can be done; if it does not end up cleaner, I will
keep the cast.
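
To spell out in one place why the plain array indexing cannot work once
the per-section maps are padded, here is a tiny user-space model with
deliberately made-up sizes (25 pages per section, a 40-byte struct
page, each map padded from 1000 to 1024 bytes by the allocator):

#include <stdio.h>

struct page { char pad[40]; };     /* stand-in; real size varies */

#define PAGES_PER_SECTION  25UL    /* hypothetical, tiny on purpose */
#define SECTION_ALLOC_SIZE 1024UL  /* padded chunk: 1000 used + 24 pad */

int main(void)
{
	static char buf[4 * SECTION_ALLOC_SIZE]; /* 4 padded maps */
	struct page *map_base = (struct page *)buf;
	unsigned long map_index = 3;

	/* What the patch does: step in padded, allocation-sized chunks. */
	struct page *right = (struct page *)
		((char *)map_base + SECTION_ALLOC_SIZE * map_index);

	/* Plain indexing: steps in used-size chunks (25 * 40 = 1000). */
	struct page *wrong = &map_base[PAGES_PER_SECTION * map_index];

	printf("byte offset, padded steps: %td\n", (char *)right - buf);
	printf("byte offset, array index:  %td\n", (char *)wrong - buf);
	return 0;
}

This prints 3072 versus 3000. On the cast itself: dropping "(char *)"
would make the addition scale by sizeof(struct page), so the only real
alternative is arithmetic on a void pointer (a GNU extension the kernel
build allows), not a bare map_base + offset.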

Thank you,
Pavel
