Message-ID: <20190726101136.GA26721@linux>
Date:   Fri, 26 Jul 2019 12:11:40 +0200
From:   Oscar Salvador <osalvador@...e.de>
To:     David Hildenbrand <david@...hat.com>
Cc:     akpm@...ux-foundation.org, dan.j.williams@...el.com,
        pasha.tatashin@...een.com, mhocko@...e.com,
        anshuman.khandual@....com, Jonathan.Cameron@...wei.com,
        vbabka@...e.cz, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/5] mm: Introduce a new Vmemmap page-type

On Fri, Jul 26, 2019 at 11:41:46AM +0200, David Hildenbrand wrote:
> > static void __meminit __init_single_page(struct page *page, unsigned long pfn,
> >                                 unsigned long zone, int nid)
> > {
> >         if (PageVmemmap(page))
> >                 /*
> >                  * Vmemmap pages need to preserve their state.
> >                  */
> >                 goto preserve_state;
> 
> Can you be sure there are no false positives? (if I remember correctly,
> this memory might be completely uninitialized - I might be wrong)

Normal pages reaching this point will be either uninitialized or
poison-initialized.

Vmemmap pages are initialized to 0 in mhp_mark_vmemmap_pages before
they reach this point.
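
Roughly like this (just a sketch of the idea, not the actual
mhp_mark_vmemmap_pages() code; __SetPageVmemmap stands in for however
the patch ends up setting the page type):

        /*
         * Sketch only: zero the struct pages that describe the vmemmap
         * range and tag them with the Vmemmap page type, so that
         * __init_single_page() can recognize them later and preserve
         * their state.
         */
        static void mark_vmemmap_pages_sketch(struct page *start,
                                              unsigned long nr_pages)
        {
                unsigned long i;

                for (i = 0; i < nr_pages; i++) {
                        struct page *page = start + i;

                        mm_zero_struct_page(page);      /* known all-zero state */
                        __SetPageVmemmap(page);         /* mark the page type */
                }
        }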

For a false positive to occur, the page would have to be reserved and
page->type would have to hold that specific value.
If we feel unsure about this, I could add an extra check just for this
situation: initialize another field of struct page to another
specific/magic value, so that we perform three checks at this stage.
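
Something along these lines (rough sketch only; the VMEMMAP_MAGIC value
and the use of page->private are purely illustrative):

        /*
         * Illustrative sketch of the "three checks" idea: require the
         * page type, the reserved flag and an extra magic field to all
         * match before treating the page as a vmemmap page, so that
         * uninitialized or poisoned memory is very unlikely to pass.
         */
        #define VMEMMAP_MAGIC   0x564d4d50UL    /* made-up value */

        static inline bool page_is_vmemmap(struct page *page)
        {
                return PageVmemmap(page) &&             /* page->type */
                       PageReserved(page) &&            /* PG_reserved */
                       page->private == VMEMMAP_MAGIC;  /* extra magic */
        }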

> 
> > 
> >         mm_zero_struct_page(page);
> >         page_mapcount_reset(page);
> >         INIT_LIST_HEAD(&page->lru);
> > preserve_state:
> >         init_page_count(page);
> >         set_page_links(page, zone, nid, pfn);
> >         page_cpupid_reset_last(page);
> >         page_kasan_tag_reset(page);
> > 
> > So, vmemmap pages will fall within the same zone as the range we are adding,
> > that does not change.
> 
> I wonder if that is the right thing to do, hmmmm, because they are
> effectively not part of that zone (not online)
> 
> Will have a look at the details :)

I might be wrong here, but last time I checked, pages that are used for
memmaps at boot time (not hotplugged) are still linked to some zone.

Will have to double check though.

If that is not the case, it would be easier, but I am afraid it is.
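
E.g., something like this quick debug check should tell us (backing_pfn
is just a placeholder for the pfn of a page that backs a boot-time
memmap):

        /*
         * Debug-only sketch: check which zone/node the struct page of a
         * boot-time memmap backing page is linked to.
         */
        static void check_boot_memmap_zone(unsigned long backing_pfn)
        {
                struct page *page = pfn_to_page(backing_pfn);

                pr_info("boot memmap page: zone %s, node %d\n",
                        page_zone(page)->name, page_to_nid(page));
        }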


-- 
Oscar Salvador
SUSE L3
