Date:   Wed, 10 May 2017 09:24:19 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Pasha Tatashin <pasha.tatashin@...cle.com>
Cc:     linux-kernel@...r.kernel.org, sparclinux@...r.kernel.org,
        linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
        linux-s390@...r.kernel.org, borntraeger@...ibm.com,
        heiko.carstens@...ibm.com, davem@...emloft.net
Subject: Re: [v3 0/9] parallelized "struct page" zeroing

On Tue 09-05-17 14:54:50, Pasha Tatashin wrote:
[...]
> >The implementation just looks too large for what I would expect. E.g. do
> >we really need to add a zero argument to a large part of the memblock
> >API? Wouldn't it be easier to simply export memblock_virt_alloc_internal
> >(or its tiny wrapper memblock_virt_alloc_core) and move the zeroing
> >outside to its two callers? A completely untested scratch version is at
> >the end of the email.
> 
> I am OK with this change. But I do not really see a difference between:
> 
> memblock_virt_alloc_raw()
> and
> memblock_virt_alloc_core()
> 
> In both cases we use memblock_virt_alloc_internal(); the only difference
> is that in my case we tell memblock_virt_alloc_internal() to zero the pages
> if needed, and in your case the two callers do the zeroing. I like
> moving memblock_dbg() inside memblock_virt_alloc_internal().

Well, I didn't object to this particular part. I was mostly concerned
about
http://lkml.kernel.org/r/1494003796-748672-4-git-send-email-pasha.tatashin@oracle.com
and the "zero" argument for other functions. I guess we can do without
that. I _think_ that we should simply _always_ initialize the page at the
__init_single_page time rather than during the allocation. That would
require dropping __GFP_ZERO for non-memblock allocations. Or do you
think we could regress for single threaded initialization?
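
To be concrete, I mean something along these lines (a completely
untested sketch; the field initialization below is abbreviated):

static void __meminit __init_single_page(struct page *page, unsigned long pfn,
				unsigned long zone, int nid)
{
	/*
	 * Zero the whole struct page here so that neither memblock nor
	 * the page allocator (__GFP_ZERO) has to pre-zero the memmap.
	 */
	memset(page, 0, sizeof(*page));
	set_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	page_cpupid_reset_last(page);
	INIT_LIST_HEAD(&page->lru);
}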

> >Also it seems that this is not 100% correct either, as it only cares
> >about VMEMMAP while DEFERRED_STRUCT_PAGE_INIT might also be enabled for
> >SPARSEMEM. This would suggest that we would zero out pages twice,
> >right?
> 
> Thank you, I will check this combination before sending out the next patch.
> 
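
For reference, the path I have in mind is the classic SPARSEMEM one,
roughly (simplified from mm/sparse.c, untested):

struct page * __init sparse_mem_map_populate(unsigned long pnum, int nid)
{
	unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
	struct page *map;

	/*
	 * memblock_virt_alloc_try_nid() returns zeroed memory, so if
	 * __init_single_page() zeroes the struct pages as well, this
	 * memmap would be cleared twice.
	 */
	map = memblock_virt_alloc_try_nid(size, PAGE_SIZE,
					  __pa(MAX_DMA_ADDRESS),
					  BOOTMEM_ALLOC_ACCESSIBLE, nid);
	return map;
}
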
> >
> >A similar concern applies to the memory hotplug patch, which will
> >fall back to the slab/page allocator IIRC. On the other hand,
> >__init_single_page is shared with the hotplug code, so again we would
> >initialize twice.
> 
> Correct. When memory is hotplugged, we should not zero out the "struct
> page"s, both to gain the benefit of this fix and to avoid a regression
> from double zeroing them. However, I do not really have the means to test it.

It should be pretty easy to test with kvm, but I can help with testing
on real HW as well.
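
FWIW, what I had in mind for the wrapper looks roughly like this
(completely untested and simplified from mm/memblock.c; only one of
the two callers is shown):

void * __init memblock_virt_alloc_try_nid_nopanic(phys_addr_t size,
				phys_addr_t align, phys_addr_t min_addr,
				phys_addr_t max_addr, int nid)
{
	void *ptr;

	memblock_dbg("%s: %llu bytes\n", __func__, (u64)size);
	/*
	 * memblock_virt_alloc_internal() would no longer zero; the
	 * zeroing moves here, and a _raw variant simply skips it.
	 */
	ptr = memblock_virt_alloc_internal(size, align, min_addr, max_addr, nid);
	if (ptr)
		memset(ptr, 0, size);
	return ptr;
}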

Thanks!
-- 
Michal Hocko
SUSE Labs
