Message-ID: <dc49a38c-7be6-5f32-94f5-0de82ed77b53@redhat.com>
Date:   Thu, 22 Oct 2020 10:04:50 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>,
        Michal Hocko <mhocko@...e.com>
Cc:     "Guilherme G. Piccoli" <gpiccoli@...onical.com>,
        linux-mm@...ck.org, kernel-hardening@...ts.openwall.com,
        linux-hardening@...r.kernel.org,
        linux-security-module@...r.kernel.org, kernel@...ccoli.net,
        cascardo@...onical.com, Alexander Potapenko <glider@...gle.com>,
        James Morris <jamorris@...ux.microsoft.com>,
        Kees Cook <keescook@...omium.org>
Subject: Re: [PATCH] mm, hugetlb: Avoid double clearing for hugetlb pages

On 22.10.20 01:32, Mike Kravetz wrote:
> On 10/21/20 4:31 AM, Michal Hocko wrote:
>> On Wed 21-10-20 11:50:48, David Hildenbrand wrote:
>>> On 21.10.20 08:15, Michal Hocko wrote:
>>>> On Tue 20-10-20 16:19:06, Guilherme G. Piccoli wrote:
>>>>> On 20/10/2020 05:20, Michal Hocko wrote:
>>>>>>
>>>>>> Yes, zeroing is quite costly and that is to be expected when the feature
>>>>>> is enabled. Hugetlb, like other allocator users, performs its own
>>>>>> initialization rather than going through the __GFP_ZERO path. More on
>>>>>> that below.
>>>>>>
>>>>>> Could you be more specific about why this is a problem? The hugetlb pool
>>>>>> is usually preallocated once during early boot. 24s for 65GB of 2MB pages
>>>>>> is a non-trivial amount of time but it doesn't look like a major disaster
>>>>>> either. If the pool is allocated later it can take much more time due to
>>>>>> memory fragmentation.
>>>>>>
>>>>>> I definitely do not want to downplay this, but I would like to hear about
>>>>>> real-life examples of the problem.
>>>>>
>>>>> Indeed, 24s of delay (!) is not so harmful for boot time, but... 64G was
>>>>> just my simple test in a guest; the real cases are much worse! In line
>>>>> with Mike's comment, we have complaints of minute-long delays due to a
>>>>> very big pool of hugepages being allocated.
>>>>
>>>> The cost of page clearing is mostly a constant per-page overhead, so it is
>>>> quite natural to see the time scale with the number of pages. That
>>>> overhead has to be paid at some point. Sure, it is more visible when
>>>> allocating at boot time or when doing pre-allocation at runtime; the page
>>>> fault path is then correspondingly faster. The overhead just moves to a
>>>> different place. So I am not sure this is really a strong argument to
>>>> hold.
>>>
>>> We have people complaining that starting VMs backed by hugetlbfs takes
>>> too long; they would much rather have that initialization done when
>>> booting the hypervisor ...
>>
>> I can imagine. Everybody would love to have a free lunch ;) But more
>> seriously, the overhead of the initialization is unavoidable. The memory
>> has to be zeroed out by definition and somebody has to pay for that.
>> Sure, one can think of a deferred context to do it, but that just
>> spreads the cost out into the overall system overhead.
>>
>> Even if the zeroing is done at allocation time, only the first user
>> benefits from it. Any reuse of the hugetlb pool has to reinitialize
>> again.
> 
> I remember a conversation with some of our database people who thought
> it best for their model if hugetlb pages in the pool were already clear
> so that no initialization was done at fault time.  Of course, this requires
> clearing at page free time.  In their model, they thought it better to pay
> the price at allocation (pool creation) time and free time so that faults
> would be as fast as possible.
> 
> I wonder if the VMs backed by hugetlbfs pages would benefit from this
> behavior as well?

So what VMMs like QEMU already do is preallocate/prefault all hugetlbfs
memory (if told to, because it's not desired when overcommitting memory);
that's relevant for low-latency applications and similar.

https://github.com/qemu/qemu/blob/67e8498937866b49b513e3acadef985c15f44fb5/util/oslib-posix.c#L561

That's why starting a VM backed by a lot of huge pages is slow when
prefaulting: you wait until everything has been zeroed before the VM boots.
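
For illustration, the core of that prefault step boils down to touching
every huge page of the mapping once. A minimal standalone sketch (not
QEMU's actual code, which is threaded and handles many more cases; the
2MB page size and 1G length are assumptions for the example):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Assumptions for illustration: 2MB huge pages, a 1G mapping.
     * MAP_HUGETLB requires hugepages to be reserved beforehand,
     * e.g. via /proc/sys/vm/nr_hugepages. */
    const size_t page_size = 2UL * 1024 * 1024;
    const size_t length    = 1UL * 1024 * 1024 * 1024;

    char *mem = mmap(NULL, length, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Prefault: touch one byte per huge page so every page is faulted
     * in (and therefore cleared by the kernel) before the guest would
     * start running. */
    for (size_t off = 0; off < length; off += page_size)
        mem[off] = 0;

    munmap(mem, length);
    return 0;
}

Each of those writes is where the fault-time clearing cost is paid; with
init_on_alloc=1 the page was already cleared once when it entered the
hugetlb pool, which is exactly the double clearing this patch targets.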

> 
> If we track the initialized state (clean or not) of huge pages in the
> pool as suggested in Michal's skeleton of a patch, we 'could' then allow
> users to choose when hugetlb page clearing is done.

Right, in the case of QEMU, if the pool already contained zeroed pages:
a) prealloc would be faster
b) page faults would be faster

We could also do hugetlb page clearing from a background thread/process,
as Michal mentioned.
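
To make that concrete, here is a rough userspace analogy (not kernel code
and not Michal's skeleton; the pool structure, names and sizes below are
made up for illustration): track a per-page "zeroed" flag, let a
background thread do the clearing, and let the allocation path hand out
pre-cleared pages without touching them again.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POOL_PAGES 16
#define PAGE_SZ    (2UL * 1024 * 1024)      /* pretend these are 2MB pages */

static struct { void *mem; bool zeroed; } pool[POOL_PAGES];
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Background worker: clear every page not yet marked as zeroed. */
static void *zeroing_worker(void *arg)
{
    pthread_mutex_lock(&pool_lock);
    for (int i = 0; i < POOL_PAGES; i++) {
        if (pool[i].mem && !pool[i].zeroed) {
            memset(pool[i].mem, 0, PAGE_SZ);
            pool[i].zeroed = true;
        }
    }
    pthread_mutex_unlock(&pool_lock);
    return arg;
}

/* Allocation path: prefer a pre-zeroed page and skip clearing it. */
static void *pool_take_zeroed(void)
{
    void *ret = NULL;
    pthread_mutex_lock(&pool_lock);
    for (int i = 0; i < POOL_PAGES; i++) {
        if (pool[i].mem && pool[i].zeroed) {
            ret = pool[i].mem;
            pool[i].mem = NULL;
            break;
        }
    }
    pthread_mutex_unlock(&pool_lock);
    return ret;
}

int main(void)
{
    pthread_t worker;

    for (int i = 0; i < POOL_PAGES; i++)
        pool[i].mem = malloc(PAGE_SZ);      /* dirty buffers to start with */

    pthread_create(&worker, NULL, zeroing_worker, NULL);
    pthread_join(&worker, NULL);            /* real code would run it async */

    printf("got pre-zeroed page at %p\n", pool_take_zeroed());
    return 0;
}

The interesting part in a real implementation is of course the policy
(when to clear, how much CPU to spend on it); the sketch only shows where
the clearing cost moves to.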

> 
> None of that would address the original point of this thread, the global
> init_on_alloc parameter.

Yes, but I guess we're past that: whatever leaves the buddy shall be
zeroed out. That's the whole point of that security hardening mechanism.
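
For reference, the setup that runs into the double clearing discussed in
this thread looks roughly like the following (the pool size is just an
example matching the 64G test above):

  # kernel command line (or CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y at build time)
  init_on_alloc=1

  # preallocate a 64G pool of 2MB hugetlb pages, paying the clearing
  # cost up front
  echo 32768 > /proc/sys/vm/nr_hugepages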


-- 
Thanks,

David / dhildenb
