Date:   Tue, 11 Sep 2018 09:52:25 -0400
From:   Waiman Long <longman@...hat.com>
To:     Daniel Jordan <daniel.m.jordan@...cle.com>,
        John Hubbard <jhubbard@...dia.com>,
        linux-kernel@...r.kernel.org,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Aaron Lu <aaron.lu@...el.com>, alex.kogan@...cle.com,
        akpm@...ux-foundation.org, boqun.feng@...il.com, brouer@...hat.com,
        dave.dice@...cle.com, Dhaval Giani <dhaval.giani@...cle.com>,
        ktkhai@...tuozzo.com, ldufour@...ux.vnet.ibm.com,
        Pavel.Tatashin@...rosoft.com, paulmck@...ux.vnet.ibm.com,
        shady.issa@...cle.com, tariqt@...lanox.com, tglx@...utronix.de,
        tim.c.chen@...el.com, vbabka@...e.cz, yang.shi@...ux.alibaba.com,
        shy828301@...il.com, Huang Ying <ying.huang@...el.com>,
        subhra.mazumdar@...cle.com,
        Steven Sistare <steven.sistare@...cle.com>, jwadams@...gle.com,
        ashwinch@...gle.com, sqazi@...gle.com,
        Shakeel Butt <shakeelb@...gle.com>, walken@...gle.com,
        rientjes@...gle.com, junaids@...gle.com,
        Neha Agarwal <nehaagarwal@...gle.com>
Subject: Re: Plumbers 2018 - Performance and Scalability Microconference

On 09/10/2018 08:29 PM, Daniel Jordan wrote:
> On 9/10/18 1:34 PM, John Hubbard wrote:
>> On 9/10/18 10:20 AM, Davidlohr Bueso wrote:
>>> On Mon, 10 Sep 2018, Waiman Long wrote:
>>>> On 09/08/2018 12:13 AM, John Hubbard wrote:
>> [...]
>>>>> It's also interesting that there are two main huge page systems
>>>>> (THP and Hugetlbfs), and I sometimes wonder the obvious thing to
>>>>> wonder: are these sufficiently different to warrant remaining
>>>>> separate, long-term?  Yes, I realize they're quite different in
>>>>> some ways, but still, one wonders. :)
>>>>
>>>> One major difference between hugetlbfs and THP is that the former
>>>> has to be explicitly managed by the applications that use it,
>>>> whereas the latter is done automatically without the applications
>>>> being aware that THP is being used at all. Performance-wise, THP
>>>> may or may not increase application performance depending on the
>>>> exact memory access pattern, though the chance is usually higher
>>>> that an application will benefit than suffer from it.
>>>>
>>>> If an application knows what it is doing, using hugetlbfs can
>>>> boost performance more than can ever be achieved by THP. Many
>>>> large enterprise applications, like Oracle DB, use hugetlbfs and
>>>> explicitly disable THP. So unless THP can improve its performance
>>>> to a level comparable to hugetlbfs, I don't see the latter going
>>>> away.
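
For concreteness, a minimal sketch of the two interfaces, assuming a
2 MB huge page size and trimming error handling. The hugetlbfs mapping
either succeeds from the reserved pool or fails outright, while the
THP path is only a hint to the kernel:

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)        /* one 2 MB huge page */

int main(void)
{
        /* hugetlbfs: explicit request; fails if the pool is exhausted */
        void *h = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        /* THP: ordinary mapping plus a hint; the kernel may back it
         * with huge pages, and may split them later under pressure */
        void *t = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (t != MAP_FAILED)
                madvise(t, LEN, MADV_HUGEPAGE);

        if (h != MAP_FAILED)
                memset(h, 0, LEN);      /* touch to fault pages in */
        if (t != MAP_FAILED)
                memset(t, 0, LEN);
        return 0;
}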
>>>
>>> Yep, there are a few non-trivial workloads out there that flat out
>>> discourage THP, e.g. Redis, to avoid latency issues.
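
Redis's own guidance is the system-wide knob (echo never >
/sys/kernel/mm/transparent_hugepage/enabled), but a process can also
opt out just for itself; a sketch, assuming a kernel with
PR_SET_THP_DISABLE (3.15+):

#include <sys/prctl.h>

/* Disable THP for this process; the setting is inherited across
 * fork() and preserved across execve(). */
static int disable_thp(void)
{
        return prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0);
}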
>>>
>>
>> Yes, the need for guaranteed, available-now huge pages in some cases
>> is understood. That's not quite the same as saying that there have
>> to be two different subsystems, though. Nor does it even necessarily
>> imply that the pool has to be reserved in the same way as hugetlbfs
>> does it...exactly.
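
That reserved pool is what vm.nr_hugepages sizes, and /proc/meminfo
makes the guarantee visible; a small sketch that dumps the relevant
counters:

#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
                return 1;
        /* HugePages_Total/Free/Rsvd reflect the pre-reserved pool */
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "HugePages", 9))
                        fputs(line, stdout);
        fclose(f);
        return 0;
}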
>>
>> So I'm wondering if THP behavior can be made to mimic hugetlbfs
>> closely enough (perhaps via another option, in addition to "always,
>> never, madvise") that we could just use THP in all cases. But the
>> "transparent" could become a sliding scale that could go all the way
>> down to "opaque" (hugetlbfs behavior).
>
> Leaving the interface aside, the idea that we could deduplicate
> redundant parts of the hugetlbfs and THP implementations, without
> user-visible change, seems promising.

I think it is a good idea, if it can be done.

Thanks,
Longman
