Message-ID: <e8cf6227-003d-8a82-8b4d-07176b43810c@oracle.com>
Date:   Mon, 16 Oct 2017 13:32:45 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Guy Shattah <sguy@...lanox.com>,
        Christopher Lameter <cl@...ux.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Michal Nazarewicz <mina86@...a86.com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
        Laura Abbott <labbott@...hat.com>,
        Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [RFC PATCH 3/3] mm/map_contig: Add mmap(MAP_CONTIG) support

On 10/16/2017 11:07 AM, Michal Hocko wrote:
> On Mon 16-10-17 10:43:38, Mike Kravetz wrote:
>> Just to be clear, the posix standard talks about a typed memory object.
>> The suggested implementation has one create a connection to the memory
>> object to receive a fd, then use mmap as usual to get a mapping backed
>> by contiguous pages/memory.  Of course, this type of implementation is
>> not a requirement.
> 
> I am not sure that the POSIX standard for typed memory is easily
> implementable in Linux. Does any OS actually implement this API?

A quick search only reveals BlackBerry QNX and PlayBook OS.
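
For reference, the flow the standard describes looks roughly like the
sketch below.  To be clear, this is only an illustration: Linux does not
implement these calls (QNX does), and the pool name is made up.

#include <sys/mman.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

void *alloc_contig(size_t len)
{
	/* Open a connection to a typed memory object.  The name is
	 * system specific; "/memory/ram" is an invented example. */
	int fd = posix_typed_mem_open("/memory/ram", O_RDWR,
				      POSIX_TYPED_MEM_ALLOCATE_CONTIG);
	if (fd < 0)
		return NULL;

	/* mmap as usual; POSIX_TYPED_MEM_ALLOCATE_CONTIG makes the
	 * mapping be backed by physically contiguous memory. */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	close(fd);
	return p == MAP_FAILED ? NULL : p;
}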

Also somewhat related: in an earlier thread someone pointed out this
out-of-tree module used for contiguous allocations in SoC (and other?)
environments.  It even has the option of making use of CMA.
http://processors.wiki.ti.com/index.php/CMEM_Overview

>> However, this type of implementation looks quite a
>> bit like hugetlbfs today.
>> - Both require opening a special file/device, and then calling mmap on
>>   the returned fd.  You can technically use mmap(MAP_HUGETLB), but that
>>   still ends up using hugetlbfs.  BTW, there was resistance to adding the
>>   MAP_HUGETLB flag to mmap.
> 
> And I think we shouldn't really shape any API based on hugetlb.

Agree.  I only wanted to point out the similarities.
But it does make me wonder how much of a benefit hugetlb 1G pages would
provide in the RDMA performance comparison.  The table in the presentation
shows an average speedup of something like 27% (or so) for contiguous
allocations, which I assume are 2GB in size.  Certainly, using hugetlb is
not the ideal case; I am just wondering if it helps, and by how much.
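
FWIW, measuring that would only take something like the sketch below.
It assumes the admin reserved 1GB pages up front (nr_hugepages), and it
falls back to base pages in the on-demand style quoted below; len must
be 1GB aligned for the hugetlb attempt to succeed.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
#endif

/* Try a 1GB-page-backed anonymous mapping; if no 1GB pages are
 * available, fall back to base pages.  Caller checks for MAP_FAILED. */
void *map_1g_or_fallback(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
		       MAP_HUGE_1GB, -1, 0);
	if (p != MAP_FAILED)
		return p;
	return mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}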

>> - Allocation of contiguous memory is much like 'on demand' allocation of
>>   huge pages.  There are some (not many) users that use this model.  They
>>   attempt to allocate huge pages on demand, and if not available fall back
>>   to base pages.  This is how contiguous allocations would need to work.
>>   Of course, most hugetlbfs users pre-allocate pages for their use, and
>>   this 'might' be something useful for contiguous allocations as well.
> 
> But there is still admin configuration required to consume memory from
> the pool or overcommit that pool.
> 
>> I wonder if going down the path of a separate device/filesystem/etc for
>> contiguous allocations might be a better option.  It would keep the
>> implementation somewhat separate.  However, I would then be afraid that
>> we end up with another 'separate/special vm' as in the case of hugetlbfs
>> today.
> 
> That depends on who is actually going to use the contiguous memory. If
> we are talking about drivers communicating with userspace, then a
> driver-specific fd with its own mmap implementation means we do not
> need any special fs nor separate infrastructure. Well, except for a
> library function to handle the MM side of the thing.

If we embed this functionality in device-specific mmap calls, it will
closely tie the usage to those devices.  However, don't we still have to
worry about potential interactions with other parts of the mm, as you
mention below?  I guess that would be the library function and how it is
used by drivers.
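
Something like the below is what I picture for the driver side (a
sketch only; the names are invented, and whether dma_alloc_coherent()
ends up using CMA depends on the architecture/config):

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/module.h>

#define BUF_SIZE (4 << 20)		/* 4MB, arbitrary example size */

static void *buf_cpu;			/* from dma_alloc_coherent() */
static dma_addr_t buf_dma;
static struct device *contig_dev;	/* set at probe time */

/* The driver-specific mmap: hand the physically contiguous buffer,
 * allocated earlier with dma_alloc_coherent(), to userspace. */
static int contig_mmap(struct file *file, struct vm_area_struct *vma)
{
	return dma_mmap_coherent(contig_dev, vma, buf_cpu, buf_dma,
				 BUF_SIZE);
}

static const struct file_operations contig_fops = {
	.owner = THIS_MODULE,
	.mmap  = contig_mmap,
};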

-- 
Mike Kravetz

> If we really need a general purpose physically contiguous memory allocator
> then I would agree that using a MAP_ flag might be the way to go, but that
> would require very careful consideration of who is allowed to allocate,
> and how much/how large the blocks may be. I do not see a good fit for
> conveying that information to the kernel right now. Moreover, and most
> importantly, I haven't heard any sound usecase for such a functionality in
> the first place. There is some hand waving about performance, but there
> are no real numbers to back those claims AFAIK. Not to mention a serious
> consideration of the potential consequences for the whole MM.
> 
