Message-ID: <ca2d4ea4-e875-475a-6094-1ac58bc0b544@redhat.com>
Date:   Mon, 16 Aug 2021 14:20:43 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Khalid Aziz <khalid.aziz@...cle.com>,
        "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)" 
        <longpeng2@...wei.com>, Steven Sistare <steven.sistare@...cle.com>,
        Anthony Yznaga <anthony.yznaga@...cle.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "Gonglei (Arei)" <arei.gonglei@...wei.com>
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC

On 16.08.21 14:07, Matthew Wilcox wrote:
> On Mon, Aug 16, 2021 at 10:02:22AM +0200, David Hildenbrand wrote:
>>> Mappings within this address range behave as if they were shared
>>> between threads, so a write to a MAP_PRIVATE mapping will create a
>>> page which is shared between all the sharers. The first process that
>>> declares an address range mshare'd can continue to map objects in the
>>> shared area. All other processes that want mshare'd access to this
>>> memory area can do so by calling mshare(). After this call, the
>>> address range given by mshare becomes a shared range in its address
>>> space. Anonymous mappings will be shared and not COWed.
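
To make the described flow concrete, here is a hypothetical sketch in C.
mshare() is only a proposal at this point, so the prototype and the
declare/attach split below are assumptions for illustration, not a real
interface:

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Assumed prototype for the proposed call; no such syscall exists yet. */
extern int mshare(void *addr, size_t len);

#define SHARED_BASE ((void *)0x7f0000000000UL)	/* arbitrary example base */
#define SHARED_LEN  (1UL << 30)			/* 1 GB shared range */

/* First process: declare the range mshare'd, then populate it. */
static void owner(void)
{
	mshare(SHARED_BASE, SHARED_LEN);

	/* Even MAP_PRIVATE anonymous memory here would not be COWed. */
	mmap(SHARED_BASE, SHARED_LEN, PROT_READ | PROT_WRITE,
	     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}

/* Any other instance running as the same user: attach to the range. */
static void sharer(void)
{
	mshare(SHARED_BASE, SHARED_LEN);
	/* Writes anywhere in the range are now visible to all sharers. */
}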
>>
>> Did I understand correctly that you want to share actual page tables between
>> processes and consequently different MMs? That sounds like a very bad idea.
> 
> That is the entire point.  Consider a machine with 10,000 instances
> of an application running (process model, not thread model).  If each
> application wants to map 1TB of RAM using 2MB pages, that's 4MB of page
> tables per process or 40GB of RAM for the whole machine.

What speaks against using 1 GB pages then?
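
For concreteness, the back-of-the-envelope arithmetic behind both figures,
assuming x86-64 4-level paging (my own illustration, not numbers taken from
the patch set):

#include <stdio.h>

/*
 * A 4 KB PMD page maps 1 GB via 512 x 2 MB entries; a 4 KB PUD page
 * maps 512 GB via 512 x 1 GB entries. Totals use decimal rounding.
 */
int main(void)
{
	unsigned long mapped = 1UL << 40;		/* 1 TB per process */
	unsigned long nproc  = 10000;

	unsigned long pmd_kb = (mapped >> 30) * 4;	/* 1024 tables -> 4096 KB */
	unsigned long pud_kb = (mapped >> 39) * 4;	/* 2 tables -> 8 KB */

	printf("2 MB pages: %lu MB/process, ~%lu GB machine-wide\n",
	       pmd_kb >> 10, (pmd_kb >> 10) * nproc / 1000);
	printf("1 GB pages: %lu KB/process, ~%lu MB machine-wide\n",
	       pud_kb, pud_kb * nproc / 1000);
	return 0;
}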

> 
> There's a reason hugetlbfs was enhanced to allow this page table sharing.
> I'm not a fan of the implementation as it gets some locks upside down,
> so this is an attempt to generalise the concept beyond hugetlbfs.
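
For reference, the existing hugetlbfs case being generalised looks roughly
like this from userspace; the mount path and sizes below are illustrative,
and the in-kernel sharing is done by huge_pmd_share() in mm/hugetlb.c:

#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define LEN (1UL << 30)	/* PUD-sized (1 GB on x86-64), a sharing precondition */

/* Run in each of N cooperating processes. */
static int map_hugetlb_shared(void)
{
	/* Assumes a hugetlbfs mount at the default location. */
	int fd = open("/dev/hugepages/example", O_CREAT | O_RDWR, 0600);
	void *p;

	if (fd < 0 || ftruncate(fd, LEN) < 0)
		return -1;

	/*
	 * MAP_SHARED hugetlb mappings of the same file can share PMD
	 * pages, provided the virtual range is PUD-aligned as well.
	 */
	p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	return p == MAP_FAILED ? -1 : 0;
}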

Who do we account the page tables to? What are the MADV_DONTNEED semantics? 
Who cleans up the page tables? What happens during munmap? How does the 
rmap even work? How do we actually synchronize page table walkers?

See how hugetlbfs just doesn't raise these problems because we are 
sharing pages and not page tables?

TBH, I quite dislike just thinking about sharing page tables between 
processes.

> 
> Think of it like partial threading.  You get to share some parts, but not
> all, of your address space with your fellow processes.  Obviously you
> don't want to expose this to random other processes, only to other
> instances of yourself being run as the same user.

Sounds like a nice way to over-complicate MM to optimize for some 
special use cases. I know, I'm probably wrong. :)

-- 
Thanks,

David / dhildenb
