Message-ID: <4ac061cf-9b98-4831-9058-a3cb0e743dd6@oracle.com>
Date: Mon, 27 Jan 2025 15:59:30 -0800
From: Anthony Yznaga <anthony.yznaga@...cle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: willy@...radead.org, markhemm@...glemail.com, viro@...iv.linux.org.uk,
        david@...hat.com, khalid@...nel.org, jthoughton@...gle.com,
        corbet@....net, dave.hansen@...el.com, kirill@...temov.name,
        luto@...nel.org, brauner@...nel.org, arnd@...db.de,
        ebiederm@...ssion.com, catalin.marinas@....com, mingo@...hat.com,
        peterz@...radead.org, liam.howlett@...cle.com,
        lorenzo.stoakes@...cle.com, vbabka@...e.cz, jannh@...gle.com,
        hannes@...xchg.org, mhocko@...nel.org, roman.gushchin@...ux.dev,
        shakeel.butt@...ux.dev, muchun.song@...ux.dev, tglx@...utronix.de,
        cgroups@...r.kernel.org, x86@...nel.org, linux-doc@...r.kernel.org,
        linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, mhiramat@...nel.org, rostedt@...dmis.org,
        vasily.averin@...ux.dev, xhao@...ux.alibaba.com, pcc@...gle.com,
        neilb@...e.de, maz@...nel.org
Subject: Re: [PATCH 00/20] Add support for shared PTEs across processes


On 1/27/25 2:33 PM, Andrew Morton wrote:
> On Fri, 24 Jan 2025 15:54:34 -0800 Anthony Yznaga <anthony.yznaga@...cle.com> wrote:
>
>> Memory pages shared between processes require page table entries
>> (PTEs) in each process. Each of these PTEs consumes some memory,
>> and as long as the number of mappings being maintained is small
>> enough, the space consumed by page tables is not objectionable.
>> When very few memory pages are shared between processes, the
>> number of PTEs to maintain is mostly constrained by the number of
>> pages of memory on the system. As the number of shared pages and
>> the number of times pages are shared go up, the amount of memory
>> consumed by page tables starts to become significant. This issue
>> does not apply to threads: any number of threads can share the
>> same pages inside a process while sharing the same PTEs. Extending
>> this model to pages shared across processes can eliminate the
>> overhead for cross-process sharing as well.
>>
>> ...
>>
>> API
>> ===
>>
>> mshare does not introduce a new API. It instead uses existing APIs
>> to implement page table sharing. The steps to use this feature are:
>>
>> 1. Mount msharefs on /sys/fs/mshare -
>>          mount -t msharefs msharefs /sys/fs/mshare
>>
>> 2. mshare regions have alignment and size requirements. The start
>>     address of a region must be aligned to a fixed boundary, and
>>     the region size must be a multiple of that same value. The
>>     value can be obtained by reading /sys/fs/mshare/mshare_info,
>>     which returns a number in text format.
>>
>> 3. For the process creating an mshare region:
>>          a. Create a file on /sys/fs/mshare, for example -
>>                  fd = open("/sys/fs/mshare/shareme",
>>                                  O_RDWR|O_CREAT|O_EXCL, 0600);
>>
>>          b. Establish the starting address and size of the region
>>                  struct mshare_info minfo;
>>
>>                  minfo.start = TB(2);
>>                  minfo.size = BUFFER_SIZE;
>>                  ioctl(fd, MSHAREFS_SET_SIZE, &minfo)
>>
>>          c. Map some memory in the region
>>                  struct mshare_create mcreate;
>>
>>                  mcreate.addr = TB(2);
>>                  mcreate.size = BUFFER_SIZE;
>>                  mcreate.offset = 0;
>>                  mcreate.prot = PROT_READ | PROT_WRITE;
>>                  mcreate.flags = MAP_ANONYMOUS | MAP_SHARED | MAP_FIXED;
>>                  mcreate.fd = -1;
>>
>>                  ioctl(fd, MSHAREFS_CREATE_MAPPING, &mcreate)
> I'm not really understanding why step a exists.  It's basically an
> mmap() so why can't this be done within step d?

One way to think of it is that step d establishes a window onto the 
mshare region and the objects mapped within it.

Discussions on earlier iterations of mshare pushed back strongly on 
introducing special casing in the mmap path to redirect mmaps that fall 
within an mshare region into an mshare mm. Even then it gets messier 
for munmap: does an unmap of the whole range mean unmapping the window, 
or unmapping the objects within it?

>
>>          d. Map the mshare region into the process
>>                  mmap((void *)TB(2), BUFFER_SIZE, PROT_READ | PROT_WRITE,
>>                          MAP_SHARED, fd, 0);
>>
>>          e. Write to and read from the mshare'd region normally.
>>
>> 4. For processes attaching an mshare region:
>>          a. Open the file on msharefs, for example -
>>                  fd = open("/sys/fs/mshare/shareme", O_RDWR);
>>
>>          b. Get information about mshare'd region from the file:
>>                  struct mshare_info minfo;
>>
>>                  ioctl(fd, MSHAREFS_GET_SIZE, &minfo);
>>
>>          c. Map the mshare'd region into the process
>>                  mmap(minfo.start, minfo.size,
>>                          PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>>
>> 5. To delete the mshare region -
>>                  unlink("/sys/fs/mshare/shareme");
>>
> The userspace interface is the thing we should initially consider.  I'm
> having ancient memories of hugetlbfs.  Over time it was seen that
> hugetlbfs was too standalone and huge pages became more (and more (and
> more (and more))) integrated into regular MM code.  Can we expect a
> similar evolution with pte-shared memory and if so, is this the correct
> interface to be starting out with?

I don't know. This is an approach that has been refined through a number 
of discussions, but I'm certainly open to alternatives.


Anthony

