Message-ID: <a6f26afa-e4df-4519-8287-39ec3eab181d@oracle.com>
Date: Tue, 28 Jan 2025 11:40:34 -0800
From: Anthony Yznaga <anthony.yznaga@...cle.com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
willy@...radead.org, markhemm@...glemail.com, viro@...iv.linux.org.uk,
khalid@...nel.org
Cc: jthoughton@...gle.com, corbet@....net, dave.hansen@...el.com,
kirill@...temov.name, luto@...nel.org, brauner@...nel.org,
arnd@...db.de, ebiederm@...ssion.com, catalin.marinas@....com,
mingo@...hat.com, peterz@...radead.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, vbabka@...e.cz, jannh@...gle.com,
hannes@...xchg.org, mhocko@...nel.org, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev, tglx@...utronix.de,
cgroups@...r.kernel.org, x86@...nel.org, linux-doc@...r.kernel.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, mhiramat@...nel.org, rostedt@...dmis.org,
vasily.averin@...ux.dev, xhao@...ux.alibaba.com, pcc@...gle.com,
neilb@...e.de, maz@...nel.org
Subject: Re: [PATCH 00/20] Add support for shared PTEs across processes
On 1/28/25 1:36 AM, David Hildenbrand wrote:
>> API
>> ===
>>
>> mshare does not introduce a new API. It instead uses existing APIs
>> to implement page table sharing. The steps to use this feature are:
>>
>> 1. Mount msharefs on /sys/fs/mshare -
>> mount -t msharefs msharefs /sys/fs/mshare
>>
>> 2. mshare regions have alignment and size requirements. Start
>> address for the region must be aligned to an address boundary and
>> be a multiple of fixed size. This alignment and size requirement
>> can be obtained by reading the file /sys/fs/mshare/mshare_info
>> which returns a number in text format. mshare regions must be
>> aligned to this boundary and be a multiple of this size.
>>
>> 3. For the process creating an mshare region:
>> a. Create a file on /sys/fs/mshare, for example -
>> fd = open("/sys/fs/mshare/shareme",
>> O_RDWR|O_CREAT|O_EXCL, 0600);
>>
>> b. Establish the starting address and size of the region
>> struct mshare_info minfo;
>>
>> minfo.start = TB(2);
>> minfo.size = BUFFER_SIZE;
>> ioctl(fd, MSHAREFS_SET_SIZE, &minfo)
>
> We could set the size using ftruncate, just like for any other file.
> It would have to be the first thing after creating the file, and
> before we allow any other modifications.
I'll look into this.
>
> Ideally, we'd be able to get rid of the "start", use something
> reasonable (e.g., TB(2)) internally, and allow processes to mmap() it
> at different (suitably-aligned) addresses.
>
> I recall we discussed that in the past. Did you stumble over real
> blockers such that we really must mmap() the file at the same address
> in all processes? I recall some things around TLB flushing, but not
> sure. So we might have to stick to an mmap address for now.
It's not hard to implement this. It does have the effect that rmap walks
will find the internal VA rather than the actual VA for a given process.
For TLB flushing this isn't a problem for the current implementation
because all TLBs are flushed entirely. I don't know if there might be
other complications. It does mean that an offset rather than address
should be used when creating a mapping as you point out below.
>
> When using fallocate/stat to set/query the file size, we could end up
> with:
>
> /*
> * Set the address where this file can be mapped into processes. Other
> * addresses are not supported for now, and mmap will fail. Changing the
> * mmap address after mappings were already created is not supported.
> */
> MSHAREFS_SET_MMAP_ADDRESS
> MSHAREFS_GET_MMAP_ADDRESS
I'll look into this, too.
>
>
>>
>> c. Map some memory in the region
>> struct mshare_create mcreate;
>>
>> mcreate.addr = TB(2);
>
> Can we use the offset into the virtual file instead? We should be able
> to perform that translation internally fairly easily I assume.
Yes, an offset would be preferable, especially if mapping the same file
at different VAs is implemented.
>
>> mcreate.size = BUFFER_SIZE;
>> mcreate.offset = 0;
>> mcreate.prot = PROT_READ | PROT_WRITE;
>> mcreate.flags = MAP_ANONYMOUS | MAP_SHARED | MAP_FIXED;
>> mcreate.fd = -1;
>>
>> ioctl(fd, MSHAREFS_CREATE_MAPPING, &mcreate)
>
> Would examples with multiple mappings work already in this version?
>
> Did you experiment with other mappings (e.g., ordinary shared file
> mappings), and what are the blockers to make that fly?
Yes, multiple mappings work. And it's straightforward to make shared
file mappings work. I have a patch where I basically just copied code
from ksys_mmap_pgoff() into msharefs_create_mapping(). Needs some
refactoring and finessing to make it a real patch.
>
>>
>> d. Map the mshare region into the process
>> mmap((void *)TB(2), BUF_SIZE, PROT_READ | PROT_WRITE,
>> MAP_SHARED, fd, 0);
>>
>> e. Write and read to mshared region normally.
>>
>> 4. For processes attaching an mshare region:
>> a. Open the file on msharefs, for example -
>> fd = open("/sys/fs/mshare/shareme", O_RDWR);
>>
>> b. Get information about mshare'd region from the file:
>> struct mshare_info minfo;
>>
>> ioctl(fd, MSHAREFS_GET_SIZE, &minfo);
>>
>> c. Map the mshare'd region into the process
>> mmap(minfo.start, minfo.size,
>> PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>>
>> 5. To delete the mshare region -
>> unlink("/sys/fs/mshare/shareme");
>>
>
> I recall discussions around cgroup accounting, OOM handling etc. I
> thought the conclusion was that we need an "mshare process" where the
> memory is accounted to, and once that process is killed (e.g., OOM),
> it must tear down all mappings/pages etc.
>
> How does your design currently look like in that regard? E.g., how can
> OOM handling make progress, how is cgroup accounting handled?
There was some discussion on this at last year's LSF/MM, but it seemed
more like ideas than a conclusion on an approach. In any case,
tearing down everything if an owning process is killed does not work for
our internal use cases, and I think that was mentioned somewhere in
discussions. Plus it seems to me that yanking the mappings away from the
unsuspecting non-owner processes could be quite catastrophic. Shouldn't
an mshare virtual file be treated like any other in-memory file? Or do
such files get zapped somehow by OOM? Not saying we shouldn't do
anything for OOM, but I'm not sure what the answer is.
Cgroups are tricky. At the mm alignment meeting last year a use case was
brought up where it would be desirable to have all pagetable pages
charged to one memcg rather than have them charged on a first touch
basis. It was proposed that perhaps an mshare file could be associated
with a cgroup at the time it is created. I have figured out a way to do this
but I'm not versed enough in cgroups to know if the approach is viable.
The last three patches provide this functionality, along with ensuring
that a newly faulted-in page is charged to the current process. If
everything, pagetable pages and faulted pages alike, should be charged
to the same cgroup, then more work is definitely required.
Hopefully this provides enough context to move towards a complete solution.
Anthony