Message-ID: <839e82f7-2c54-d1ef-8371-0a332a4cb447@redhat.com>
Date:   Wed, 4 Aug 2021 20:49:14 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Peter Xu <peterx@...hat.com>,
        Tiberiu A Georgescu <tiberiu.georgescu@...anix.com>
Cc:     akpm@...ux-foundation.org, viro@...iv.linux.org.uk,
        christian.brauner@...ntu.com, ebiederm@...ssion.com,
        adobriyan@...il.com, songmuchun@...edance.com, axboe@...nel.dk,
        vincenzo.frascino@....com, catalin.marinas@....com,
        peterz@...radead.org, chinwen.chang@...iatek.com,
        linmiaohe@...wei.com, jannh@...gle.com, apopple@...dia.com,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, ivan.teterevkov@...anix.com,
        florian.schmidt@...anix.com, carl.waldspurger@...anix.com,
        jonathan.davies@...anix.com
Subject: Re: [PATCH 0/1] pagemap: swap location for shared pages

On 04.08.21 20:33, Peter Xu wrote:
> Hi, Tiberiu,
> 
> On Fri, Jul 30, 2021 at 04:08:25PM +0000, Tiberiu A Georgescu wrote:
>> This patch follows up on a previous RFC:
>> 20210714152426.216217-1-tiberiu.georgescu@...anix.com
>>
>> When a page allocated using the MAP_SHARED flag is swapped out, its pagemap
>> entry is cleared. In many cases, there is no difference between swapped-out
>> shared pages and newly allocated, non-dirty pages in the pagemap interface.
>>
>> Example pagemap-test code (Tested on Kernel Version 5.14-rc3):
>>      #define NPAGES (256)
>>      /* map 1MiB shared memory */
>>      size_t pagesize = getpagesize();
>>      char *p = mmap(NULL, pagesize * NPAGES, PROT_READ | PROT_WRITE,
>>      		   MAP_ANONYMOUS | MAP_SHARED, -1, 0);
>>      /* Dirty new pages. */
>>      for (i = 0; i < NPAGES; i++)
>>      	p[i * pagesize] = i;
>>
>> Run the above program in a small cgroup, which causes swapping:
>>      /* Initialise cgroup & run a program */
>>      $ echo 512K > foo/memory.limit_in_bytes
>>      $ echo 60 > foo/memory.swappiness
>>      $ cgexec -g memory:foo ./pagemap-test
>>
>> Check the pagemap report. Example of the current expected output:
>>      $ dd if=/proc/$PID/pagemap ibs=8 skip=$(($VADDR / $PAGESIZE)) count=$COUNT | hexdump -C
>>      00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>>      *
>>      00000710  e1 6b 06 00 00 00 00 a1  9e eb 06 00 00 00 00 a1  |.k..............|
>>      00000720  6b ee 06 00 00 00 00 a1  a5 a4 05 00 00 00 00 a1  |k...............|
>>      00000730  5c bf 06 00 00 00 00 a1  90 b6 06 00 00 00 00 a1  |\...............|
>>
>> The first pagemap entries are reported as zeroes, as if the pages had never
>> been allocated, when they have in fact been swapped out.
>>
>> This patch addresses the behaviour and modifies pte_to_pagemap_entry() to
>> make use of the XArray associated with the virtual memory area struct
>> passed as an argument. The XArray contains the location of virtual pages in
>> the page cache, swap cache or on disk. If they are in either of the caches,
>> then the original implementation still works. If not, then the missing
>> information will be retrieved from the XArray.
>>
>> Performance
>> ============
>> I measured the performance of the patch on a single socket Xeon E5-2620
>> machine, with 128GiB of RAM and 128GiB of swap storage. These were the
>> steps taken:
>>
>>    1. Run example pagemap-test code on a cgroup
>>      a. Set up cgroup with limit_in_bytes=4GiB and swappiness=60;
>>      b. allocate 16GiB (about 4 million pages);
>>      c. dirty 0%, 50%, or 100% of the pages;
>>      d. do this for both private and shared memory.
>>    2. Run `dd if=<PAGEMAP> ibs=8 skip=$(($VADDR / $PAGESIZE)) count=4194304`
>>       for each possible configuration above
>>      a.  3 times for warm up;
>>      b. 10 times to measure performance.
>>         Use `time` or another performance measuring tool.
>>
>> Results (averaged over 10 iterations):
>>                 +--------+------------+------------+
>>                 | dirty% |  pre patch | post patch |
>>                 +--------+------------+------------+
>>   private|anon  |     0% |      8.15s |      8.40s |
>>                 |    50% |     11.83s |     12.19s |
>>                 |   100% |     12.37s |     12.20s |
>>                 +--------+------------+------------+
>>    shared|anon  |     0% |      8.17s |      8.18s |
>>                 |    50% | (*) 10.43s |     37.43s |
>>                 |   100% | (*) 10.20s |     38.59s |
>>                 +--------+------------+------------+
>>
>> (*): reminder that pre-patch produces incorrect pagemap entries for swapped
>>       out pages.
>>
>> From run to run the above results are stable (mostly <1% stderr).
>>
>> The amount of time it takes for a full read of the pagemap depends on the
>> granularity used by dd to read the pagemap file. Even though the access is
>> sequential, the script only reads 8 bytes at a time, running pagemap_read()
>> COUNT times (one time for each page in a 16GiB area).
>>
>> To reduce overhead, we can use batching for large amounts of sequential
>> access. We can make dd read multiple page entries at a time,
>> allowing the kernel to make optimisations and yield more throughput.
>>
>> Performance in real time (seconds) of
>> `dd if=<PAGEMAP> ibs=8*$BATCH skip=$(($VADDR / $PAGESIZE / $BATCH))
>> count=$((4194304 / $BATCH))`:
>> +---------------------------------+ +---------------------------------+
>> |     Shared, Anon, 50% dirty     | |     Shared, Anon, 100% dirty    |
>> +-------+------------+------------+ +-------+------------+------------+
>> | Batch |  Pre-patch | Post-patch | | Batch |  Pre-patch | Post-patch |
>> +-------+------------+------------+ +-------+------------+------------+
>> |     1 | (*) 10.43s |     37.43s | |     1 | (*) 10.20s |     38.59s |
>> |     2 | (*)  5.25s |     18.77s | |     2 | (*)  5.15s |     19.37s |
>> |     4 | (*)  2.63s |      9.42s | |     4 | (*)  2.63s |      9.74s |
>> |     8 | (*)  1.38s |      4.80s | |     8 | (*)  1.35s |      4.94s |
>> |    16 | (*)  0.73s |      2.46s | |    16 | (*)  0.72s |      2.54s |
>> |    32 | (*)  0.40s |      1.31s | |    32 | (*)  0.41s |      1.34s |
>> |    64 | (*)  0.25s |      0.72s | |    64 | (*)  0.24s |      0.74s |
>> |   128 | (*)  0.16s |      0.43s | |   128 | (*)  0.16s |      0.44s |
>> |   256 | (*)  0.12s |      0.28s | |   256 | (*)  0.12s |      0.29s |
>> |   512 | (*)  0.10s |      0.21s | |   512 | (*)  0.10s |      0.22s |
>> |  1024 | (*)  0.10s |      0.20s | |  1024 | (*)  0.10s |      0.21s |
>> +-------+------------+------------+ +-------+------------+------------+
>>
>> To conclude, to make the most of the underlying mechanisms of pagemap and
>> the XArray, one should use batching to achieve better performance.
> 
> So what I'm still a bit worried about is whether it will regress some
> existing users.  Note that existing users may read pagemap in their own way;
> we can't expect all userspace to change its behavior due to a kernel change.

Then let's provide a way to enable the new behavior for a process if we 
don't find another way to extract that information. I would actually 
prefer finding a different interface for that, because with such things 
the "pagemap" no longer expresses which pages are currently mapped. 
Shared memory is weird.

> 
> Meanwhile, from the numbers, it seems to show a 4x slowdown due to looking up
> the page cache no matter the size of ibs=.  IOW I don't see a good way to avoid
> that overhead, so no way to have the userspace run as fast as before.
> 
> Also note that it's not only affecting the PM_SWAP users; it potentially
> affects all /proc/pagemap users, as long as there is file-backed memory in
> the region being read, which is very likely to happen.
> 
> That's why I think if we want to persist it, we should still consider starting
> from the pte marker idea.

TBH, I tend to really dislike the PTE marker idea. IMHO, we shouldn't 
store any state information regarding shared memory in per-process page 
tables: it just doesn't make much sense.

And this is similar to the SOFTDIRTY or UFFD_WP bits: this information 
actually belongs to the shared file ("did *someone* write to this page", 
"is *someone* interested in changes to that page", "is there 
something"). I know, that screams for a completely different design with 
respect to these features.

I guess we are starting to learn the hard way that shared memory is just 
different and requires different interfaces than the per-process page 
table interfaces we have (pagemap, userfaultfd).

I haven't had time to explore alternatives yet, but I wonder whether 
tracking such state per actual fd/memfd, and not via process page 
tables, is the right and clean approach. There are certainly many issues 
to solve, but conceptually it feels more natural to me to keep these 
shared memory features out of process page tables.

> 
> I do plan to move the pte marker idea forward unless that'll be NACKed upstream
> for some other reason, because that seems to be the only way for uffd-wp to
> support file based memories; no matter with a new swp type or with special swap
> pte.  I am even thinking about whether I should propose that with PM_SWAP first
> because that seems to be a simpler scenario than uffd-wp (which will get the
> rest uffd-wp patches involved then), then we can have a shared infrastructure.
> But haven't thought deeper than that.
> 
> Thanks,
> 


-- 
Thanks,

David / dhildenb
