lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAHbLzkrkbibtwM4=Y4Ek-og2ERZNnKHTeVO==-JudyO=0R0JLw@mail.gmail.com>
Date:   Tue, 14 Feb 2023 10:25:15 -0800
From:   Yang Shi <shy828301@...il.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Chih-En Lin <shiyn.lin@...il.com>,
        Pasha Tatashin <pasha.tatashin@...een.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Qi Zheng <zhengqi.arch@...edance.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Christophe Leroy <christophe.leroy@...roup.eu>,
        John Hubbard <jhubbard@...dia.com>,
        Nadav Amit <namit@...are.com>, Barry Song <baohua@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Peter Xu <peterx@...hat.com>, Vlastimil Babka <vbabka@...e.cz>,
        "Zach O'Keefe" <zokeefe@...gle.com>,
        Yun Zhou <yun.zhou@...driver.com>,
        Hugh Dickins <hughd@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Yu Zhao <yuzhao@...gle.com>, Juergen Gross <jgross@...e.com>,
        Tong Tiangen <tongtiangen@...wei.com>,
        Liu Shixin <liushixin2@...wei.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Li kunyu <kunyu@...china.com>,
        Minchan Kim <minchan@...nel.org>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Gautam Menghani <gautammenghani201@...il.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Mark Brown <broonie@...nel.org>, Will Deacon <will@...nel.org>,
        Vincenzo Frascino <Vincenzo.Frascino@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Andy Lutomirski <luto@...nel.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Andrei Vagin <avagin@...il.com>,
        Barret Rhoden <brho@...gle.com>,
        Michal Hocko <mhocko@...e.com>,
        "Jason A. Donenfeld" <Jason@...c4.com>,
        Alexey Gladkov <legion@...nel.org>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-trace-kernel@...r.kernel.org,
        linux-perf-users@...r.kernel.org,
        Dinglan Peng <peng301@...due.edu>,
        Pedro Fonseca <pfonseca@...due.edu>,
        Jim Huang <jserv@...s.ncku.edu.tw>,
        Huichun Feng <foxhoundsk.tw@...il.com>
Subject: Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table

On Tue, Feb 14, 2023 at 9:39 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 14.02.23 18:23, Yang Shi wrote:
> > On Tue, Feb 14, 2023 at 1:58 AM David Hildenbrand <david@...hat.com> wrote:
> >>
> >> On 10.02.23 18:20, Chih-En Lin wrote:
> >>> On Fri, Feb 10, 2023 at 11:21:16AM -0500, Pasha Tatashin wrote:
> >>>>>>> Currently, copy-on-write is only used for the mapped memory; the child
> >>>>>>> process still needs to copy the entire page table from the parent
> >>>>>>> process during forking. The parent process might take a lot of time and
> >>>>>>> memory to copy the page table when the parent has a big page table
> >>>>>>> allocated. For example, the memory usage of a process after forking with
> >>>>>>> 1 GB mapped memory is as follows:
> >>>>>>
> >>>>>> For some reason, I was not able to reproduce performance improvements
> >>>>>> with a simple fork() performance measurement program. The results that
> >>>>>> I saw are the following:
> >>>>>>
> >>>>>> Base:
> >>>>>> Fork latency per gigabyte: 0.004416 seconds
> >>>>>> Fork latency per gigabyte: 0.004382 seconds
> >>>>>> Fork latency per gigabyte: 0.004442 seconds
> >>>>>> COW kernel:
> >>>>>> Fork latency per gigabyte: 0.004524 seconds
> >>>>>> Fork latency per gigabyte: 0.004764 seconds
> >>>>>> Fork latency per gigabyte: 0.004547 seconds
> >>>>>>
> >>>>>> AMD EPYC 7B12 64-Core Processor
> >>>>>> Base:
> >>>>>> Fork latency per gigabyte: 0.003923 seconds
> >>>>>> Fork latency per gigabyte: 0.003909 seconds
> >>>>>> Fork latency per gigabyte: 0.003955 seconds
> >>>>>> COW kernel:
> >>>>>> Fork latency per gigabyte: 0.004221 seconds
> >>>>>> Fork latency per gigabyte: 0.003882 seconds
> >>>>>> Fork latency per gigabyte: 0.003854 seconds
> >>>>>>
> >>>>>> Given, that page table for child is not copied, I was expecting the
> >>>>>> performance to be better with COW kernel, and also not to depend on
> >>>>>> the size of the parent.
> >>>>>
> >>>>> Yes, the child won't duplicate the page table, but fork will still
> >>>>> traverse all the page table entries to do the accounting.
> >>>>> And, since this patch expends the COW to the PTE table level, it's not
> >>>>> the mapped page (page table entry) grained anymore, so we have to
> >>>>> guarantee that all the mapped page is available to do COW mapping in
> >>>>> the such page table.
> >>>>> This kind of checking also costs some time.
> >>>>> As a result, since the accounting and the checking, the COW PTE fork
> >>>>> still depends on the size of the parent so the improvement might not
> >>>>> be significant.
> >>>>
> >>>> The current version of the series does not provide any performance
> >>>> improvements for fork(). I would recommend removing claims from the
> >>>> cover letter about better fork() performance, as this may be
> >>>> misleading for those looking for a way to speed up forking. In my
> >>>
> >>>   From v3 to v4, I changed the implementation of the COW fork() part to do
> >>> the accounting and checking. At the time, I also removed most of the
> >>> descriptions about the better fork() performance. Maybe it's not enough
> >>> and still has some misleading. I will fix this in the next version.
> >>> Thanks.
> >>>
> >>>> case, I was looking to speed up Redis OSS, which relies on fork() to
> >>>> create consistent snapshots for driving replicates/backups. The O(N)
> >>>> per-page operation causes fork() to be slow, so I was hoping that this
> >>>> series, which does not duplicate the VA during fork(), would make the
> >>>> operation much quicker.
> >>>
> >>> Indeed, at first, I tried to avoid the O(N) per-page operation by
> >>> deferring the accounting and the swap stuff to the page fault. But,
> >>> as I mentioned, it's not suitable for the mainline.
> >>>
> >>> Honestly, for improving the fork(), I have an idea to skip the per-page
> >>> operation without breaking the logic. However, this will introduce the
> >>> complicated mechanism and may has the overhead for other features. It
> >>> might not be worth it. It's hard to strike a balance between the
> >>> over-complicated mechanism with (probably) better performance and data
> >>> consistency with the page status. So, I would focus on the safety and
> >>> stable approach at first.
> >>
> >> Yes, it is most probably possible, but complexity, robustness and
> >> maintainability have to be considered as well.
> >>
> >> Thanks for implementing this approach (only deduplication without other
> >> optimizations) and evaluating it accordingly. It's certainly "cleaner",
> >> such that we only have to mess with unsharing and not with other
> >> accounting/pinning/mapcount thingies. But it also highlights how
> >> intrusive even this basic deduplication approach already is -- and that
> >> most benefits of the original approach requires even more complexity on top.
> >>
> >> I am not quite sure if the benefit is worth the price (I am not to
> >> decide and I would like to hear other options).
> >>
> >> My quick thoughts after skimming over the core parts of this series
> >>
> >> (1) forgetting to break COW on a PTE in some pgtable walker feels quite
> >>       likely (meaning that it might be fairly error-prone) and forgetting
> >>       to break COW on a PTE table, accidentally modifying the shared
> >>       table.
> >> (2) break_cow_pte() can fail, which means that we can fail some
> >>       operations (possibly silently halfway through) now. For example,
> >>       looking at your change_pte_range() change, I suspect it's wrong.
> >> (3) handle_cow_pte_fault() looks quite complicated and needs quite some
> >>       double-checking: we temporarily clear the PMD, to reset it
> >>       afterwards. I am not sure if that is correct. For example, what
> >>       stops another page fault stumbling over that pmd_none() and
> >>       allocating an empty page table? Maybe there are some locking details
> >>       missing or they are very subtle such that we better document them. I
> >>      recall that THP played quite some tricks to make such cases work ...
> >>
> >>>
> >>>>> Actually, at the RFC v1 and v2, we proposed the version of skipping
> >>>>> those works, and we got a significant improvement. You can see the
> >>>>> number from RFC v2 cover letter [1]:
> >>>>> "In short, with 512 MB mapped memory, COW PTE decreases latency by 93%
> >>>>> for normal fork"
> >>>>
> >>>> I suspect the 93% improvement (when the mapcount was not updated) was
> >>>> only for VAs with 4K pages. With 2M mappings this series did not
> >>>> provide any benefit is this correct?
> >>>
> >>> Yes. In this case, the COW PTE performance is similar to the normal
> >>> fork().
> >>
> >>
> >> The thing with THP is, that during fork(), we always allocate a backup
> >> PTE table, to be able to PTE-map the THP whenever we have to. Otherwise
> >> we'd have to eventually fail some operations we don't want to fail --
> >> similar to the case where break_cow_pte() could fail now due to -ENOMEM
> >> although we really don't want to fail (e.g., change_pte_range() ).
> >>
> >> I always considered that wasteful, because in many scenarios, we'll
> >> never ever split a THP and possibly waste memory.
> >
> > When you say "split THP", do you mean split the compound page to base
> > pages? IIUC the backup PTE table page is used to guarantee the PMD
> > split (just convert pmd mapped THP to PTE-mapped but not split the
> > compound page) succeed. You may already notice there is no return
> > value for PMD split.
>
> Yes, as I raised in my other reply.
>
> >
> > The PMD split may be called quite often, for example, MADV_DONTNEED,
> > mbind, mlock, and even in memory reclamation context  (THP swap).
>
> Yes, but with a single MADV_DONTNEED call you cannot PTE-map more than 2
> THP (all other overlapped THP will get zapped). Same with most other
> operations.

My point is there may be multiple processes calling PMD split on
different THPs at the same time.

>
> There are corner cases, though. I recall that s390x/kvm wants to break
> all THP in a given VMA range. But that operation could safely fail if we
> can't do that.

I'm supposed that is THP split (split the compound page), it may fail.

>
> Certainly needs some investigation, that's most probably why it hasn't
> been done yet.
>
> >
> >>
> >> Optimizing that for THP (e.g., don't always allocate backup THP, have
> >> some global allocation backup pool for splits + refill when
> >> close-to-empty) might provide similar fork() improvements, both in speed
> >> and memory consumption when it comes to anonymous memory.
> >
> > It might work. But may be much more complicated than what you thought
> > when handling multiple parallel PMD splits.
>
>
> I consider the whole PTE-table linking to THPs complicated enough to
> eventually replace it by something differently complicated that wastes
> less memory ;)

Maybe...

>
> --
> Thanks,
>
> David / dhildenb
>

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ