Date:   Fri, 10 Feb 2023 11:21:16 -0500
From:   Pasha Tatashin <pasha.tatashin@...een.com>
To:     Chih-En Lin <shiyn.lin@...il.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Qi Zheng <zhengqi.arch@...edance.com>,
        David Hildenbrand <david@...hat.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Christophe Leroy <christophe.leroy@...roup.eu>,
        John Hubbard <jhubbard@...dia.com>,
        Nadav Amit <namit@...are.com>, Barry Song <baohua@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Yang Shi <shy828301@...il.com>, Peter Xu <peterx@...hat.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        "Zach O'Keefe" <zokeefe@...gle.com>,
        Yun Zhou <yun.zhou@...driver.com>,
        Hugh Dickins <hughd@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Yu Zhao <yuzhao@...gle.com>, Juergen Gross <jgross@...e.com>,
        Tong Tiangen <tongtiangen@...wei.com>,
        Liu Shixin <liushixin2@...wei.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Li kunyu <kunyu@...china.com>,
        Minchan Kim <minchan@...nel.org>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Gautam Menghani <gautammenghani201@...il.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Mark Brown <broonie@...nel.org>, Will Deacon <will@...nel.org>,
        Vincenzo Frascino <Vincenzo.Frascino@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Andy Lutomirski <luto@...nel.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Andrei Vagin <avagin@...il.com>,
        Barret Rhoden <brho@...gle.com>,
        Michal Hocko <mhocko@...e.com>,
        "Jason A. Donenfeld" <Jason@...c4.com>,
        Alexey Gladkov <legion@...nel.org>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-trace-kernel@...r.kernel.org,
        linux-perf-users@...r.kernel.org,
        Dinglan Peng <peng301@...due.edu>,
        Pedro Fonseca <pfonseca@...due.edu>,
        Jim Huang <jserv@...s.ncku.edu.tw>,
        Huichun Feng <foxhoundsk.tw@...il.com>
Subject: Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table

> > > Currently, copy-on-write is only used for the mapped memory; the child
> > > process still needs to copy the entire page table from the parent
> > > process during forking. The parent process might take a lot of time and
> > > memory to copy the page table when the parent has a big page table
> > > allocated. For example, the memory usage of a process after forking with
> > > 1 GB mapped memory is as follows:
> >
> > For some reason, I was not able to reproduce performance improvements
> > with a simple fork() performance measurement program. The results that
> > I saw are the following:
> >
> > Base:
> > Fork latency per gigabyte: 0.004416 seconds
> > Fork latency per gigabyte: 0.004382 seconds
> > Fork latency per gigabyte: 0.004442 seconds
> > COW kernel:
> > Fork latency per gigabyte: 0.004524 seconds
> > Fork latency per gigabyte: 0.004764 seconds
> > Fork latency per gigabyte: 0.004547 seconds
> >
> > AMD EPYC 7B12 64-Core Processor
> > Base:
> > Fork latency per gigabyte: 0.003923 seconds
> > Fork latency per gigabyte: 0.003909 seconds
> > Fork latency per gigabyte: 0.003955 seconds
> > COW kernel:
> > Fork latency per gigabyte: 0.004221 seconds
> > Fork latency per gigabyte: 0.003882 seconds
> > Fork latency per gigabyte: 0.003854 seconds
> >
> > Given that the page table for the child is not copied, I was expecting
> > the performance to be better with the COW kernel, and also not to depend
> > on the size of the parent.
>
> Yes, the child won't duplicate the page table, but fork will still
> traverse all the page table entries to do the accounting.
> And, since this patch extends COW to the PTE table level, the
> granularity is no longer the individual mapped page (page table
> entry), so we have to guarantee that every mapped page in such a
> page table is available for COW mapping.
> This kind of checking also costs some time.
> As a result, because of the accounting and the checking, the COW PTE
> fork still depends on the size of the parent, so the improvement
> might not be significant.
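
To make sure I follow the argument: even though the PTE table itself is
shared, the fork path still does O(entries) work per table. A rough
userspace model of that cost is below (purely illustrative; the names
and the "no COW" bit are invented for the example, this is not code
from the series):

/*
 * Illustrative model of the remaining per-entry work in a COW-PTE
 * fork, as I understand the explanation above. Not code from the
 * series; the "PTE" is just a uint64_t with a present bit.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTRS_PER_PTE	512
#define PTE_PRESENT	(1ULL << 0)
#define PTE_NO_COW	(1ULL << 1)	/* stand-in for "cannot be COW-shared" */

static unsigned long accounted;

/* Per-entry accounting still runs for every mapped entry. */
static void account_entry(uint64_t pte)
{
	(void)pte;
	accounted++;
}

/* The whole PTE table can only be shared if every mapped entry qualifies. */
static bool table_can_cow_share(const uint64_t *table)
{
	bool shareable = true;
	int i;

	for (i = 0; i < PTRS_PER_PTE; i++) {
		if (!(table[i] & PTE_PRESENT))
			continue;
		account_entry(table[i]);
		if (table[i] & PTE_NO_COW)
			shareable = false;
	}
	return shareable;
}

int main(void)
{
	uint64_t table[PTRS_PER_PTE] = { [0] = PTE_PRESENT, [5] = PTE_PRESENT };

	printf("shareable=%d, mapped entries accounted=%lu\n",
	       table_can_cow_share(table), accounted);
	return 0;
}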

The current version of the series does not provide any performance
improvements for fork(). I would recommend removing the claims about
better fork() performance from the cover letter, as they may be
misleading for those looking for a way to speed up forking. In my
case, I was looking to speed up Redis OSS, which relies on fork() to
create consistent snapshots for driving replicas/backups. The O(N)
per-page operation causes fork() to be slow, so I was hoping that this
series, which does not duplicate the page tables during fork(), would
make the operation much quicker.
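
For reference, the measurement I quoted above boils down to something
like this (a minimal sketch, not the exact program I used; the 1 GB
size and the child exiting immediately are arbitrary choices):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define GIGABYTE (1UL << 30)

int main(void)
{
	struct timespec start, end;
	pid_t pid;

	/* Populate 1 GB of anonymous memory so the page table is fully built. */
	char *buf = mmap(NULL, GIGABYTE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 1, GIGABYTE);

	/* Time only the fork() itself, in the parent. */
	clock_gettime(CLOCK_MONOTONIC, &start);
	pid = fork();
	if (pid == 0)
		_exit(0);
	clock_gettime(CLOCK_MONOTONIC, &end);

	waitpid(pid, NULL, 0);
	printf("Fork latency per gigabyte: %f seconds\n",
	       (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9);
	return 0;
}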

> Actually, in RFC v1 and v2, we proposed a version that skipped that
> work, and we got a significant improvement. You can see the numbers
> in the RFC v2 cover letter [1]:
> "In short, with 512 MB mapped memory, COW PTE decreases latency by 93%
> for normal fork"

I suspect the 93% improvement (when the mapcount was not updated) was
only for VAs with 4K pages. With 2M mappings this series did not
provide any benefit, is this correct?

>
> However, it might break the existing logic of the refcount/mapcount of
> the page and destabilize the system.

This makes sense.

> [1] https://lore.kernel.org/linux-mm/20220927162957.270460-1-shiyn.lin@gmail.com/T/#me2340d963c2758a2561c39cb3baf42c478dfe548
> [2] https://lore.kernel.org/linux-mm/20220927162957.270460-1-shiyn.lin@gmail.com/T/#mbc33221f00c7cf3d71839b45fc23862a5dac3014
