Message-ID: <Yd1et2VFOX4xxgly@dhcp22.suse.cz>
Date:   Tue, 11 Jan 2022 11:40:55 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Jesse Barnes <jsbarnes@...gle.com>
Cc:     Yu Zhao <yuzhao@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Hillf Danton <hdanton@...a.com>, Jens Axboe <axboe@...nel.dk>,
        Johannes Weiner <hannes@...xchg.org>,
        Jonathan Corbet <corbet@....net>,
        Matthew Wilcox <willy@...radead.org>,
        Mel Gorman <mgorman@...e.de>,
        Michael Larabel <Michael@...haellarabel.com>,
        Rik van Riel <riel@...riel.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Will Deacon <will@...nel.org>,
        Ying Huang <ying.huang@...el.com>,
        linux-arm-kernel@...ts.infradead.org, linux-doc@...r.kernel.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-mm@...ck.org, page-reclaim@...gle.com,
        X86 ML <x86@...nel.org>
Subject: Re: [PATCH v6 0/9] Multigenerational LRU Framework

On Mon 10-01-22 14:46:08, Jesse Barnes wrote:
> > > > 2. There have been none that came with the testing/benchmarking
> > > >    coverage as this one did. Please point me to some if I'm mistaken,
> > > >    and I'll gladly match them.
> > >
> > > I do appreciate your numbers but you should realize that this is an area
> > > that is really hard to get any conclusive testing for.
> >
> > Fully agreed. That's why we started a new initiative, and we hope more
> > people will follow these practices:
> > 1. All results in this area should be reported with at least standard
> >    deviations, or preferably confidence intervals (see the sketch
> >    below).
> > 2. Real applications should be benchmarked (driven by synthetic load
> >    generators), not just synthetic benchmarks.
> > 3. A wide range of devices should be covered, i.e., servers, desktops,
> >    laptops and phones.
> >
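As an aside (not from any of the reports referenced above), here is a
minimal Python sketch of what point 1 asks for: reporting a benchmark
result with its sample standard deviation and a 95% confidence
interval. The throughput numbers and the 1.96 normal approximation are
illustrative assumptions only.

import math
import statistics

# Hypothetical throughput samples (ops/sec) from repeated runs of the
# same benchmark configuration; the numbers are made up.
samples = [10234, 10198, 10310, 10275, 10221, 10290, 10250, 10180]

n = len(samples)
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)        # sample standard deviation

# 95% confidence interval for the mean (normal approximation; a
# t-distribution would be more appropriate for small n).
half_width = 1.96 * stdev / math.sqrt(n)

print(f"mean   = {mean:.1f} ops/sec")
print(f"stdev  = {stdev:.1f}")
print(f"95% CI = [{mean - half_width:.1f}, {mean + half_width:.1f}]")

Reporting the interval rather than a single number is what makes two
runs of the same benchmark comparable at all.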
> > I'm very confident in saying that our benchmark reports were held to
> > the highest standards. We have worked with MariaDB (company),
> > EnterpriseDB (Postgres), Redis (company), etc. on these reports. They
> > have copies of these reports (PDF version):
> > https://linux-mm.googlesource.com/benchmarks/
> >
> > We welcome any expert in those applications to examine our reports,
> > and we'll be happy to run any other benchmarks, or the same benchmarks
> > with different configurations, that anybody thinks are important and
> > we've missed.
> 
> I really think this gets at the heart of the issue with mm
> development, and is one of the reasons it's been extra frustrating to
> not have an MM conf for the past couple of years; I think sorting out
> how we measure & proceed on changes would be easier done f2f.  E.g.
> concluding with a consensus that if something doesn't regress on X, Y,
> and Z, and has reasonably maintainable and readable code, we should
> merge it and try it out.

I am fully with you on that! I hope we can have LSFMM this year finally.

> But since f2f isn't an option until 2052 at the earliest...

Let's be more optimistic than that ;)

> I understand the desire for an "incremental approach that gets us from
> A->B".  In the abstract it sounds great.  However, with a change like
> this one, I think it's highly likely that such a path would be
> littered with regressions both large and small, and would probably be
> more difficult to reason about than the relatively clean design of
> MGLRU.

There are certainly things that do not make much sense to split up, of
course. On the other hand, the patchset makes a lot of decisions and
assumptions that are documented neither in the code nor in the
changelog. From my past experience these are really problematic from a
long-term maintenance POV. We are struggling with this already because
changelogs tended to be much coarser in the past, yet the code stays
with us, and we have been really "great" at not touching much of it
because "something might break". This results in growing complexity
and further maintenance burden.

> On top of that, I don't think we'll get the kind of user
> feedback we need for something like this *without* merging it.  Yu has
> done a tremendous job collecting data here (and the results are really
> incredible), but I think we can all agree that without extensive
> testing in the field with all sorts of weird codes, we're not going to
> find the problematic behaviors we're concerned about.

This is understood.

> So unless we want to eschew big mm changes entirely (we shouldn't!
> look at net or scheduling for how important big rewrites are to
> progress), I think we should be open to experimenting with new stuff.
> We can always revert if things get too unwieldy.

As long as the patchset doesn't include new user-visible interfaces,
which have proven to be really hard to revert.

> None of this is to say that there may not be lots more comments on the
> code or potential fixes/changes to incorporate before merging; I'm
> mainly arguing about the mindset we should have to changes like this,
> not all the stuff the community is already really good at (i.e.
> testing and reviewing code on a nuts & bolts level).

From my reading of this and previous discussions I have gathered that
there was no opposition just for the sake of it. There have been very
specific questions regarding the implementation and/or future plans to
address issues expressed in the past.

So far I have only managed to check the memcg and oom integration,
finding some issues there. All of them should be reasonably easy to
fix, but it also shows that a deep dive into this is really necessary.

I have also raised questions about the future maintainability of the
resulting code. As you may have noticed, review capacity in the MM
community is lagging behind, and we tend to have more code producers
than reviewers and maintainers. Not to mention other things like page
flags depletion, which is something we have been struggling with for
quite some time already.

All that being said, there is still a lot of work ahead before such a
large change can be merged.
-- 
Michal Hocko
SUSE Labs
