Message-ID: <20200922154951.GX29330@paulmck-ThinkPad-P72>
Date:   Tue, 22 Sep 2020 08:49:51 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Uladzislau Rezki <urezki@...il.com>,
        LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
        linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Theodore Y . Ts'o" <tytso@....edu>,
        Joel Fernandes <joel@...lfernandes.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
        Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC-PATCH 2/4] mm: Add __rcu_alloc_page_lockless() func.

On Tue, Sep 22, 2020 at 09:50:02AM +0200, Michal Hocko wrote:
> [Cc Mel - the thread starts http://lkml.kernel.org/r/20200918194817.48921-1-urezki@gmail.com]
> 
> On Mon 21-09-20 21:48:19, Uladzislau Rezki wrote:
> > Hello, Michal.
> > 
> > > >
> > > > Yes, I do well remember that you are unhappy with this approach.
> > > > Unfortunately, thus far, there is no solution that makes all developers
> > > > happy.  You might be glad to hear that we are also looking into other
> > > > solutions, each of which makes some other developers unhappy.  So we
> > > > are at least not picking on you alone.  :-/
> > > 
> > > No worries, I do not feel like a whipping boy here. But do expect me
> > > to argue against the approach. I would also appreciate some more
> > > information on the other attempts and why they have failed, e.g. why
> > > pre-allocation is not an option that works well enough in most
> > > reasonable workloads.
> > Pre-allocating has some drawbacks:
> > 
> > a) It is impossible to predict how many pages will be required to
> >    cover a demand that is controlled by different workloads on
> >    various systems.
> 
> Yes, this is not trivial, but it is not rocket science either. Remember
> that you are relying on a very dumb watermark-based pcp pool from the
> allocator. Mimicking a similar implementation shouldn't be all that
> hard, and as a bonus you will get your own pool, which doesn't affect
> other page allocator users as much.
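
A minimal sketch of the watermark-based private pool described above.
All identifiers here (krc_page_pool, krc_pool_get_page, KRC_LOW,
KRC_HIGH) are hypothetical rather than existing kernel APIs, and
locking/preemption handling is omitted for brevity:

#include <linux/gfp.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

#define KRC_LOW   2	/* schedule a refill below this many cached pages */
#define KRC_HIGH  8	/* refill target */

struct krc_page_pool {
	void *pages[KRC_HIGH];
	int nr;				/* pages currently cached */
	struct work_struct refill_work;
};

static DEFINE_PER_CPU(struct krc_page_pool, krc_page_pool);

/* Lockless consumer: hands out a cached page, never calls the allocator. */
static void *krc_pool_get_page(void)
{
	struct krc_page_pool *p = raw_cpu_ptr(&krc_page_pool);
	void *page = NULL;

	if (p->nr > 0)
		page = p->pages[--p->nr];
	if (p->nr < KRC_LOW)
		schedule_work(&p->refill_work);
	return page;
}

/* Refill runs in process context, where GFP_KERNEL allocation is legal. */
static void krc_pool_refill(struct work_struct *w)
{
	struct krc_page_pool *p =
		container_of(w, struct krc_page_pool, refill_work);

	while (p->nr < KRC_HIGH) {
		void *page = (void *)__get_free_page(GFP_KERNEL);

		if (!page)
			break;
		p->pages[p->nr++] = page;
	}
}

The point of such a private pool is that depleting it hurts only its own
user, not every other pcp consumer on the system.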
> 
> > b) Memory overhead, since we do not know how many pages should be
> >    preloaded: 100, 200 or 300.
> 
> Does anybody who really needs this optimization actually care about 300
> pages?

That would be 100-300 (maybe more) pages -per- -CPU-, so yes, some people
will care quite deeply about this.

							Thanx, Paul
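
(For scale, a back-of-the-envelope estimate assuming 4 KiB pages:
300 pages/CPU x 4 KiB = ~1.2 MiB per CPU, i.e. about 2.4 MiB on a
2-CPU phone but roughly 75 MiB on a 64-CPU server, which is why the
per-CPU multiplier matters.)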

> > As for memory overhead, it is important to reduce it because of
> > embedded devices like phones, where a low-memory condition is a
> > big issue. In that sense pre-allocation is something that we would
> > strongly like to avoid.
> 
> How big "machines" are we talking about here? I would expect that
> really tiny machines would have a hard time filling up thousands of
> pages with pointers to free...
> 
> Would a scaling similar to the page allocator's be feasible? Really, I
> mostly care about the shared nature of the pcp allocator lists, which
> one user can easily monopolize with this API.
> 
> > > I would also appreciate some more thoughts on why we need to
> > > optimize for heavy abusers of RCU (like close(open) extremes).
> > > 
> > I think there is a small misunderstanding here. Please note that this
> > is not only about performance and corner cases. There is also the
> > single-argument support of kvfree_rcu(ptr), where maintaining an
> > array of pointers in time is needed. The fallback of the
> > single-argument case is extremely slow; see the sketch below.
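
A rough sketch of why that fallback is slow. Here add_ptr_to_bulk_array()
is a hypothetical stand-in for the real array-maintenance code, while
synchronize_rcu() and kvfree() are actual kernel primitives:

#include <linux/mm.h>
#include <linux/rcupdate.h>

static bool add_ptr_to_bulk_array(void *ptr);	/* hypothetical helper */

static void kvfree_rcu_one_arg(void *ptr)
{
	/*
	 * Fast path: stash the bare pointer into a page-backed array
	 * and free the whole batch after a single grace period.
	 */
	if (add_ptr_to_bulk_array(ptr))
		return;

	/*
	 * Fallback: no page is available for the array. A headless
	 * object carries no rcu_head that could be queued, so the
	 * caller must block for a full grace period (potentially many
	 * milliseconds) before freeing one single pointer.
	 */
	synchronize_rcu();
	kvfree(ptr);
}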
> 
> This should be part of the changelog.
> > 
> > Single-argument details are here: https://lkml.org/lkml/2020/4/28/1626
> 
> Error 501
> 
> > > > > I strongly agree with Thomas http://lkml.kernel.org/r/87tux4kefm.fsf@nanos.tec.linutronix.de
> > > > > that this optimization is not aiming at reasonable workloads.
> > > > > Really, go with a pre-allocated buffer and fall back to whatever
> > > > > slow path you already have. Exposing more internals of the
> > > > > allocator is not going to do any good for long-term
> > > > > maintainability.
> > > > 
> > > > I suggest that you carefully re-read the thread following that email.
> > > 
> > > I clearly remember Thomas not being particularly happy that you optimize
> > > for a corner case. I do not remember there being a consensus that this
> > > is the right approach. There was some consensus that this is better than
> > > a gfp flag. Still quite bad though if you ask me.
> > > 
> > > > Given a choice between making users unhappy and making developers
> > > > unhappy, I will side with the users each and every time.
> > > 
> > > Well, let me rephrase. It is not only about me (as a developer) being
> > > unhappy but also all the side effects this would have for users when
> > > performance of their favorite workload declines for no apparent reason
> > > just because pcp caches are depleted by an unrelated process.
> > >
> > If it is depleted, we have a special worker that recharges it. On the
> > other hand, the pcplist can be depleted by its nature, which _is_ not
> > wrong. But we secure it just in case, since you had a concern about it.
> 
> pcp free lists should only ever get empty when we run out of memory and
> need to reclaim. Otherwise they are constantly refilled/rebalanced on
> demand. The fact that you are refilling them from outside just suggests
> that you are operating at the wrong layer. Really, create your own pool
> of pages and rebalance it based on the workload.
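
A sketch of the workload-driven rebalancing suggested above; every
identifier here is hypothetical, and the sampling period and clamps
are arbitrary placeholders:

#include <linux/atomic.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

static atomic_t krc_demand;	/* pages handed out since the last sample */
static int krc_target = 8;	/* per-CPU cache size to aim for */

static void krc_rebalance(struct work_struct *w);
static DECLARE_DELAYED_WORK(krc_rebalance_work, krc_rebalance);

static void krc_rebalance(struct work_struct *w)
{
	int used = atomic_xchg(&krc_demand, 0);

	/* Grow toward observed demand, decay when idle, clamp both ways. */
	if (used > krc_target)
		krc_target = min(used, 64);
	else
		krc_target = max(krc_target / 2, 2);

	/* ... top up or trim each per-CPU pool toward krc_target ... */

	schedule_delayed_work(&krc_rebalance_work, HZ);	/* sample 1/sec */
}

Growing toward recent demand while decaying when idle would keep such a
pool small on quiet embedded systems yet responsive under
close(open)-style churn, without touching the shared pcp lists at all.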
> 
> > Could you please specify a real test case or workload you are talking about?
> 
> I am not a performance expert, but essentially any memory-allocator-heavy
> workload might notice. I am pretty sure Mel would tell you more.
> 
> -- 
> Michal Hocko
> SUSE Labs
