Message-ID: <22a0156e-f74f-51c8-b7fd-9b5a375d7c81@kernel.dk>
Date: Tue, 7 Sep 2021 11:18:21 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Tejun Heo <tj@...nel.org>, Roman Gushchin <guro@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
kernel test robot <oliver.sang@...el.com>,
Vasily Averin <vvs@...tuozzo.com>,
Shakeel Butt <shakeelb@...gle.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Alexey Dobriyan <adobriyan@...il.com>,
Andrei Vagin <avagin@...il.com>,
Borislav Petkov <bp@...en8.de>, Borislav Petkov <bp@...e.de>,
Christian Brauner <christian.brauner@...ntu.com>,
Dmitry Safonov <0x7f454c46@...il.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
"J. Bruce Fields" <bfields@...ldses.org>,
Jeff Layton <jlayton@...nel.org>,
Jiri Slaby <jirislaby@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
Michal Hocko <mhocko@...nel.org>,
Oleg Nesterov <oleg@...hat.com>,
Serge Hallyn <serge@...lyn.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Yutian Yang <nglaive@...il.com>,
Zefan Li <lizefan.x@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
kernel test robot <lkp@...el.com>,
"Huang, Ying" <ying.huang@...el.com>,
Feng Tang <feng.tang@...el.com>,
Zhengjun Xing <zhengjun.xing@...ux.intel.com>
Subject: Re: [memcg] 0f12156dff: will-it-scale.per_process_ops -33.6% regression
On 9/7/21 11:14 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Sep 07, 2021 at 10:11:21AM -0700, Roman Gushchin wrote:
>> There are two polar cases:
>> 1) a big number of relatively short-lived allocations, whose lifetime is well
>> bounded (e.g. by the lifetime of a task),
>> 2) a relatively small number of long-lived allocations, whose lifetime
>> is potentially indefinite (e.g. struct mount).
>>
>> We can't use the same approach for both cases, otherwise we'll run into either
>> performance or garbage collection problems (which also lead to performance
>> problems, but delayed).
>
> Wouldn't a front cache which expires after some seconds catch both cases?
A purely time-based approach might be problematic, as you can allocate a
LOT of data in a short amount of time. The heuristic probably needs to be
a hybrid of "too much time has passed" OR "we're over the front cache
threshold in terms of deferred accounting". But yes, I don't see why
we'd necessarily need different approaches for short vs long lifetimes.
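
To make that concrete, the check I have in mind would look roughly like the
sketch below. The names (front_cache, FLUSH_PERIOD_NS, FLUSH_BYTES_MAX) are
made up for illustration only, they don't correspond to any existing
interface, and the limits are arbitrary:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative limits: flush after ~2 seconds or 512KB of deferred charges. */
#define FLUSH_PERIOD_NS	(2ULL * 1000 * 1000 * 1000)
#define FLUSH_BYTES_MAX	(512ULL * 1024)

struct front_cache {
	uint64_t last_flush_ns;		/* when charges were last pushed to the memcg */
	uint64_t deferred_bytes;	/* accounting not yet charged */
};

/* Flush if too much time has passed OR too much accounting is deferred. */
static bool front_cache_should_flush(const struct front_cache *fc,
				     uint64_t now_ns)
{
	return (now_ns - fc->last_flush_ns) >= FLUSH_PERIOD_NS ||
	       fc->deferred_bytes >= FLUSH_BYTES_MAX;
}

The time bound keeps a mostly idle cache from sitting on charges
indefinitely, while the byte bound keeps a burst of allocations from
deferring too much accounting at once.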
--
Jens Axboe