lists.openwall.net | Open Source and information security mailing list archives
Date: Tue, 18 Aug 2020 12:26:16 +0200
From: peterz@...radead.org
To: Chris Down <chris@...isdown.name>
Cc: Michal Hocko <mhocko@...e.com>, Waiman Long <longman@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Vladimir Davydov <vdavydov.dev@...il.com>,
	Jonathan Corbet <corbet@....net>, Alexey Dobriyan <adobriyan@...il.com>,
	Ingo Molnar <mingo@...nel.org>, Juri Lelli <juri.lelli@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, cgroups@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [RFC PATCH 0/8] memcg: Enable fine-grained per process memory control

On Tue, Aug 18, 2020 at 11:17:56AM +0100, Chris Down wrote:
> I'd ask that you understand a bit more about the tradeoffs and intentions of
> the patch before rushing in to declare its failure, considering it works
> just fine :-)
>
> Clamping the maximal time allows the application to take some action to
> remediate the situation, while still being slowed down significantly. 2
> seconds per allocation batch is still absolutely plenty for any use case
> I've come across. If you have evidence it isn't, then present that instead
> of vague notions of "wrongness".

There is no feedback from the freeing rate, therefore it cannot be correct
in maintaining a maximum amount of pages.

0.5 pages / sec is still non-zero, and if the free rate is 0, you'll crawl
across whatever limit was set without any bounds. This is math 101.

It's true that I haven't been paying attention to mm in a while, but I was
one of the original authors of the I/O dirty balancing, so I do think I
understand how these things work.
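[Editor's note: the unbounded-crawl argument can be illustrated with a toy
model (a sketch, not the actual memcg patch code). With the throttle delay
clamped at 2 seconds per allocation batch, the allocation rate has a floor
of 0.5 batches/sec; with no feedback from the freeing rate, a workload that
frees nothing crosses any fixed limit in finite time. All names and numbers
below are hypothetical illustration.]

```python
# Toy model of a clamped allocation throttle (illustrative only).
# The delay per batch is capped at DELAY_MAX seconds, so the allocation
# rate never drops below 1/DELAY_MAX batches per second.

DELAY_MAX = 2.0   # seconds: the clamp discussed in the thread
LIMIT = 100       # pages: a hypothetical memory limit

def pages_after(seconds, free_rate=0.0, batch_pages=1):
    """Net pages in use after `seconds` of continuous allocation at the
    throttled floor rate, minus what the freeing side reclaims."""
    alloc_rate = batch_pages / DELAY_MAX   # floor: 0.5 pages/sec
    return max(0.0, (alloc_rate - free_rate) * seconds)

# With free_rate == 0 the limit is exceeded no matter how large it is:
t_cross = LIMIT * DELAY_MAX                # time to reach LIMIT
assert pages_after(t_cross) >= LIMIT       # 0.5 pages/sec * 200 s = 100
```

A feedback-based controller (as in the I/O dirty throttling mentioned
above) would instead scale the delay with the gap between allocation and
free rates, so the delay grows without a fixed clamp when freeing stalls.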