Message-Id: <20200520164037.e3598bc902e39415f4c263e7@linux-foundation.org>
Date: Wed, 20 May 2020 16:40:37 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Chris Down <chris@...isdown.name>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH] mm, memcg: unify reclaim retry limits with page
allocator
On Wed, 20 May 2020 17:31:42 +0100 Chris Down <chris@...isdown.name> wrote:
> Reclaim retries have been set to 5 since the beginning of time in
> 66e1707bc346 ("Memory controller: add per cgroup LRU and reclaim").
> However, we now have a generally agreed-upon standard for page reclaim:
> MAX_RECLAIM_RETRIES (currently 16), added many years later in
> 0a0337e0d1d1 ("mm, oom: rework oom detection").
>
> In the absence of a compelling reason to declare an OOM earlier in memcg
> context than page allocator context, it seems reasonable to supplant
> MEM_CGROUP_RECLAIM_RETRIES with MAX_RECLAIM_RETRIES, making the page
> allocator and memcg internals more similar in semantics when reclaim
> fails to produce results, avoiding premature OOMs or throttling.
>
> ...
>
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -73,9 +73,6 @@ EXPORT_SYMBOL(memory_cgrp_subsys);
>
>  struct mem_cgroup *root_mem_cgroup __read_mostly;
>
> -/* The number of times we should retry reclaim failures before giving up. */
hm, what tree is this against?
> -#define MEM_CGROUP_RECLAIM_RETRIES 5
> -
>  /* Socket memory accounting disabled? */
>  static bool cgroup_memory_nosocket;
>
> @@ -2386,7 +2383,7 @@ void mem_cgroup_handle_over_high(void)
>  	unsigned long pflags;
>  	unsigned long nr_reclaimed;
>  	unsigned int nr_pages = current->memcg_nr_pages_over_high;
> -	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
> +	int nr_retries = MAX_RECLAIM_RETRIES;
I can't seem to find a tree in which mem_cgroup_handle_over_high() has
a local `nr_retries'.
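
For reference, the limit in question bounds how many reclaim passes the
memcg charge/over-high paths make before falling back to OOM or
throttling. A minimal sketch of that retry pattern follows; the
try_reclaim_some() helper is hypothetical, standing in for the real
try_to_free_mem_cgroup_pages() call, and only the constant values (5 and
16) come from the changelog:

#include <stdbool.h>	/* in-kernel, bool comes from <linux/types.h> */

#define MAX_RECLAIM_RETRIES 16	/* page allocator limit cited in the changelog */

/* Hypothetical stand-in for try_to_free_mem_cgroup_pages(). */
static bool try_reclaim_some(unsigned int nr_pages)
{
	return false;	/* placeholder: pretend reclaim made no progress */
}

/* Sketch of the retry loop the constant controls. */
static bool reclaim_until_progress(unsigned int nr_pages)
{
	int nr_retries = MAX_RECLAIM_RETRIES;	/* previously MEM_CGROUP_RECLAIM_RETRIES (5) */

	while (nr_retries--) {
		if (try_reclaim_some(nr_pages))
			return true;	/* enough progress, stop retrying */
	}

	return false;	/* give up: caller proceeds to OOM or throttling */
}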