Message-ID: <87ilswwh1x.fsf@email.froward.int.ebiederm.org>
Date: Wed, 02 Mar 2022 08:43:54 -0600
From: "Eric W. Biederman" <ebiederm@...ssion.com>
To: Luis Chamberlain <mcgrof@...nel.org>
Cc: Shakeel Butt <shakeelb@...gle.com>,
Colin Ian King <colin.king@...onical.com>,
NeilBrown <neilb@...e.de>, Vasily Averin <vvs@...tuozzo.com>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Linux MM <linux-mm@...ck.org>, netdev@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Tejun Heo <tj@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Eric Dumazet <edumazet@...gle.com>,
Kees Cook <keescook@...omium.org>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>, linux-kernel@...r.kernel.org,
kernel@...nvz.org
Subject: Re: [PATCH RFC] net: memcg accounting for veth devices

Luis Chamberlain <mcgrof@...nel.org> writes:
> On Tue, Mar 01, 2022 at 02:50:06PM -0600, Eric W. Biederman wrote:
>> I really have not looked at this pids controller.
>>
>> So I am not certain I understand your example here but I hope I have
>> answered your question.
>
> During experimentation with the above stress-ng test case, I saw tons
> of threads just waiting to do exit:

You increment the count of concurrent threads after a no-return
function in do_exit().  Since the increment is never reached, the count
only ever goes down, and eventually the warning prints.
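
If the intent is for an exiting task to release its slot, the increment
has to happen before the final no-return call.  Something like this
(untested sketch, reusing the names from the patch below):

	lockdep_free_task(tsk);

	/* Release the slot before the no-return call, or it leaks. */
	atomic_inc(&exit_concurrent_max);
	wake_up(&exit_wq);

	do_task_dead();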
> diff --git a/kernel/exit.c b/kernel/exit.c
> index 80c4a67d2770..653ca7ebfb58 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -730,11 +730,24 @@ static void check_stack_usage(void)
> static inline void check_stack_usage(void) {}
> #endif
>
> +/* Approx more than twice max_threads */
> +#define MAX_EXIT_CONCURRENT (1<<17)
> +static atomic_t exit_concurrent_max = ATOMIC_INIT(MAX_EXIT_CONCURRENT);
> +static DECLARE_WAIT_QUEUE_HEAD(exit_wq);
> +
> void __noreturn do_exit(long code)
> {
> struct task_struct *tsk = current;
> int group_dead;
>
> + if (atomic_dec_if_positive(&exit_concurrent_max) < 0) {
> + pr_warn_ratelimited("exit: exit_concurrent_max (%u) close to 0 (max : %u), throttling...",
> + atomic_read(&exit_concurrent_max),
> + MAX_EXIT_CONCURRENT);
> + wait_event(exit_wq,
> + atomic_dec_if_positive(&exit_concurrent_max) >= 0);
> + }
> +
> /*
> * We can get here from a kernel oops, sometimes with preemption off.
> * Start by checking for critical errors.
> @@ -881,6 +894,9 @@ void __noreturn do_exit(long code)
>
> lockdep_free_task(tsk);
> do_task_dead();

The function do_task_dead() never returns, so the increment and
wake_up below can never execute.

> +
> + atomic_inc(&exit_concurrent_max);
> + wake_up(&exit_wq);
> }
> EXPORT_SYMBOL_GPL(do_exit);
>
> diff --git a/kernel/ucount.c b/kernel/ucount.c
> index 4f5613dac227..980ffaba1ac5 100644
> --- a/kernel/ucount.c
> +++ b/kernel/ucount.c
> @@ -238,6 +238,8 @@ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid,
> long max;
> tns = iter->ns;
> max = READ_ONCE(tns->ucount_max[type]);
> + if (atomic_long_read(&iter->ucount[type]) > max/16)
> + cond_resched();
> if (!atomic_long_inc_below(&iter->ucount[type], max))
> goto fail;
> }
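
For experimenting with this dec-if-positive/wait/inc/wake-up pattern
outside the kernel, a rough userspace analogue (hypothetical, using C11
atomics and pthreads in place of the kernel primitives) might look
like:

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <unistd.h>

	#define MAX_CONCURRENT 4

	static atomic_int slots = MAX_CONCURRENT;
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t wq = PTHREAD_COND_INITIALIZER;

	/*
	 * Userspace stand-in for atomic_dec_if_positive(): decrement
	 * only if the old value is positive; return the decremented
	 * value on success, -1 otherwise.
	 */
	static int dec_if_positive(atomic_int *v)
	{
		int old = atomic_load(v);

		while (old > 0)
			if (atomic_compare_exchange_weak(v, &old, old - 1))
				return old - 1;
		return -1;
	}

	static void throttle_enter(void)	/* top of do_exit() */
	{
		if (dec_if_positive(&slots) >= 0)
			return;
		pthread_mutex_lock(&lock);
		while (dec_if_positive(&slots) < 0)
			pthread_cond_wait(&wq, &lock);
		pthread_mutex_unlock(&lock);
	}

	static void throttle_exit(void)		/* the part that must run */
	{
		/*
		 * Increment and signal under the mutex so a waiter
		 * cannot re-check the count and then sleep past the
		 * wake-up; the kernel's wait_event() handles the
		 * equivalent race internally.
		 */
		pthread_mutex_lock(&lock);
		atomic_fetch_add(&slots, 1);
		pthread_cond_signal(&wq);
		pthread_mutex_unlock(&lock);
	}

	static void *worker(void *arg)
	{
		throttle_enter();
		usleep(1000);		/* stand-in for the exit work */
		throttle_exit();
		return NULL;
	}

	int main(void)
	{
		pthread_t t[64];

		for (int i = 0; i < 64; i++)
			pthread_create(&t[i], NULL, worker, NULL);
		for (int i = 0; i < 64; i++)
			pthread_join(t[i], NULL);
		printf("slots back to %d\n", atomic_load(&slots));
		return 0;
	}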
Eric