Message-ID: <20210405201807.4ee7778d@gandalf.local.home>
Date: Mon, 5 Apr 2021 20:18:07 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Waiman Long <longman@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Phil Auld <pauld@...hat.com>,
Daniel Thompson <daniel.thompson@...aro.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] sched/debug: Use sched_debug_lock to serialize use
of cgroup_path[] only
On Mon, 5 Apr 2021 19:42:03 -0400
Waiman Long <longman@...hat.com> wrote:
> +/*
> + * All the print_cpu() callers from sched_debug_show() will be allowed
> + * to contend for sched_debug_lock and use group_path[] as their SEQ_printf()
> + * calls will be much faster. However only one print_cpu() caller from
> + * sysrq_sched_debug_show() which outputs to the console will be allowed
> + * to use group_path[]. Another parallel console writer will have to use
> + * a shorter stack buffer instead. Since the console output will be garbled
> + * anyway, truncation of some cgroup paths shouldn't be a big issue.
> + */
> +#define SEQ_printf_task_group_path(m, tg, fmt...) \
> +{ \
> + unsigned long flags; \
> + int token = m ? TOKEN_NA \
> + : xchg_acquire(&console_token, TOKEN_NONE); \
> + \
> + if (token == TOKEN_NONE) { \
> + char buf[128]; \
> + task_group_path(tg, buf, sizeof(buf)); \
> + SEQ_printf(m, fmt, buf); \
> + } else { \
> + spin_lock_irqsave(&sched_debug_lock, flags); \
> + task_group_path(tg, group_path, sizeof(group_path)); \
> + SEQ_printf(m, fmt, group_path); \
> + spin_unlock_irqrestore(&sched_debug_lock, flags); \
> + if (token == TOKEN_ACQUIRED) \
> + smp_store_release(&console_token, token); \
> + } \
> }
> #endif
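
For anyone following along, below is a rough userspace sketch of the token
handoff the macro above is doing, with C11 atomics and a pthread mutex
standing in for xchg_acquire()/smp_store_release() and sched_debug_lock.
The TOKEN_* values, the initial console_token value, the buffer sizes and
the fake_group_path() helper are all made up for illustration; they are not
the kernel definitions from the patch.

/*
 * Rough userspace sketch of the console-token handoff, not the kernel code.
 * C11 atomics and a pthread mutex stand in for xchg_acquire(),
 * smp_store_release() and sched_debug_lock.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum { TOKEN_NONE, TOKEN_ACQUIRED, TOKEN_NA };

/* Assumption: the token starts out free to be taken by a console writer. */
static atomic_int console_token = TOKEN_ACQUIRED;
static pthread_mutex_t sched_debug_lock = PTHREAD_MUTEX_INITIALIZER;
static char group_path[4096];		/* shared big buffer */

/* Stand-in for task_group_path(): fill buf with a cgroup-style path. */
static void fake_group_path(char *buf, size_t len)
{
	snprintf(buf, len, "/a/fairly/long/cgroup/path");
}

static void print_group_path(FILE *m)
{
	/* seq_file writers (m != NULL) never contend for the console token. */
	int token = m ? TOKEN_NA
		      : atomic_exchange_explicit(&console_token, TOKEN_NONE,
						 memory_order_acquire);

	if (token == TOKEN_NONE) {
		/*
		 * Another console writer already holds the token: fall back
		 * to a short on-stack buffer and accept truncation.
		 */
		char buf[128];

		fake_group_path(buf, sizeof(buf));
		fprintf(stderr, "%s\n", buf);
	} else {
		/*
		 * Either a seq_file writer (TOKEN_NA) or the one console
		 * writer that won the token (TOKEN_ACQUIRED): safe to use
		 * the big shared buffer under the lock.
		 */
		pthread_mutex_lock(&sched_debug_lock);
		fake_group_path(group_path, sizeof(group_path));
		fprintf(m ? m : stderr, "%s\n", group_path);
		pthread_mutex_unlock(&sched_debug_lock);

		if (token == TOKEN_ACQUIRED)
			/* Hand the token back with release semantics. */
			atomic_store_explicit(&console_token, token,
					      memory_order_release);
	}
}

int main(void)
{
	print_group_path(stdout);	/* seq_file-like caller */
	print_group_path(NULL);		/* console-like caller  */
	return 0;
}

The acquire/release pair means at most one console writer at a time sees
TOKEN_ACQUIRED and owns group_path[]; a second console writer racing in sees
TOKEN_NONE and falls back to the truncated on-stack buffer, which is what the
comment in the quoted hunk is describing.
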
And you said my suggestion was complex!

I'll let others review this.

-- Steve