Message-ID: <87060553-2e09-2e2a-13a2-a91345d6df30@codeaurora.org>
Date: Wed, 9 May 2018 16:33:24 +0530
From: Vinayak Menon <vinmenon@...eaurora.org>
To: Johannes Weiner <hannes@...xchg.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-block@...r.kernel.org,
cgroups@...r.kernel.org
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...uxfoundation.org>,
Tejun Heo <tj@...nel.org>,
Balbir Singh <bsingharora@...il.com>,
Mike Galbraith <efault@....de>,
Oliver Yang <yangoliver@...com>,
Shakeel Butt <shakeelb@...gle.com>,
xxx xxx <x.qendo@...il.com>,
Taras Kondratiuk <takondra@...co.com>,
Daniel Walker <danielwa@...co.com>,
Ruslan Ruslichenko <rruslich@...co.com>, kernel-team@...com
Subject: Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and
IO
On 5/8/2018 2:31 AM, Johannes Weiner wrote:
> +static void psi_group_update(struct psi_group *group, int cpu, u64 now,
> +			     unsigned int clear, unsigned int set)
> +{
> +	enum psi_state state = PSI_NONE;
> +	struct psi_group_cpu *groupc;
> +	unsigned int *tasks;
> +	unsigned int to, bo;
> +
> +	groupc = per_cpu_ptr(group->cpus, cpu);
> +	tasks = groupc->tasks;
> +
> +	/* Update task counts according to the set/clear bitmasks */
> +	for (to = 0; (bo = ffs(clear)); to += bo, clear >>= bo) {
> +		int idx = to + (bo - 1);
> +
> +		if (tasks[idx] == 0 && !psi_bug) {
> +			printk_deferred(KERN_ERR "psi: task underflow! cpu=%d idx=%d tasks=[%u %u %u %u]\n",
> +					cpu, idx, tasks[0], tasks[1],
> +					tasks[2], tasks[3]);
> +			psi_bug = 1;
> +		}
> +		tasks[idx]--;
> +	}
> +	for (to = 0; (bo = ffs(set)); to += bo, set >>= bo)
> +		tasks[to + (bo - 1)]++;
> +
> +	/* Time in which tasks wait for the CPU */
> +	state = PSI_NONE;
> +	if (tasks[NR_RUNNING] > 1)
> +		state = PSI_SOME;
> +	time_state(&groupc->res[PSI_CPU], state, now);
> +
> +	/* Time in which tasks wait for memory */
> +	state = PSI_NONE;
> +	if (tasks[NR_MEMSTALL]) {
> +		if (!tasks[NR_RUNNING] ||
> +		    (cpu_curr(cpu)->flags & PF_MEMSTALL))
> +			state = PSI_FULL;
> +		else
> +			state = PSI_SOME;
> +	}
> +	time_state(&groupc->res[PSI_MEM], state, now);
> +
> +	/* Time in which tasks wait for IO */
> +	state = PSI_NONE;
> +	if (tasks[NR_IOWAIT]) {
> +		if (!tasks[NR_RUNNING])
> +			state = PSI_FULL;
> +		else
> +			state = PSI_SOME;
> +	}
> +	time_state(&groupc->res[PSI_IO], state, now);
> +
> +	/* Time in which tasks are non-idle, to weigh the CPU in summaries */
> +	if (groupc->nonidle)
> +		groupc->nonidle_time += now - groupc->nonidle_start;
> +	groupc->nonidle = tasks[NR_RUNNING] ||
> +		tasks[NR_IOWAIT] || tasks[NR_MEMSTALL];
> +	if (groupc->nonidle)
> +		groupc->nonidle_start = now;
> +
> +	/* Kick the stats aggregation worker if it's gone to sleep */
> +	if (!delayed_work_pending(&group->clock_work))
This causes a crash when the work is scheduled before system_wq is up; in my case, the first
schedule was called from kthreadd. I had to do the following to make it work:

	if (keventd_up() && !delayed_work_pending(&group->clock_work))
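
keventd_up() is just a NULL check on system_wq, so on trees where that
helper has been removed the same guard could be open-coded. A minimal,
untested sketch:

	/*
	 * system_wq is NULL until workqueue_init_early() has run;
	 * don't queue the aggregation work before then, since an
	 * early schedule (e.g. from kthreadd) crashes on the NULL wq.
	 */
	if (system_wq && !delayed_work_pending(&group->clock_work))
		schedule_delayed_work(&group->clock_work, MY_LOAD_FREQ);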
> +		schedule_delayed_work(&group->clock_work, MY_LOAD_FREQ);
> +}
> +
> +void psi_task_change(struct task_struct *task, u64 now, int clear, int set)
> +{
> +	struct cgroup *cgroup, *parent;
cgroup and parent are unused variables here.
Thanks,
Vinayak