Message-ID: <47beeaa6-74aa-35d0-2808-e5c54be854a6@codeaurora.org>
Date: Wed, 23 May 2018 18:49:25 +0530
From: Vinayak Menon <vinmenon@...eaurora.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-block@...r.kernel.org, cgroups@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...uxfoundation.org>,
Tejun Heo <tj@...nel.org>,
Balbir Singh <bsingharora@...il.com>,
Mike Galbraith <efault@....de>,
Oliver Yang <yangoliver@...com>,
Shakeel Butt <shakeelb@...gle.com>,
xxx xxx <x.qendo@...il.com>,
Taras Kondratiuk <takondra@...co.com>,
Daniel Walker <danielwa@...co.com>,
Ruslan Ruslichenko <rruslich@...co.com>, kernel-team@...com
Subject: Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

On 5/23/2018 6:47 PM, Johannes Weiner wrote:
> On Wed, May 09, 2018 at 04:33:24PM +0530, Vinayak Menon wrote:
>> On 5/8/2018 2:31 AM, Johannes Weiner wrote:
>>> + /* Kick the stats aggregation worker if it's gone to sleep */
>>> + if (!delayed_work_pending(&group->clock_work))
>> This causes a crash when the work is scheduled before system_wq is up. In my case, the first
>> schedule was called from kthreadd, and I had to do the following to make it work:
>> if (keventd_up() && !delayed_work_pending(&group->clock_work))
>>
>>> + schedule_delayed_work(&group->clock_work, MY_LOAD_FREQ);
> I was trying to figure out how this is possible, and it didn't make
> sense because we do initialize the system_wq way before kthreadd.
>
> Did you by any chance backport this to a pre-4.10 kernel which does
> not have 3347fa092821 ("workqueue: make workqueue available early
> during boot") yet?
Sorry, I did not mention that I was testing on a 4.9 kernel. It's clear now. Thanks.
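
For anyone else carrying this as a pre-4.10 backport (i.e. without 3347fa092821,
where keventd_up() is still available), a minimal sketch of the guarded kick; the
wrapper name psi_schedule_clock_work() is just illustrative, and struct psi_group
and MY_LOAD_FREQ are taken from the quoted patch context:

#include <linux/workqueue.h>

/*
 * Only kick the stats aggregation worker once the system workqueue exists.
 * keventd_up() returns true after system_wq has been initialized, so early
 * calls (e.g. from kthreadd) simply skip the kick; a later psi_task_change()
 * will schedule the work once workqueues are available.
 */
static void psi_schedule_clock_work(struct psi_group *group)
{
	if (keventd_up() && !delayed_work_pending(&group->clock_work))
		schedule_delayed_work(&group->clock_work, MY_LOAD_FREQ);
}
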
>>> +void psi_task_change(struct task_struct *task, u64 now, int clear, int set)
>>> +{
>>> + struct cgroup *cgroup, *parent;
>> unused variables
> They're used in the next patch, I'll fix that up.
>
> Thanks