Message-ID: <20190129104431.GJ28467@hirez.programming.kicks-ass.net>
Date: Tue, 29 Jan 2019 11:44:31 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: gregkh@...uxfoundation.org, tj@...nel.org, lizefan@...wei.com,
hannes@...xchg.org, axboe@...nel.dk, dennis@...nel.org,
dennisszhou@...il.com, mingo@...hat.com, akpm@...ux-foundation.org,
corbet@....net, cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v3 5/5] psi: introduce psi monitor

On Thu, Jan 24, 2019 at 01:15:18PM -0800, Suren Baghdasaryan wrote:
> static void psi_update_work(struct work_struct *work)
> {
> struct delayed_work *dwork;
> struct psi_group *group;
> + bool first_pass = true;
> + u64 next_update;
> + u32 change_mask;
> + int polling;
> bool nonidle;
> + u64 now;
>
> dwork = to_delayed_work(work);
> group = container_of(dwork, struct psi_group, clock_work);
>
> + now = sched_clock();
> +
> + mutex_lock(&group->update_lock);

actually acquiring a mutex can take a fairly long while; would it not
make more sense to take the @now timestamp _after_ it, instead of
before?
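
Something like the below, i.e. only sample sched_clock() once the lock
is actually held (just a sketch against the quoted hunk, reusing the
declarations from the patch as-is):

	dwork = to_delayed_work(work);
	group = container_of(dwork, struct psi_group, clock_work);

	/* this can block for a while if the lock is contended */
	mutex_lock(&group->update_lock);

	/* take the timestamp only once we hold the lock */
	now = sched_clock();

That way the stall window measured from @now doesn't include the time
spent waiting on the mutex itself.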