Message-ID: <20180509102100.GN12217@hirez.programming.kicks-ass.net>
Date: Wed, 9 May 2018 12:21:00 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-block@...r.kernel.org, cgroups@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...uxfoundation.org>,
Tejun Heo <tj@...nel.org>,
Balbir Singh <bsingharora@...il.com>,
Mike Galbraith <efault@....de>,
Oliver Yang <yangoliver@...com>,
Shakeel Butt <shakeelb@...gle.com>,
xxx xxx <x.qendo@...il.com>,
Taras Kondratiuk <takondra@...co.com>,
Daniel Walker <danielwa@...co.com>,
Vinayak Menon <vinmenon@...eaurora.org>,
Ruslan Ruslichenko <rruslich@...co.com>, kernel-team@...com
Subject: Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and
IO
On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> +/**
> + * psi_memstall_enter - mark the beginning of a memory stall section
> + * @flags: flags to handle nested sections
> + *
> + * Marks the calling task as being stalled due to a lack of memory,
> + * such as waiting for a refault or performing reclaim.
> + */
> +void psi_memstall_enter(unsigned long *flags)
> +{
> + struct rq_flags rf;
> + struct rq *rq;
> +
> + *flags = current->flags & PF_MEMSTALL;
> + if (*flags)
> + return;
> + /*
> + * PF_MEMSTALL setting & accounting needs to be atomic wrt
> + * changes to the task's scheduling state, otherwise we can
> + * race with CPU migration.
> + */
> + local_irq_disable();
> + rq = this_rq();
> + raw_spin_lock(&rq->lock);
> + rq_pin_lock(rq, &rf);
Given the churn in sched.h, you've seen rq_lock() and friends.
Either write this like:
local_irq_disable();
rq = this_rq();
rq_lock(rq, &rf);
Or introduce "rq = this_rq_lock_irq()", which we could also use in
do_sched_yield().
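Something like this, modeled on the existing rq_lock() wrappers in
kernel/sched/sched.h (a sketch only; this_rq_lock_irq() doesn't exist
in the tree yet, the name follows the suggestion above):

	static inline struct rq *this_rq_lock_irq(struct rq_flags *rf)
		__acquires(rq->lock)
	{
		struct rq *rq;

		/* IRQs go off before this_rq(), so we can't migrate away */
		local_irq_disable();
		rq = this_rq();
		rq_lock(rq, rf);

		return rq;
	}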
> + update_rq_clock(rq);
> +
> + current->flags |= PF_MEMSTALL;
> + psi_task_change(current, rq_clock(rq), 0, TSK_MEMSTALL);
> +
> + rq_unpin_lock(rq, &rf);
> + raw_spin_unlock(&rq->lock);
> + local_irq_enable();
That's called rq_unlock_irq().
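For reference, rq_unlock_irq() in kernel/sched/sched.h is equivalent to
the three lines above:

	static inline void rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
		__releases(rq->lock)
	{
		rq_unpin_lock(rq, rf);
		raw_spin_unlock_irq(&rq->lock);
	}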
> +}
> +
> +/**
> + * psi_memstall_leave - mark the end of a memory stall section
> + * @flags: flags to handle nested memdelay sections
> + *
> + * Marks the calling task as no longer stalled due to lack of memory.
> + */
> +void psi_memstall_leave(unsigned long *flags)
> +{
> + struct rq_flags rf;
> + struct rq *rq;
> +
> + if (*flags)
> + return;
> + /*
> + * PF_MEMSTALL clearing & accounting needs to be atomic wrt
> + * changes to the task's scheduling state, otherwise we could
> + * race with CPU migration.
> + */
> + local_irq_disable();
> + rq = this_rq();
> + raw_spin_lock(&rq->lock);
> + rq_pin_lock(rq, &rf);
> +
> + update_rq_clock(rq);
> +
> + current->flags &= ~PF_MEMSTALL;
> + psi_task_change(current, rq_clock(rq), TSK_MEMSTALL, 0);
> +
> + rq_unpin_lock(rq, &rf);
> + raw_spin_unlock(&rq->lock);
> + local_irq_enable();
> +}
Idem.
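Putting both comments together, the enter path would read like this (a
sketch, assuming the suggested this_rq_lock_irq() helper gets
introduced; psi_memstall_leave() collapses the same way):

	void psi_memstall_enter(unsigned long *flags)
	{
		struct rq_flags rf;
		struct rq *rq;

		*flags = current->flags & PF_MEMSTALL;
		if (*flags)
			return;
		/*
		 * PF_MEMSTALL setting & accounting needs to be atomic wrt
		 * changes to the task's scheduling state, otherwise we can
		 * race with CPU migration.
		 */
		rq = this_rq_lock_irq(&rf);

		update_rq_clock(rq);

		current->flags |= PF_MEMSTALL;
		psi_task_change(current, rq_clock(rq), 0, TSK_MEMSTALL);

		rq_unlock_irq(rq, &rf);
	}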