Message-ID: <20191120115142.GA89662@gmail.com>
Date: Wed, 20 Nov 2019 12:51:42 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Jacek Anaszewski <jacek.anaszewski@...il.com>,
Wanpeng Li <wanpengli@...cent.com>,
"Rafael J . Wysocki" <rjw@...ysocki.net>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Rik van Riel <riel@...riel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Yauheni Kaliuta <yauheni.kaliuta@...hat.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Pavel Machek <pavel@....cz>
Subject: Re: [PATCH 1/6] sched/cputime: Support other fields on kcpustat_field()

* Frederic Weisbecker <frederic@...nel.org> wrote:
> Provide support for user, nice, guest and guest_nice fields through
> kcpustat_field().
>
> Whether the delta is accounted to a nice or a non-nice field is decided
> based on a snapshot of the task's nice value taken at the time
> kcpustat_field() is called. If the nice value has changed since the last
> vtime update, the distribution between nice and non-nice cputime may be
> inaccurate.
>
> However, this is considered a minor issue compared to the proper fix,
> which would involve interrupting the target on nice updates, something
> that is undesired on nohz_full CPUs.
>
> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> Cc: Yauheni Kaliuta <yauheni.kaliuta@...hat.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Rik van Riel <riel@...riel.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Wanpeng Li <wanpengli@...cent.com>
> Cc: Ingo Molnar <mingo@...nel.org>
> ---
> kernel/sched/cputime.c | 53 +++++++++++++++++++++++++++++++++---------
> 1 file changed, 42 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index e0cd20693ef5..b2cf544e2109 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -912,11 +912,21 @@ void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)
>          } while (read_seqcount_retry(&vtime->seqcount, seq));
>  }
>
> +static u64 kcpustat_user_vtime(struct vtime *vtime)
> +{
> +        if (vtime->state == VTIME_USER)
> +                return vtime->utime + vtime_delta(vtime);
> +        else if (vtime->state == VTIME_GUEST)
> +                return vtime->gtime + vtime_delta(vtime);
> +        return 0;
> +}
> +
>  static int kcpustat_field_vtime(u64 *cpustat,
> -                                struct vtime *vtime,
> +                                struct task_struct *tsk,
>                                  enum cpu_usage_stat usage,
>                                  int cpu, u64 *val)
>  {
> +        struct vtime *vtime = &tsk->vtime;
>          unsigned int seq;
>          int err;
>
> @@ -946,9 +956,36 @@ static int kcpustat_field_vtime(u64 *cpustat,
>
>          *val = cpustat[usage];
>
> -        if (vtime->state == VTIME_SYS)
> -                *val += vtime->stime + vtime_delta(vtime);
> -
> +        /*
> +         * Nice vs unnice cputime accounting may be inaccurate if
> +         * the nice value has changed since the last vtime update.
> +         * But the proper fix would involve interrupting the target on
> +         * nice updates, which is a no-go on nohz_full.
Well, we actually already interrupt the target in the sys_nice() and
sys_setpriority() etc. syscall variants: they call set_user_nice(), which
calls resched_curr(), so the task is sent an IPI and runs through a
reschedule.
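
In other words, the existing path already looks roughly like this (a
simplified sketch of the call chain, with locking and scheduler details
elided):

    sys_setpriority() / sys_nice()
      -> set_user_nice(p, nice)
           -> p->static_prio = NICE_TO_PRIO(nice);
           -> resched_curr(rq);              /* sets TIF_NEED_RESCHED */
                -> smp_send_reschedule(cpu); /* IPI if 'p' runs on another CPU */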
But ... I do agree that this kind of granularity in nice/non-nice
accounting doesn't really matter in practice: changing nice values is a
relatively low-frequency operation on most systems.
Nevertheless, the comment should probably be updated to reflect this.
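
( Side note: with this series a reader such as fs/proc/stat.c should then
  be able to query all the new fields through the same interface - an
  illustrative sketch, assuming the kcpustat_field() interface from this
  series:

      u64 user  = kcpustat_field(&kcpustat_cpu(cpu), CPUTIME_USER,  cpu);
      u64 nice  = kcpustat_field(&kcpustat_cpu(cpu), CPUTIME_NICE,  cpu);
      u64 guest = kcpustat_field(&kcpustat_cpu(cpu), CPUTIME_GUEST, cpu);
)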
Thanks,
Ingo