Message-ID: <20181120132512.GQ2131@hirez.programming.kicks-ass.net>
Date: Tue, 20 Nov 2018 14:25:12 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Wanpeng Li <wanpengli@...cent.com>,
Thomas Gleixner <tglx@...utronix.de>,
Yauheni Kaliuta <yauheni.kaliuta@...hat.com>,
Ingo Molnar <mingo@...nel.org>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 04/25] vtime: Spare a seqcount lock/unlock cycle on
context switch

On Wed, Nov 14, 2018 at 03:45:48AM +0100, Frederic Weisbecker wrote:
So I definitely like avoiding that superfluous atomic op, however:
> @@ -730,19 +728,25 @@ static void vtime_account_guest(struct task_struct *tsk,
> }
> }
>
> +static void __vtime_account_kernel(struct task_struct *tsk,
> + struct vtime *vtime)
Your last patch removed a __function, and now you're adding one back :/
> {
> /* We might have scheduled out from guest path */
> if (tsk->flags & PF_VCPU)
> vtime_account_guest(tsk, vtime);
> else
> vtime_account_system(tsk, vtime);
> +}
> +
> +void vtime_account_kernel(struct task_struct *tsk)
> +{
> + struct vtime *vtime = &tsk->vtime;
> +
> + if (!vtime_delta(vtime))
> + return;
> +
Note the fast path here (is it worth it?)
> + write_seqcount_begin(&vtime->seqcount);
> + __vtime_account_kernel(tsk, vtime);
> write_seqcount_end(&vtime->seqcount);
> }
>
> +void vtime_task_switch_generic(struct task_struct *prev)
> {
> struct vtime *vtime = &prev->vtime;
And observe a distinct lack of that same fast path.. (sketch at the end)
>
> write_seqcount_begin(&vtime->seqcount);
> + if (is_idle_task(prev))
> + vtime_account_idle(prev);
> + else
> + __vtime_account_kernel(prev, vtime);
> vtime->state = VTIME_INACTIVE;
> write_seqcount_end(&vtime->seqcount);
>
> --
> 2.7.4
>
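Something like the below is what I mean; completely untested sketch, and
it assumes vtime_delta() is fine to check under the seqcount here. Note
that the VTIME_INACTIVE state update still has to happen unconditionally,
so only the accounting itself can take the fast path:

void vtime_task_switch_generic(struct task_struct *prev)
{
	struct vtime *vtime = &prev->vtime;

	write_seqcount_begin(&vtime->seqcount);
	/*
	 * Skip the accounting, but not the state update, when
	 * there is nothing to account.
	 */
	if (vtime_delta(vtime)) {
		if (is_idle_task(prev))
			vtime_account_idle(prev);
		else
			__vtime_account_kernel(prev, vtime);
	}
	vtime->state = VTIME_INACTIVE;
	write_seqcount_end(&vtime->seqcount);
}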