Date:   Wed, 25 Sep 2019 16:42:21 +0200
From:   Frederic Weisbecker <frederic@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Wanpeng Li <wanpengli@...cent.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Yauheni Kaliuta <yauheni.kaliuta@...hat.com>,
        Ingo Molnar <mingo@...nel.org>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 04/25] vtime: Spare a seqcount lock/unlock cycle on
 context switch

Sorry to answer 10 months later. You have certainly lost track of this; I
have. I'm re-issuing this patchset, but in smaller pieces to make the
review easier.

In case you still care, I'm answering your comments below. But you can skip
that and wait for the new version that I'm about to post.


On Tue, Nov 20, 2018 at 02:25:12PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 14, 2018 at 03:45:48AM +0100, Frederic Weisbecker wrote:
> 
> So I definitely like avoiding that superfluous atomic op, however:
> 
> > @@ -730,19 +728,25 @@ static void vtime_account_guest(struct task_struct *tsk,
> >  	}
> >  }
> >  
> > +static void __vtime_account_kernel(struct task_struct *tsk,
> > +				   struct vtime *vtime)
> 
> Your last patch removed a __function, and now you're adding one back :/

Yes, in fact I removed a __function to avoid having two in the end.

I can't think of a better name. Maybe vtime_account_kernel_locked(), but
it's not even locked, it's a seqcount write section.
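
To make the naming point explicit: a seqcount write section only makes
concurrent readers retry, it doesn't exclude other writers by itself
(exclusion comes from the surrounding context, i.e. the task switch path).
A minimal sketch, not taken from the patch, with a hypothetical field
update:

static void vtime_write_example(struct vtime *vtime, u64 delta)
{
	write_seqcount_begin(&vtime->seqcount);	/* concurrent readers will retry */
	vtime->stime += delta;			/* hypothetical field update */
	write_seqcount_end(&vtime->seqcount);
}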

> 
> >  {
> >  	/* We might have scheduled out from guest path */
> >  	if (tsk->flags & PF_VCPU)
> >  		vtime_account_guest(tsk, vtime);
> >  	else
> >  		vtime_account_system(tsk, vtime);
> > +}
> > +
> > +void vtime_account_kernel(struct task_struct *tsk)
> > +{
> > +	struct vtime *vtime = &tsk->vtime;
> > +
> > +	if (!vtime_delta(vtime))
> > +		return;
> > +
> 
> See here the fast path (is it worth it?)

It might indeed be worth testing whether that fast path is hit often. With
any sensible clock we should have at least a few nanoseconds to account.
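
One way to check would be a throwaway counter on the early return,
something like the sketch below (hypothetical instrumentation, not part of
the patchset; the counter name is made up and would have to be exposed via
debugfs or read from a debugger):

static atomic64_t vtime_fastpath_hits;	/* hypothetical counter */

void vtime_account_kernel(struct task_struct *tsk)
{
	struct vtime *vtime = &tsk->vtime;

	if (!vtime_delta(vtime)) {
		atomic64_inc(&vtime_fastpath_hits);	/* fast path taken */
		return;
	}

	write_seqcount_begin(&vtime->seqcount);
	__vtime_account_kernel(tsk, vtime);
	write_seqcount_end(&vtime->seqcount);
}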

> 
> > +	write_seqcount_begin(&vtime->seqcount);
> > +	__vtime_account_kernel(tsk, vtime);
> >  	write_seqcount_end(&vtime->seqcount);
> >  }
> 
> > +void vtime_task_switch_generic(struct task_struct *prev)
> >  {
> >  	struct vtime *vtime = &prev->vtime;
> 
> And observe a distinct lack of that same fast path..

Right, but in any case we have to take the seqcount write section here,
since at least vtime->state has to be set to VTIME_INACTIVE.
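
For the record, the reason the write section can't be skipped is the
reader side, which samples state together with the other fields in a retry
loop, roughly like this (a simplified sketch of the usual pattern in
kernel/sched/cputime.c):

static enum vtime_state vtime_sample_state(struct vtime *vtime)
{
	enum vtime_state state;
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&vtime->seqcount);
		state = vtime->state;
		/* ... other vtime fields are read here as well ... */
	} while (read_seqcount_retry(&vtime->seqcount, seq));

	return state;
}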

> 
> >  
> >  	write_seqcount_begin(&vtime->seqcount);
> > +	if (is_idle_task(prev))
> > +		vtime_account_idle(prev);
> > +	else
> > +		__vtime_account_kernel(prev, vtime);
> >  	vtime->state = VTIME_INACTIVE;
> >  	write_seqcount_end(&vtime->seqcount);
> >  
> > -- 
> > 2.7.4
> > 
