Message-ID: <1533314448.28585.101.camel@surriel.com>
Date:   Fri, 03 Aug 2018 12:40:48 -0400
From:   Rik van Riel <riel@...riel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, kernel-team@...com, mingo@...nel.org,
        luto@...nel.org, x86@...nel.org, efault@....de,
        dave.hansen@...el.com
Subject: Re: [PATCH 11/11] mm,sched: conditionally skip lazy TLB mm refcounting

On Fri, 2018-08-03 at 17:56 +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 06:02:55AM -0400, Rik van Riel wrote:
> > Conditionally skip lazy TLB mm refcounting. When an architecture has
> > CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING enabled, an mm that is used in
> > lazy TLB mode anywhere will get shot down from exit_mmap, and there
> > is no need to incur the cache line bouncing overhead of refcounting
> > a lazy TLB mm.
> > 
> > Implement this by moving the refcounting of a lazy TLB mm to helper
> > functions, which skip the refcounting when it is not necessary.
> > 
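A minimal sketch of what those helpers could look like; the
grab_lazy_mm()/drop_lazy_mm() names match the diff below, but the
bodies here are an assumption based on the changelog description,
not code quoted from the series:

static inline void grab_lazy_mm(struct mm_struct *mm)
{
	/* Assumed sketch: skip the atomic when the architecture
	 * shoots down lazy TLB mms from exit_mmap() instead. */
#ifndef CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING
	mmgrab(mm);
#endif
}

static inline void drop_lazy_mm(struct mm_struct *mm)
{
#ifndef CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING
	mmdrop(mm);
#endif
}
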
> > Deal with use_mm and unuse_mm by fully splitting out the refcounting
> > of the lazy TLB mm a kernel thread may have when entering use_mm from
> > the refcounting of the mm that use_mm is about to start using.
> 
> 
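Along the same lines, a sketch (assumed, not quoted from the patch) of
how use_mm() could split the two references: a regular mmgrab() on the
mm being adopted, and a drop_lazy_mm() of the kernel thread's old lazy
TLB mm:

void use_mm(struct mm_struct *mm)
{
	struct task_struct *tsk = current;
	struct mm_struct *active_mm;

	task_lock(tsk);
	active_mm = tsk->active_mm;
	if (active_mm != mm) {
		/* Real reference on the mm this kthread starts using. */
		mmgrab(mm);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	switch_mm(active_mm, mm, tsk);
	task_unlock(tsk);

	if (active_mm != mm)
		/* Lazy TLB reference; a no-op on opted-in architectures. */
		drop_lazy_mm(active_mm);
}
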
> > @@ -2803,16 +2803,29 @@ context_switch(struct rq *rq, struct task_struct *prev,
> >  	 * membarrier after storing to rq->curr, before returning to
> >  	 * user-space.
> >  	 */
> > +	/*
> > +	 * kernel -> kernel	lazy + transfer active
> > +	 *   user -> kernel	lazy + grab_lazy_mm active
> > +	 *
> > +	 * kernel ->   user	switch + drop_lazy_mm active
> > +	 *   user ->   user	switch
> > +	 */
> > +	if (!mm) {				// to kernel
> >  		next->active_mm = oldmm;
> >  		enter_lazy_tlb(oldmm, next);
> > +
> > +		if (prev->mm)			// from user
> > +			grab_lazy_mm(oldmm);
> > +		else
> > +			prev->active_mm = NULL;
> > +	} else {				// to user
> >  		switch_mm_irqs_off(oldmm, mm, next);
> >  
> > +		if (!prev->mm) {		// from kernel
> > +			/* will drop_lazy_mm() in finish_task_switch(). */
> > +			rq->prev_mm = oldmm;
> > +			prev->active_mm = NULL;
> > +		}
> >  	}
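
For reference, the matching drop on the other side, assumed from the
comment in the hunk above rather than quoted from the series, would
sit in finish_task_switch(), where rq->prev_mm hands the old lazy mm
across the context switch:

	/* In finish_task_switch(): */
	struct mm_struct *mm = rq->prev_mm;

	rq->prev_mm = NULL;
	if (mm)
		drop_lazy_mm(mm);	/* replacing the plain mmdrop(mm) */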
> 
> So this still confuses the heck out of me; and the Changelog doesn't
> seem to even mention it. You still track and swizzle ->active_mm but
> no longer refcount it.
> 
> Why can't we skip the ->active_mm swizzle and keep ->active_mm == ->mm?
> 
> Doing the swizzle but not the refcount just makes me itch.

I am working on that now; it adds another 7-8
patches on top of this series.

The big question is: do we want this optimization
to wait for further cleanups, or should we run with
code that seems stable right now and put additional
cleanups and enhancements on top of it later?

The end result should be the same.

-- 
All Rights Reversed.