Date:	Wed, 10 Jun 2015 10:21:07 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Hugh Dickins <hughd@...gle.com>,
	Minchan Kim <minchan@...nel.org>,
	Dave Hansen <dave.hansen@...el.com>,
	Andi Kleen <andi@...stfloor.org>,
	H Peter Anvin <hpa@...or.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries
 after unmapping pages


* Mel Gorman <mgorman@...e.de> wrote:

> On Wed, Jun 10, 2015 at 09:47:04AM +0200, Ingo Molnar wrote:
> > 
> > * Mel Gorman <mgorman@...e.de> wrote:
> > 
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -1289,6 +1289,18 @@ enum perf_event_task_context {
> > >  	perf_nr_task_contexts,
> > >  };
> > >  
> > > +/* Track pages that require TLB flushes */
> > > +struct tlbflush_unmap_batch {
> > > +	/*
> > > +	 * Each bit set is a CPU that potentially has a TLB entry for one of
> > > +	 * the PFNs being flushed. See set_tlb_ubc_flush_pending().
> > > +	 */
> > > +	struct cpumask cpumask;
> > > +
> > > +	/* True if any bit in cpumask is set */
> > > +	bool flush_required;
> > > +};
> > > +
> > >  struct task_struct {
> > >  	volatile long state;	/* -1 unrunnable, 0 runnable, >0 stopped */
> > >  	void *stack;
> > > @@ -1648,6 +1660,10 @@ struct task_struct {
> > >  	unsigned long numa_pages_migrated;
> > >  #endif /* CONFIG_NUMA_BALANCING */
> > >  
> > > +#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
> > > +	struct tlbflush_unmap_batch *tlb_ubc;
> > > +#endif
> > 
> > Please embed this constant-size structure in task_struct directly so that the 
> > whole per-task allocation overhead goes away:
> > 
> 
> That puts a structure (72 bytes in the config I used) within the task struct 
> even when it's not required. On a lightly loaded system, direct reclaim will not 
> be active, and for some processes it'll never be active. It's very wasteful.

For certain values of 'very'.

 - 72 bytes suggests that you have NR_CPUS set to 512 or so? On a kernel sized for 
   such large systems with 1000 active tasks, we are talking about +72K of RAM...

 - Furthermore, by embedding it, it gets packed better with neighboring task_struct 
   fields, while allocating it dynamically wastes a separate cache line (see the 
   sketch below).

 - Plus, by allocating it separately you spend two cachelines on it: each slab 
   object will be at least cacheline aligned, and a 72-byte allocation gets rounded 
   up to the 128-byte size class. So when this gets triggered you've just wasted 
   some more RAM.

 - I mean, if it had a dynamic size, or was arguably huge, a separate allocation 
   might be justified. But this is just a cpumask and a boolean!

 - The cpumask itself will have to become dynamically allocated if you increase the 
   NR_CPUS count any more than that - in which case embedding the structure is the 
   right choice again.
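
As a minimal sketch of the embedded variant being suggested here, reusing the field 
names from the quoted hunk (surrounding task_struct members are elided; this is 
illustrative, not the final patch):

  /* Track pages that require TLB flushes */
  struct tlbflush_unmap_batch {
          /*
           * Each bit set is a CPU that potentially has a stale TLB entry for
           * one of the PFNs being flushed.
           */
          struct cpumask cpumask;

          /* True if any bit in cpumask is set */
          bool flush_required;
  };

  struct task_struct {
          /* ... other members ... */
  #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
          /* Embedded directly: no separate kmalloc(), no extra cache line */
          struct tlbflush_unmap_batch tlb_ubc;
  #endif
          /* ... other members ... */
  };

Callers would then presumably use &current->tlb_ubc directly instead of allocating 
and NULL-checking a pointer; and if NR_CPUS ever grows to the point where the 
cpumask has to go off-stack (cpumask_var_t), the embedded structure stays small 
either way.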

Thanks,

	Ingo
