Message-ID: <xm26egf34cje.fsf@sword-of-the-dawn.mtv.corp.google.com>
Date:	Thu, 03 Dec 2015 10:23:01 -0800
From:	bsegall@...gle.com
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Waiman Long <Waiman.Long@....com>, Ingo Molnar <mingo@...hat.com>,
	linux-kernel@...r.kernel.org, Yuyang Du <yuyang.du@...el.com>,
	Paul Turner <pjt@...gle.com>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>
Subject: Re: [PATCH v2 2/3] sched/fair: Move hot load_avg into its own cacheline

Peter Zijlstra <peterz@...radead.org> writes:

> On Thu, Dec 03, 2015 at 09:56:02AM -0800, bsegall@...gle.com wrote:
>> Peter Zijlstra <peterz@...radead.org> writes:
>
>> > @@ -7402,11 +7405,12 @@ void __init sched_init(void)
>> >  #endif /* CONFIG_RT_GROUP_SCHED */
>> >  
>> >  #ifdef CONFIG_CGROUP_SCHED
>> > +	task_group_cache = KMEM_CACHE(task_group, 0);
>> > +
>> >  	list_add(&root_task_group.list, &task_groups);
>> >  	INIT_LIST_HEAD(&root_task_group.children);
>> >  	INIT_LIST_HEAD(&root_task_group.siblings);
>> >  	autogroup_init(&init_task);
>> > -
>> >  #endif /* CONFIG_CGROUP_SCHED */
>> >  
>> >  	for_each_possible_cpu(i) {
>> > --- a/kernel/sched/sched.h
>> > +++ b/kernel/sched/sched.h
>> > @@ -248,7 +248,12 @@ struct task_group {
>> >  	unsigned long shares;
>> >  
>> >  #ifdef	CONFIG_SMP
>> > -	atomic_long_t load_avg;
>> > +	/*
>> > +	 * load_avg can be heavily contended at clock tick time, so put
>> > +	 * it in its own cacheline separated from the fields above which
>> > +	 * will also be accessed at each tick.
>> > +	 */
>> > +	atomic_long_t load_avg ____cacheline_aligned;
>> >  #endif
>> >  #endif
>> >  
>> 
>> This loses the cacheline alignment of task_group; is that ok?
>
> I'm a bit dense (it's late); can you spell that out? Did you mean me
> killing SLAB_HWCACHE_ALIGN? That should not matter because:
>
> #define KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\
> 		sizeof(struct __struct), __alignof__(struct __struct),\
> 		(__flags), NULL)
>
> picks up the alignment explicitly.
>
> And struct task_group having one cacheline-aligned member means that
> the alignment of the composite object (the struct proper) must be an
> integer multiple of that member's alignment (typically exactly 1x).

Ah, yeah, I forgot about this, my fault.
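
A minimal userspace sketch of the alignment argument above, assuming a
64-byte cache line (task_group_like and CACHELINE are illustrative names,
not kernel code): giving one member cacheline alignment forces
__alignof__ of the whole struct up to a multiple of it, which is exactly
the value KMEM_CACHE() passes to kmem_cache_create().

	#include <stdalign.h>
	#include <stdatomic.h>
	#include <stdio.h>

	#define CACHELINE 64	/* assumed L1 cache line size */

	struct task_group_like {
		unsigned long shares;	/* fields also touched at tick time */
		/* stands in for ____cacheline_aligned on load_avg */
		alignas(CACHELINE) atomic_long load_avg;
	};

	int main(void)
	{
		/* the composite object inherits the strictest member
		 * alignment, so SLAB_HWCACHE_ALIGN is not needed */
		_Static_assert(alignof(struct task_group_like) % CACHELINE == 0,
			       "struct alignment is a multiple of the member's");
		printf("alignof(struct task_group_like) = %zu\n",
		       alignof(struct task_group_like));
		return 0;
	}

Compiled with any C11 compiler, this prints 64, matching the reasoning
that KMEM_CACHE(task_group, 0) still hands out cacheline-aligned objects.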