Date:	Fri, 6 Jul 2007 10:41:44 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>,
	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC] Thread Migration Preemption

* Andi Kleen (andi@...stfloor.org) wrote:
> Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca> writes:
> 
> > Thread Migration Preemption
> > 
> > This patch adds the ability to protect critical sections from migration to
> > another CPU without disabling preemption.
> 
> Good idea.
> 
> I sometimes think we could have avoided _much_ trouble
> if that had always been the default for processes running
> in kernel space.
>  

I hadn't thought about making it the default behavior for kernel space,
but yes, it would make sense.

> > This will be useful to minimize the amount of preemption disabling for the -rt
> > patch. It will help leverage the improvements brought by the local_t types in
> > asm/local.h (see Documentation/local_ops.txt). Note that updates to variables
> > protected by migration_disable() must either be atomic or be protected from
> > concurrent updates by other threads.
> > 
> > Typical use:
> > 
> > migration_disable();
> > local_inc(&__get_cpu_var(my_local_t_var));
> > migration_enable();
> 
> It seems strange to have a new interface for this. We already 
> have get_cpu()/put_cpu(). So why not use that?
> 

Because get_cpu()/put_cpu() implicitly ensure mutual exclusion between threads
on the same CPU by disabling preemption. migration_disable() does not, which is
why I only use local atomic operations on these variables.
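
To make the distinction concrete, here is a rough sketch of the two usage
patterns; my_counter and my_local_t_var are just placeholder per-CPU variables
(e.g. DEFINE_PER_CPU(long, my_counter) and DEFINE_PER_CPU(local_t,
my_local_t_var)), not names from the patch:

	/* get_cpu()/put_cpu(): preemption is disabled, so a plain update
	 * is safe against other threads running on this CPU. */
	int cpu = get_cpu();
	per_cpu(my_counter, cpu)++;
	put_cpu();

	/* migration_disable()/migration_enable(): the task stays on this
	 * CPU but can still be preempted by another thread on it, so the
	 * update must be a local atomic operation. */
	migration_disable();
	local_inc(&__get_cpu_var(my_local_t_var));
	migration_enable();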


> >  	unsigned long		flags;		/* low level flags */
> >  	__u32			cpu;
> >  	__s32			preempt_count;	/* 0 => preemptable, <0 => BUG */
> > +	int			migration_count;/* 0: can migrate, <0 => BUG */
> 
> Can you turn preempt_count into a short first and use a short here too? That
> should be enough, and cache line usage wouldn't be increased. That's OK on
> x86; on RISCs an int might be faster.
> 

Using a short instead of an int on modern x86 will cause pipeline stalls due
to partial register use. It also won't really reduce cache line usage: the
field is followed by an unsigned long, so gcc's structure alignment will put
padding where the integer was, which buys us nothing space-wise. If you can
show me a case on some architecture where it improves performance, I will be
more than happy to change it, but as things stand, an int seems at least as
efficient as a short, and more so on x86 because of the partial register
stalls.
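
If you want to double-check the layout for a given architecture and compiler,
a quick stand-in like the following (simplified fields, not the real struct
thread_info; "next_field" plays the role of the unsigned long that follows)
prints where the padding actually ends up:

	#include <stddef.h>
	#include <stdio.h>

	/* Two simplified layouts: int counters vs. short counters,
	 * each followed by an unsigned long. */
	struct with_ints {
		unsigned int	cpu;
		int		preempt_count;
		int		migration_count;
		unsigned long	next_field;
	};

	struct with_shorts {
		unsigned int	cpu;
		short		preempt_count;
		short		migration_count;
		unsigned long	next_field;
	};

	int main(void)
	{
		/* Print total size and the offset of the following field
		 * for each layout on this architecture/compiler. */
		printf("ints:   size %zu, next_field at offset %zu\n",
		       sizeof(struct with_ints),
		       offsetof(struct with_ints, next_field));
		printf("shorts: size %zu, next_field at offset %zu\n",
		       sizeof(struct with_shorts),
		       offsetof(struct with_shorts, next_field));
		return 0;
	}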

Regards,

Mathieu

> 
> -Andi

-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68