Message-ID: <20080428084904.GA27056@osiris.boeblingen.de.ibm.com>
Date:	Mon, 28 Apr 2008 10:49:04 +0200
From:	Heiko Carstens <heiko.carstens@...ibm.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Gautham R Shenoy <ego@...ibm.com>, Paul Jackson <pj@....com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: missing locking in sched_domains code

On Mon, Apr 28, 2008 at 10:32:22AM +0200, Ingo Molnar wrote:
> 
> * Heiko Carstens <heiko.carstens@...ibm.com> wrote:
> 
> >  /* doms_cur_mutex serializes access to doms_cur[] array */
> >  static DEFINE_MUTEX(doms_cur_mutex);
> >  
> > +static inline void lock_doms_cur(void)
> > +{
> > +	mutex_lock(&doms_cur_mutex);
> > +}
> 
> > @@ -7813,8 +7811,10 @@ int arch_reinit_sched_domains(void)
> >  	int err;
> >  
> >  	get_online_cpus();
> > +	lock_doms_cur();
> 
> thanks, that looks a lot cleaner already. May I ask for another
> thing, if you are hacking on this anyway? Please get rid of the
> lock_doms_cur() complication now that it's not conditional - an
> open-coded mutex_lock(&doms_cur_mutex) looks more readable - it gives
> a clear idea about what's happening. Also, please rename
> doms_cur_mutex to something less tongue-twisting - such as
> sched_domains_mutex. Hm?

Your wish is my command:

Subject: [PATCH] sched: fix locking in arch_reinit_sched_domains

From: Heiko Carstens <heiko.carstens@...ibm.com>

Concurrent calls to detach_destroy_domains and arch_init_sched_domains
were prevented by the old scheduler subsystem cpu hotplug mutex. When
that mutex got converted to get_online_cpus(), the locking got broken:
get_online_cpus() only takes a read-side reference against cpu hotplug,
so, unlike before, several processes can now concurrently enter the
critical sections that the old mutex used to protect.

So use the already present doms_cur_mutex, renamed to
sched_domains_mutex below, to protect these sections again.
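
To illustrate the race (userspace analogy only, not kernel code; all
names in the sketch below are made up for illustration):
get_online_cpus() behaves like the read side of an rwlock, so any
number of tasks can hold it at the same time, and only a real mutex
serializes the domain rebuild:

	#include <pthread.h>
	#include <stdio.h>

	static pthread_rwlock_t hotplug = PTHREAD_RWLOCK_INITIALIZER; /* ~ cpu hotplug lock */
	static pthread_mutex_t domains = PTHREAD_MUTEX_INITIALIZER;   /* ~ sched_domains_mutex */
	static int doms_state;          /* ~ the doms_cur[] state both callers rebuild */

	static void *reinit(void *arg)
	{
		(void)arg;
		pthread_rwlock_rdlock(&hotplug); /* ~ get_online_cpus(): shared, many holders */
		pthread_mutex_lock(&domains);    /* without this, two threads rebuild at once */
		doms_state++;                    /* ~ detach_destroy_domains()  */
		doms_state++;                    /* ~ arch_init_sched_domains() */
		pthread_mutex_unlock(&domains);
		pthread_rwlock_unlock(&hotplug); /* ~ put_online_cpus() */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, reinit, NULL);
		pthread_create(&b, NULL, reinit, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		printf("%d\n", doms_state);      /* always 4 with the mutex held */
		return 0;
	}

The read side alone never excluded a second reader, which is exactly
the situation arch_reinit_sched_domains ended up in after the
get_online_cpus() conversion.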

Cc: Gautham R Shenoy <ego@...ibm.com>
Cc: Ingo Molnar <mingo@...e.hu>
Cc: Paul Jackson <pj@....com>
Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>
---
 kernel/sched.c |   22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -309,7 +309,7 @@ static DEFINE_PER_CPU(struct rt_rq, init
 static DEFINE_SPINLOCK(task_group_lock);
 
-/* doms_cur_mutex serializes access to doms_cur[] array */
-static DEFINE_MUTEX(doms_cur_mutex);
+/* sched_domains_mutex serializes access to doms_cur[] array */
+static DEFINE_MUTEX(sched_domains_mutex);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 #ifdef CONFIG_USER_SCHED
@@ -358,21 +358,9 @@ static inline void set_task_rq(struct ta
 #endif
 }
 
-static inline void lock_doms_cur(void)
-{
-	mutex_lock(&doms_cur_mutex);
-}
-
-static inline void unlock_doms_cur(void)
-{
-	mutex_unlock(&doms_cur_mutex);
-}
-
 #else
 
 static inline void set_task_rq(struct task_struct *p, unsigned int cpu) { }
-static inline void lock_doms_cur(void) { }
-static inline void unlock_doms_cur(void) { }
 
 #endif	/* CONFIG_GROUP_SCHED */
 
@@ -7755,7 +7743,7 @@ void partition_sched_domains(int ndoms_n
 {
 	int i, j;
 
-	lock_doms_cur();
+	mutex_lock(&sched_domains_mutex);
 
 	/* always unregister in case we don't destroy any domains */
 	unregister_sched_domain_sysctl();
@@ -7804,7 +7792,7 @@ match2:
 
 	register_sched_domain_sysctl();
 
-	unlock_doms_cur();
+	mutex_unlock(&sched_domains_mutex);
 }
 
 #if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
@@ -7813,8 +7801,10 @@ int arch_reinit_sched_domains(void)
 	int err;
 
 	get_online_cpus();
+	mutex_lock(&sched_domains_mutex);
 	detach_destroy_domains(&cpu_online_map);
 	err = arch_init_sched_domains(&cpu_online_map);
+	mutex_unlock(&sched_domains_mutex);
 	put_online_cpus();
 
 	return err;
--
