Open Source and information security mailing list archives
Message-ID: <20160524170417.GA11670@linux.vnet.ibm.com>
Date: Tue, 24 May 2016 10:04:17 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: peterz@...radead.org
Cc: umgwanakikbuti@...il.com, mingo@...nel.org, linux-kernel@...r.kernel.org,
	bsegall@...gle.com, matt@...eblueprint.co.uk, morten.rasmussen@....com,
	pjt@...gle.com, tglx@...utronix.de, byungchul.park@....com, ahh@...gle.com
Subject: Re: [patch] sched/fair: Move se->vruntime normalization state into
 struct sched_entity

On Mon, May 23, 2016 at 2:19 AM +0200, Peter Zijlstra wrote:
> On Sun, May 22, 2016 at 09:00:01AM +0200, Mike Galbraith wrote:
> > On Sat, 2016-05-21 at 21:00 +0200, Mike Galbraith wrote:
> > > On Sat, 2016-05-21 at 16:04 +0200, Mike Galbraith wrote:
> > > >
> > > > Wakees that were not migrated/normalized eat an unwanted min_vruntime,
> > > > and likely take a size XXL latency hit.  Big box running master bled
> > > > profusely under heavy load until I turned TTWU_QUEUE off.
> >
> > May as well make it official and against master.today.  Fly or die,
> > little patchlet.
> >
> > sched/fair: Move se->vruntime normalization state into struct sched_entity
>
> Does this work?

This gets rid of the additional lost wakeups introduced during the merge
window, thank you!  The pre-existing low-probability lost wakeups still
persist, sad to say.  Can't have everything, I guess.

Tested-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>

> ---
>  include/linux/sched.h |  1 +
>  kernel/sched/core.c   | 18 +++++++++++-------
>  2 files changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 1b43b45a22b9..a2001e01b3df 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1534,6 +1534,7 @@ struct task_struct {
>  	unsigned sched_reset_on_fork:1;
>  	unsigned sched_contributes_to_load:1;
>  	unsigned sched_migrated:1;
> +	unsigned sched_remote_wakeup:1;
>  	unsigned :0; /* force alignment to the next boundary */
>
>  	/* unserialized, strictly 'current' */
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 404c0784b1fc..7f2cae4620c7 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1768,13 +1768,15 @@ void sched_ttwu_pending(void)
>  	cookie = lockdep_pin_lock(&rq->lock);
>
>  	while (llist) {
> +		int wake_flags = 0;
> +
>  		p = llist_entry(llist, struct task_struct, wake_entry);
>  		llist = llist_next(llist);
> -		/*
> -		 * See ttwu_queue(); we only call ttwu_queue_remote() when
> -		 * its a x-cpu wakeup.
> -		 */
> -		ttwu_do_activate(rq, p, WF_MIGRATED, cookie);
> +
> +		if (p->sched_remote_wakeup)
> +			wake_flags = WF_MIGRATED;
> +
> +		ttwu_do_activate(rq, p, wake_flags, cookie);
>  	}
>
>  	lockdep_unpin_lock(&rq->lock, cookie);
> @@ -1819,10 +1821,12 @@ void scheduler_ipi(void)
>  	irq_exit();
>  }
>
> -static void ttwu_queue_remote(struct task_struct *p, int cpu)
> +static void ttwu_queue_remote(struct task_struct *p, int cpu, int wake_flags)
>  {
>  	struct rq *rq = cpu_rq(cpu);
>
> +	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
> +
>  	if (llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list)) {
>  		if (!set_nr_if_polling(rq->idle))
>  			smp_send_reschedule(cpu);
> @@ -1869,7 +1873,7 @@ static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
>  #if defined(CONFIG_SMP)
>  	if (sched_feat(TTWU_QUEUE) && !cpus_share_cache(smp_processor_id(), cpu)) {
>  		sched_clock_cpu(cpu); /* sync clocks x-cpu */
> -		ttwu_queue_remote(p, cpu);
> +		ttwu_queue_remote(p, cpu, wake_flags);
>  		return;
>  	}
>  #endif