Message-ID: <1355190830.17101.280.camel@gandalf.local.home>
Date:	Mon, 10 Dec 2012 20:53:50 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	frank.rowand@...sony.com
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	linux-rt-users <linux-rt-users@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Carsten Emde <C.Emde@...dl.org>,
	John Kacur <jkacur@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Clark Williams <clark.williams@...il.com>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: [RFC][PATCH RT 3/4] sched/rt: Use IPI to trigger RT task push
 migration instead of pulling

On Mon, 2012-12-10 at 17:15 -0800, Frank Rowand wrote:

> I should have also mentioned some previous experience using IPIs to
> avoid runq lock contention on wake up.  Someone encountered IPI
> storms when using the TTWU_QUEUE feature, thus it defaults to off
> for CONFIG_PREEMPT_RT_FULL:
> 
>   #ifndef CONFIG_PREEMPT_RT_FULL
>   /*
>    * Queue remote wakeups on the target CPU and process them
>    * using the scheduler IPI. Reduces rq->lock contention/bounces.
>    */
>   SCHED_FEAT(TTWU_QUEUE, true)
>   #else
>   SCHED_FEAT(TTWU_QUEUE, false)
>   #endif
> 

Interesting, but I'm wondering whether this also sends an IPI for every
wakeup. If you have 1000 tasks waking up on another CPU, this could
potentially send out 1000 IPIs. The number of IPIs here looks to be the
number of tasks waking up, and perhaps even more than that, as there
could be multiple instances that try to wake up the same task.
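
To make that concrete, here is a rough, self-contained model (plain
userspace C, not kernel code; all names are made up for illustration)
of an IPI-per-remote-wakeup scheme, where the IPI count scales with the
number of tasks woken:

  #include <stdio.h>

  #define NR_TASKS 1000

  static int ipis_sent;

  /* stand-in for queueing one wakeup on a remote CPU and kicking it */
  static void queue_remote_wakeup(int cpu)
  {
          (void)cpu;      /* the real thing would enqueue on cpu's wake list */
          ipis_sent++;    /* one IPI per queued wakeup */
  }

  int main(void)
  {
          /* 1000 tasks all waking up on CPU 1 */
          for (int task = 0; task < NR_TASKS; task++)
                  queue_remote_wakeup(1);

          printf("IPIs sent: %d\n", ipis_sent);   /* prints 1000 */
          return 0;
  }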

Now with this patch set, the number of IPIs is limited to the number of
CPUs. If you have 4 CPUs, you'll get a storm of at most 3 IPIs. That's a
big difference.

Now we could even add a flag, do a test_and_set on it, and send out an
IPI iff the flag wasn't set before. Have the target CPU clear the flag
and then do the pushing. This would further limit the number of IPIs
that need to be sent. I didn't add this yet, because I wanted to test
the current code first, and only add this if there turns out to be an
issue with too many IPIs.
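
For illustration only, a minimal standalone sketch of that flag idea
(plain C11 atomics rather than the kernel's primitives; all names are
invented), where only the first of many push requests aimed at a CPU
actually costs an IPI:

  #include <stdio.h>
  #include <stdatomic.h>

  #define NR_CPUS     4
  #define NR_REQUESTS 1000

  static atomic_bool push_pending[NR_CPUS];
  static int ipis_sent;

  /* caller side: ask the target CPU to push its overloaded RT tasks */
  static void request_push(int cpu)
  {
          /* test_and_set: send the IPI only if the flag wasn't already set */
          if (!atomic_exchange(&push_pending[cpu], true))
                  ipis_sent++;            /* stand-in for sending the IPI */
  }

  /* target side: the IPI handler clears the flag, then does the pushing */
  static void handle_push_ipi(int cpu)
  {
          atomic_store(&push_pending[cpu], false);
          /* ... push RT tasks here ... */
  }

  int main(void)
  {
          /* 1000 push requests all aimed at CPU 1 before it handles the IPI */
          for (int i = 0; i < NR_REQUESTS; i++)
                  request_push(1);

          printf("IPIs sent: %d\n", ipis_sent);   /* prints 1, not 1000 */
          handle_push_ipi(1);
          return 0;
  }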

-- Steve


