Message-ID: <50b88e33-110f-c67a-671a-47c67017a563@amazon.de>
Date: Sat, 27 Oct 2018 01:44:33 +0200
From: Jan H. Schönherr <jschoenh@...zon.de>
To: Subhra Mazumdar <subhra.mazumdar@...cle.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org
Subject: [RFC 00/60] Coscheduling for Linux
On 19/10/2018 02.26, Subhra Mazumdar wrote:
> Hi Jan,
Hi. Sorry for the delay.
> On 9/7/18 2:39 PM, Jan H. Schönherr wrote:
>> The collective context switch from one coscheduled set of tasks to another
>> -- while fast -- is not atomic. If a use-case needs the absolute guarantee
>> that all tasks of the previous set have stopped executing before any task
>> of the next set starts executing, an additional hand-shake/barrier needs to
>> be added.
>>
> Do you know how long the delay is? I.e., what is the overlap time when a
> thread of the new group starts executing on one HT while a thread of
> another group is still running on the other HT?
The delay is roughly equivalent to the IPI latency, if we're just talking
about coscheduling at SMT level: one sibling decides to schedule another
group, sends an IPI to the other sibling(s), and may already start
executing a task of that other group, before the IPI is received on the
other end.
Now, there are some things that may delay processing an IPI, but in those
cases the target CPU isn't executing user code.
I've yet to produce some current numbers for SMT-only coscheduling. An
older ballpark number I have is about 2 microseconds for the collective
context switch of one hierarchy level, but take that with a grain of salt.
Regards
Jan