Date:	Wed, 06 Feb 2008 10:56:06 -0800
From:	Max Krasnyanskiy <maxk@...lcomm.com>
To:	Mark Hounschell <dmarkh@....rr.com>
CC:	LKML <linux-kernel@...r.kernel.org>, torvalds@...ux-foundation.org,
	mingo@...e.hu, Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Paul Jackson <pj@....com>, Mark Hounschell <markh@...pro.net>,
	linux-rt-users@...r.kernel.org
Subject: Re: cpuisol: CPU isolation extensions (take 2)

CC'ing linux-rt-users because I think my explanation below may be of interest to the RT folks.

Mark Hounschell wrote:
> Max Krasnyanskiy wrote:
> 
>> With CPU isolation it's very easy to achieve single-digit usec worst-case and around
>> 200 nsec average response times on off-the-shelf multi-processor/core systems
>> (vanilla kernel plus these patches) even under extreme system load.
> 
> Hi Max, could you elaborate on what sort of events your response times are
> from?

Sure. As I mentioned before, I'm working with our legal team on releasing a hard RT engine
that uses isolated CPUs. You can think of that engine as a giant SW PLL.
It requires a time source that it locks onto. For example, the time source can be the
kernel clock (gtod), some kind of memory-mapped counter, or some external event.
In my case the HW sends me an Ethernet packet every 24 milliseconds.
Once the PLL locks onto the time source the engine executes a predefined "timeline".
The timeline basically specifies tasks with offsets in nanoseconds from the start of
the cycle (i.e. "at 100 nsec run task1", "at 15000 nsec run task2", etc). The tasks are
just callbacks.
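
To make that concrete, here is roughly what a timeline-driven cycle looks like in plain C.
This is only an illustrative sketch with made-up names (timeline_entry, run_cycle, now_ns),
not the actual engine code:

#include <stdint.h>
#include <stddef.h>
#include <time.h>

/* One timeline entry: a callback plus its offset from the cycle start. */
struct timeline_entry {
        uint64_t offset_ns;         /* nsec from the start of the cycle */
        void   (*task)(void *arg);  /* the callback to run */
        void    *arg;
};

static uint64_t now_ns(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Run one cycle: busy-poll the clock and fire each task at its offset.
 * No interrupts involved -- the SW already knows when each event is due. */
static void run_cycle(uint64_t cycle_start_ns,
                      const struct timeline_entry *tl, size_t n)
{
        size_t i;

        for (i = 0; i < n; i++) {
                while (now_ns() < cycle_start_ns + tl[i].offset_ns)
                        ;       /* poll until the scheduled offset */
                tl[i].task(tl[i].arg);
        }
}

The real engine obviously does more than that (locking onto the time source and so on),
but the core of the cycle really is that simple.
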
The jitter in running those tasks is what I meant by "response time". Essentially it's
a polling design where the SW knows precisely when to expect an event. It's not a
general-purpose solution, but it works beautifully for things like wireless PHY/MAC
layers, where the framing structure is very deterministic and must be strictly enforced.
It works for other applications as well once you get your head wrapped around the
idea :), i.e. that you do not get an interrupt for every single event; the SW already
knows when that event will come.
BTW, the engine also enforces the deadlines. For example, it knows right away if a task
is late and it knows exactly how late. That helps in debugging, a lot :).
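
Again just as a sketch of what I mean (check_deadline is a made-up name): because the
schedule is known up front, a single clock read at task start tells you immediately
whether the task is late and by exactly how much.

#include <stdint.h>
#include <stdio.h>

/* Sketch only: compare the actual start time of a task against its
 * scheduled time within the cycle. */
static void check_deadline(unsigned int task_idx,
                           uint64_t actual_start_ns, uint64_t scheduled_ns)
{
        if (actual_start_ns > scheduled_ns) {
                /* In a real hard-RT path you would record the overrun
                 * somewhere cheap and inspect it later, not printf from
                 * the fast path. */
                fprintf(stderr, "task %u late by %llu nsec\n", task_idx,
                        (unsigned long long)(actual_start_ns - scheduled_ns));
        }
}
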

The other option is to run normal pthreads on the isolated CPUs. As long as the threads
are carefully designed not to do certain things, you can get very decent worst-case
latencies (10-12 usec on Opterons and Core2) even with vanilla kernels (patched with the
isolation patches, of course) because all the latency sources have been removed from
those CPUs.
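
For completeness, the userspace side of that option is nothing exotic. A rough sketch
using standard glibc/pthread calls (the CPU number and priority below are arbitrary,
and the CPU itself still has to be isolated via the boot parameters/patches):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *rt_worker(void *arg)
{
        /* Latency-sensitive loop goes here. The "certain things" to avoid
         * are e.g. anything that can block or fault and bounce you off
         * the CPU. */
        return NULL;
}

int main(void)
{
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 80 };  /* arbitrary RT prio */
        cpu_set_t cpus;

        CPU_ZERO(&cpus);
        CPU_SET(3, &cpus);   /* assume CPU 3 was isolated at boot */

        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        /* SCHED_FIFO needs CAP_SYS_NICE (or root). */
        if (pthread_create(&tid, &attr, rt_worker, NULL)) {
                perror("pthread_create");
                return 1;
        }
        pthread_join(tid, NULL);
        return 0;
}
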

Max
