Date:	Thu, 11 Nov 2010 18:19:10 -0500
From:	Kyle Moffett <kyle@...fetthome.net>
To:	john stultz <johnstul@...ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>
Cc:	Alexander Shishkin <virtuoso@...nd.org>, Valdis.Kletnieks@...edu,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Kay Sievers <kay.sievers@...y.org>, Greg KH <gregkh@...e.de>,
	Chris Friesen <chris.friesen@...band.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Kirill A. Shutemov" <kirill@...temov.name>
Subject: Re: [PATCHv6 0/7] system time changes notification

On Thu, Nov 11, 2010 at 17:50, Thomas Gleixner <tglx@...utronix.de> wrote:
> On Thu, 11 Nov 2010, Kyle Moffett wrote:
>> What about maybe adding device nodes for various kinds of "clock"
>> devices?  You could then do:
>>
>> #define CLOCK_FD 0x80000000
>> fd = open("/dev/clock/realtime", O_RDWR);
>> poll(fd);
>> clock_gettime(CLOCK_FD|fd, &ts);
>
> That won't work due to the posix-cputimers occupying the negative
> number space already.

Hmm, it looks like the clock_gettime(2) man pages et al. need
updating; they don't mention anything at all about negative clockids.
The same thing could still be done with, e.g.:

#define CLOCK_FD 0x40000000


On Thu, Nov 11, 2010 at 17:36, john stultz <johnstul@...ibm.com> wrote:
> On Thu, 2010-11-11 at 17:11 -0500, Kyle Moffett wrote:
>> On Thu, Nov 11, 2010 at 16:16, Thomas Gleixner <tglx@...utronix.de> wrote:
>> > 2) Can't we use existing notification stuff like uevents or such ?
>>
>> What about maybe adding device nodes for various kinds of "clock"
>> devices?  You could then do:
>>
>> #define CLOCK_FD 0x80000000
>> fd = open("/dev/clock/realtime", O_RDWR);
>> poll(fd);
>> clock_gettime(CLOCK_FD|fd, &ts);
>
> Ehh.. I'm not a huge fan of creating dynamic ids for what are static
> clocksources (REALTIME, MONOTONIC, etc).
>
> That said...
>
>> [...]
>>
>> This would also enable the folks who want to support things like PHY
>> hardware clocks (for very-low-latency ethernet timestamping).  It
>> would resolve the enumeration problem; instead of 0, 1, 2, ... as
>> constants, they would show up in sysfs and be open()able.  Ideally you
>> would be able to set up ntpd to slew the "realtime" clock by following
>> a particular hardware clock, or vice versa.
>
> This is very similar in spirit to what's being done by Richard Cochran's
> dynamic clock devices code: http://lwn.net/Articles/413332/

Hmm, I've just been poking around and thinking about an extension of
this concept.  Right now we have:

/sys/devices/system/clocksource
/sys/devices/system/clocksource/clocksource0
/sys/devices/system/clocksource/clocksource0/current_clocksource
/sys/devices/system/clocksource/clocksource0/available_clocksource

Could we actually register the separate clocksources (hpet, acpi_pm,
etc) in the device model properly?

Then consider the possibility of creating "virtual clocksources" which
are measured against an existing clocksource.  They could be
independently slewed and adjusted relative to the parent clocksource.
Then the "UTS namespace" feature could also affect the current
clocksource used for CLOCK_MONOTONIC, etc.

You could perform various forms of time-sensitive software testing
without causing problems for a "make" process running elsewhere on the
system.  You could test the operation of various kinds of software
across large jumps or long periods of time (at a highly accelerated
rate) without impacting your development environment.

One really nice example would be testing "ntpd" itself; you could run
a known-good "ntpd" in the base system to maintain a very stable
clock, then simulate all kinds of terrifyingly bad clock hardware and
kernel problems (sudden frequency changes, etc) in a container.  This
kind of stuff can currently only be easily simulated with specialized
hardware.

You could also improve "container-based" virtualization, allowing
perceived "CPU-time" to be slewed based on the cgroup.  I.e., processes
inside a container allocated only "33%" of one CPU might see their
"CPU-time" accrue 3 times faster than a process outside the
container, as though the process were the only thing running on the
system.  Running "top" inside the container might show 100% CPU
even though the hardware is at 33% utilization, or 200% CPU if the
container is currently bursting much higher.

Cheers,
Kyle Moffett
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
