Date:   Sat, 09 Apr 2022 22:32:21 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Kurt Kanzenbach <kurt@...utronix.de>,
        John Stultz <john.stultz@...aro.org>,
        Stephen Boyd <sboyd@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Jonathan Corbet <corbet@....net>
Cc:     Richard Cochran <richardcochran@...il.com>,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        Kurt Kanzenbach <kurt@...utronix.de>
Subject: Re: [PATCH 1/3] timekeeping: Introduce fast accessor to clock tai

On Sat, Apr 09 2022 at 10:12, Kurt Kanzenbach wrote:
> Introduce a fast/NMI-safe accessor to clock tai for tracing. The Linux kernel
> tracing infrastructure has support for using different clocks to generate
> timestamps for trace events. Especially in TSN networks it's useful to have TAI
> as the trace clock, because the application scheduling is done in accordance
> with the network time, which is based on TAI. With a tai trace_clock in place,
> it becomes very convenient to correlate network activity with Linux kernel
> application traces.
>
> Use the same implementation as ktime_get_boot_fast_ns() does by reading the
> monotonic time and adding the TAI offset. The same limitations as for the fast
> boot implementation apply. The TAI offset may change at run time, e.g. by
> setting the time or by using adjtimex() with an offset. However, such offset
> changes are rare events. Nevertheless, the user has to be aware of them and
> deal with them in post-processing.
>
> An alternative approach would be to use the same implementation as
> ktime_get_real_fast_ns() does. However, this would require adding an
> additional u64 member to struct tk_read_base. This struct, together with a
> seqcount, is designed to fit into a single cache line on 64-bit architectures.
> Adding a new member would violate this constraint.
>
> Signed-off-by: Kurt Kanzenbach <kurt@...utronix.de>

Nice changelog!

Reviewed-by: Thomas Gleixner <tglx@...utronix.de>
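
For reference, a minimal sketch of what such an accessor could look like in
kernel/time/timekeeping.c, following the ktime_get_boot_fast_ns() pattern the
changelog describes. The internals used here (tk_core, offs_tai, data_race())
are the existing timekeeping helpers; this is an illustration of the approach,
not necessarily the exact code of the patch:

u64 notrace ktime_get_tai_fast_ns(void)
{
	struct timekeeper *tk = &tk_core.timekeeper;

	/*
	 * NMI safe: ktime_get_mono_fast_ns() reads the seqcount-latch based
	 * fast timekeeper; the current TAI offset is added on top.
	 */
	return ktime_get_mono_fast_ns() + ktime_to_ns(data_race(tk->offs_tai));
}
EXPORT_SYMBOL_GPL(ktime_get_tai_fast_ns);

The alternative mentioned in the changelog, caching a TAI base in struct
tk_read_base the way ktime_get_real_fast_ns() relies on base_real, would add a
u64 to a structure that is deliberately sized to share a cache line with its
seqcount, which is why the offset-based variant is used instead.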
