Shorten the seqcount write hold region to the actual update of the
timekeeper and the related data (e.g. vsyscall).

On a contemporary x86 system this reduces the maximum latencies on
Preempt-RT from 8us to 4us on the non-timekeeping cores.

Signed-off-by: Thomas Gleixner
---
 kernel/time/timekeeping.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

Index: linux-3.6/kernel/time/timekeeping.c
===================================================================
--- linux-3.6.orig/kernel/time/timekeeping.c
+++ linux-3.6/kernel/time/timekeeping.c
@@ -1190,7 +1190,6 @@ static void update_wall_time(void)
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
-	write_seqcount_begin(&timekeeper_seq);
 
 	/* Make sure we're fully resumed: */
 	if (unlikely(timekeeping_suspended))
@@ -1242,6 +1241,7 @@ static void update_wall_time(void)
 	 */
 	accumulate_nsecs_to_secs(tk);
 
+	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
 	clock->cycle_last = tk->cycle_last;
 	/*
@@ -1256,9 +1256,8 @@ static void update_wall_time(void)
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
 	timekeeping_update(real_tk, false, false);
-
-out:
 	write_seqcount_end(&timekeeper_seq);
+out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
 }
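
For reference, below is a minimal, self-contained userspace sketch of the
seqcount pattern this patch is optimizing; it is an illustration only, not
kernel code, and uses plain C11 atomics instead of the kernel's seqcount
API. The names seq, data_sec and data_nsec are illustrative stand-ins for
timekeeper_seq and the timekeeper data. It shows why every instruction the
writer executes between write_seqcount_begin() and write_seqcount_end()
translates directly into retry/spin time for concurrent readers such as
ktime_get():

/*
 * Sketch of a seqcount: the writer makes the sequence odd, updates the
 * data, then makes it even again; readers loop until they observe the
 * same even sequence value before and after taking their snapshot.
 * A single writer is assumed (the kernel's writer is serialized by
 * timekeeper_lock).
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_uint  seq;		/* plays the role of timekeeper_seq   */
static atomic_ulong data_sec;		/* stand-ins for the timekeeper data  */
static atomic_ulong data_nsec;

static void writer_update(unsigned long sec, unsigned long nsec)
{
	unsigned int s = atomic_load_explicit(&seq, memory_order_relaxed);

	/* Like write_seqcount_begin(): sequence becomes odd. */
	atomic_store_explicit(&seq, s + 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);

	/* Only the actual data update belongs in the write-held region. */
	atomic_store_explicit(&data_sec,  sec,  memory_order_relaxed);
	atomic_store_explicit(&data_nsec, nsec, memory_order_relaxed);

	/* Like write_seqcount_end(): sequence becomes even again. */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store_explicit(&seq, s + 2, memory_order_relaxed);
}

static void reader_snapshot(unsigned long *sec, unsigned long *nsec)
{
	unsigned int start, end;

	do {
		/* Like read_seqcount_begin()/read_seqcount_retry(). */
		start = atomic_load_explicit(&seq, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);

		*sec  = atomic_load_explicit(&data_sec,  memory_order_relaxed);
		*nsec = atomic_load_explicit(&data_nsec, memory_order_relaxed);

		atomic_thread_fence(memory_order_seq_cst);
		end = atomic_load_explicit(&seq, memory_order_relaxed);
		/* Retry if a write was in flight or completed meanwhile. */
	} while ((start & 1) || start != end);
}

int main(void)
{
	unsigned long sec, nsec;

	writer_update(100, 500000000);
	reader_snapshot(&sec, &nsec);
	printf("snapshot: %lu.%09lu\n", sec, nsec);
	return 0;
}

With the write-held region reduced to the data update itself, a reader that
races with update_wall_time() only has to wait out the final copy of the
timekeeper data rather than the whole accumulation work, which is where the
latency reduction quoted above comes from.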