Message-ID: <1304703767.3066.211.camel@edumazet-laptop>
Date: Fri, 06 May 2011 19:42:47 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
john stultz <johnstul@...ibm.com>,
lkml <linux-kernel@...r.kernel.org>,
Paul Mackerras <paulus@...ba.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Anton Blanchard <anton@...ba.org>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC] time: xtime_lock is held too long
On Friday, 06 May 2011 at 18:59 +0200, Andi Kleen wrote:
> If you have a better way to make it faster please share it.
Ideally we could use RCU :)
Hold whatever state we need in one structure (possibly big, it doesn't
matter) and have the writer switch a single pointer once everything is
set up in the new structure.
struct time_keep_blob {
	struct timespec xtime;
	struct timespec wall_to_monotonic;
	...
};

struct time_keep_blob __rcu *xtime_cur;
ktime_t ktime_get(void)
{
	const struct time_keep_blob *xp;
	s64 secs, nsecs;

	rcu_read_lock();
	xp = rcu_dereference(xtime_cur);
	secs = xp->xtime.tv_sec + xp->wall_to_monotonic.tv_sec;
	nsecs = xp->xtime.tv_nsec + xp->wall_to_monotonic.tv_nsec;
	nsecs += timekeeping_get_ns(xp);
	rcu_read_unlock();

	return ktime_add_ns(ktime_set(secs, 0), nsecs);
}
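
The writer side would then build the new state on a private copy and
publish it with a single pointer switch. A rough sketch with a static
double buffer; xtime_update() and delta_ns are made-up names here, and
the real update path (tick code under xtime_lock) is more involved:

static struct time_keep_blob xtime_blob[2];

/* Caller serializes updates (e.g. holds xtime_lock). */
static void xtime_update(u64 delta_ns)
{
	struct time_keep_blob *old = rcu_dereference_protected(xtime_cur, 1);
	struct time_keep_blob *new = (old == &xtime_blob[0]) ? &xtime_blob[1]
							     : &xtime_blob[0];

	*new = *old;				/* start from the current state */
	timespec_add_ns(&new->xtime, delta_ns);	/* modify the private copy only */
	rcu_assign_pointer(xtime_cur, new);	/* readers see old or new, never a mix */
}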
I don't know the timekeeping details; maybe it's necessary to loop if
xtime_cur changes:
ktime_t ktime_get(void)
{
	const struct time_keep_blob *xp;
	s64 secs, nsecs;

	rcu_read_lock();
	do {
		xp = rcu_dereference(xtime_cur);
		secs = xp->xtime.tv_sec + xp->wall_to_monotonic.tv_sec;
		nsecs = xp->xtime.tv_nsec + xp->wall_to_monotonic.tv_nsec;
		nsecs += timekeeping_get_ns(xp);
	} while (rcu_dereference(xtime_cur) != xp); /* writer switched: retry */
	rcu_read_unlock();

	return ktime_add_ns(ktime_set(secs, 0), nsecs);
}
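
Note the final check acts like a seqlock retry: if the writer published
a new blob while we were reading, we redo the computation. With only two
static blobs there would still be an ABA risk (two quick updates could
rewrite and republish the blob a reader is using), so old blobs should
only be reused after a grace period, e.g. dynamically allocated and
freed with call_rcu().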