Message-ID: <20151006113244.GE3798@hzzhang-OptiPlex-9020.sh.intel.com>
Date: Tue, 6 Oct 2015 19:32:44 +0800
From: Haozhong Zhang <haozhong.zhang@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>,
David Matlack <dmatlack@...gle.com>, kvm@...r.kernel.org,
Gleb Natapov <gleb@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Joerg Roedel <joro@...tes.org>,
Wanpeng Li <wanpeng.li@...ux.intel.com>,
Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
Mihai Donțu <mdontu@...defender.com>,
Andy Lutomirski <luto@...nel.org>,
Kai Huang <kai.huang@...ux.intel.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 04/12] KVM: x86: Replace call-back set_tsc_khz() with a
common function
On Tue, Oct 06, 2015 at 12:40:49PM +0200, Paolo Bonzini wrote:
>
>
> On 06/10/2015 06:06, Haozhong Zhang wrote:
> > Alternatively, it's also possible to follow David's comment to use
> > divq on x86_64 to keep both precision and safety. On i386, it just
> > falls back to above truncating approach.
>
> khz is just 32 bits, so we can do a 96/32 division. And because this is
> a slow path, we can code a generic u64*u32/u32 function and use it to do
> (1 << kvm_tsc_scaling_ratio_frac_bits) * khz / tsc_khz:
>
This is much better! Thanks Paolo! I'll use this mul_u64_u32_div() in
the next version.
> diff --git a/include/linux/math64.h b/include/linux/math64.h
> index c45c089bfdac..5b70af4fa386 100644
> --- a/include/linux/math64.h
> +++ b/include/linux/math64.h
> @@ -142,6 +142,13 @@ static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
> }
> #endif /* mul_u64_u32_shr */
>
> +#ifndef mul_u64_u32_div
> +static inline u64 mul_u64_u32_div(u64 x, u32 num, u32 den)
> +{
> + return (u64)(((unsigned __int128)x * num) / den);
> +}
> +#endif
> +
> #else
>
> #ifndef mul_u64_u32_shr
> @@ -161,6 +168,35 @@ static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
> }
> #endif /* mul_u64_u32_shr */
>
> +#ifndef mul_u64_u32_div
> +static inline u64 mul_u64_u32_div(u64 a, u32 num, u32 den)
> +{
> + union {
> + u64 ll;
> + struct {
> +#ifdef __BIG_ENDIAN
> + u32 high, low;
> +#else
> + u32 low, high;
> +#endif
> + } l;
> + } u, rl, rh;
> +
> + u.ll = a;
> + rl.ll = (u64)u.l.low * num;
> + rh.ll = (u64)u.l.high * num + rl.l.high;
> +
> + /* Bits 32-63 of the result will be in rh.l.low. */
> + rl.l.high = do_div(rh.ll, den);
> +
> + /* Bits 0-31 of the result will be in rl.l.low. */
> + do_div(rl.ll, den);
> +
> + rl.l.high = rh.l.low;
> + return rl.ll;
> +}
> +#endif
> +
> #endif
>
> #endif /* _LINUX_MATH64_H */