Message-ID: <f6913c3669e156372c3d8e94946f2ec0dfc97020.camel@redhat.com>
Date:   Mon, 24 May 2021 20:49:13 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Ilias Stamatis <ilstam@...zon.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, pbonzini@...hat.com
Cc:     seanjc@...gle.com, vkuznets@...hat.com, wanpengli@...cent.com,
        jmattson@...gle.com, joro@...tes.org, zamsden@...il.com,
        mtosatti@...hat.com, dwmw@...zon.co.uk
Subject: Re: [PATCH v3 01/12] math64.h: Add mul_s64_u64_shr()

On Fri, 2021-05-21 at 11:24 +0100, Ilias Stamatis wrote:
> This function is needed for KVM's nested virtualization. The nested TSC
> scaling implementation requires multiplying the signed TSC offset by
> the unsigned TSC multiplier.
> 
> Signed-off-by: Ilias Stamatis <ilstam@...zon.com>
> ---
>  include/linux/math64.h | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/include/linux/math64.h b/include/linux/math64.h
> index 66deb1fdc2ef..2928f03d6d46 100644
> --- a/include/linux/math64.h
> +++ b/include/linux/math64.h
> @@ -3,6 +3,7 @@
>  #define _LINUX_MATH64_H
>  
>  #include <linux/types.h>
> +#include <linux/math.h>
>  #include <vdso/math64.h>
>  #include <asm/div64.h>
>  
> @@ -234,6 +235,24 @@ static inline u64 mul_u64_u64_shr(u64 a, u64 b, unsigned int shift)
>  
>  #endif
>  
> +#ifndef mul_s64_u64_shr
> +static inline u64 mul_s64_u64_shr(s64 a, u64 b, unsigned int shift)
> +{
> +	u64 ret;
> +
> +	/*
> +	 * Extract the sign before the multiplication and put it back
> +	 * afterwards if needed.
> +	 */
> +	ret = mul_u64_u64_shr(abs(a), b, shift);
> +
> +	if (a < 0)
> +		ret = -((s64) ret);
> +
> +	return ret;
> +}
> +#endif /* mul_s64_u64_shr */
> +
>  #ifndef mul_u64_u32_div
>  static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
>  {
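
For reference, here is a minimal userspace sketch of the same sign-extraction
technique, checked against a 128-bit widening multiply. The 48-bit fractional
shift, the sample values, and the __int128 stand-in for mul_u64_u64_shr() are
illustrative assumptions only, not taken from this series:

	#include <stdint.h>
	#include <stdio.h>

	static uint64_t mul_u64_u64_shr(uint64_t a, uint64_t b, unsigned int shift)
	{
		/* stand-in for the kernel's 128-bit widening helper in math64.h */
		return (uint64_t)(((unsigned __int128)a * b) >> shift);
	}

	static uint64_t mul_s64_u64_shr(int64_t a, uint64_t b, unsigned int shift)
	{
		/* extract the sign, multiply the magnitudes, reapply the sign */
		uint64_t ret = mul_u64_u64_shr(a < 0 ? -(uint64_t)a : (uint64_t)a,
					       b, shift);

		if (a < 0)
			ret = -(int64_t)ret;
		return ret;
	}

	int main(void)
	{
		unsigned int frac_bits = 48;              /* hypothetical fixed-point format */
		uint64_t ratio = 3ULL << (frac_bits - 1); /* 1.5 with 48 fractional bits */
		int64_t offset = -1000000;                /* a negative TSC offset */

		/* scaling -1000000 by 1.5 prints -1500000 */
		printf("%lld\n",
		       (long long)(int64_t)mul_s64_u64_shr(offset, ratio, frac_bits));
		return 0;
	}
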

Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>

Best regards,
	Maxim Levitsky
