Message-ID: <4F9717E6.8030506@amacapital.net>
Date:	Tue, 24 Apr 2012 14:15:18 -0700
From:	Andy Lutomirski <luto@...capital.net>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC:	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Juri Lelli <juri.lelli@...il.com>
Subject: Re: [RFC][PATCH 0/3] gcc work-around and math128

On 04/24/2012 09:10 AM, Peter Zijlstra wrote:
> Hi all,
> 
> The SCHED_DEADLINE review resulted in the following three patches:
> 
> The first is a cleanup of various copies of the same GCC loop optimization
> work-around. I don't think this patch is too controversial; at worst I've
> picked a wrong name, but I wanted to get it out there in case people
> know of more such sites.
> 
> The other two implement a few u128 operations so we can do 128-bit math. I
> know a few people will die a little inside, but nanosecond-granularity time
> accounting leads to very big numbers very quickly, and when you need to
> multiply them, 64 bits really isn't that much.

I played with some of this stuff a while ago, and for timekeeping, it
seemed like a 64x32->96 bit multiply followed by a right shift was
enough, and that operation is a lot faster on 32-bit architectures than
a full 64x64->128 multiply.  Something like:

/* Relies on GCC's __uint128_t, which is typically only available on
   64-bit targets. */
uint64_t mul_64_32_shift(uint64_t a, uint32_t mult, uint32_t shift)
{
  return (uint64_t)( ((__uint128_t)a * (__uint128_t)mult) >> shift );
}

or (untested, but compiles with 32-bit gcc)

uint64_t mul_64_32_shift(uint64_t a, uint32_t mult, uint32_t shift)
{
  /* Split a into 32-bit halves so no intermediate product needs more
     than 64 bits.  Assumes shift <= 32; otherwise the left shift below
     is undefined. */
  uint64_t part1 = ((a & 0xFFFFFFFFULL) * mult) >> shift;
  uint64_t part2 = ((a >> 32) * mult) << (32 - shift);
  return part1 + part2;
}
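
For example (also untested, and the mult/shift values below are made up
purely for illustration, not taken from any real clocksource), the
32-bit-friendly version could be exercised standalone like this:

#include <stdint.h>
#include <stdio.h>

/* same helper as above, repeated so this compiles on its own */
static uint64_t mul_64_32_shift(uint64_t a, uint32_t mult, uint32_t shift)
{
  uint64_t part1 = ((a & 0xFFFFFFFFULL) * mult) >> shift;
  uint64_t part2 = ((a >> 32) * mult) << (32 - shift);
  return part1 + part2;
}

int main(void)
{
  /* hypothetical scaling parameters: ns = (cycles * mult) >> shift;
     mult = 2^22 with shift = 22 just gives ns == cycles as a sanity check */
  uint64_t cycles = 123456789ULL;
  uint32_t mult = 1u << 22;
  uint32_t shift = 22;

  printf("ns = %llu\n",
         (unsigned long long)mul_64_32_shift(cycles, mult, shift));
  return 0;
}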

--Andy
