Message-Id: <1181054562.6180.77.camel@localhost.localdomain>
Date:	Tue, 05 Jun 2007 15:42:42 +0100
From:	Richard Purdie <richard@...nedhand.com>
To:	LKML <linux-kernel@...r.kernel.org>
Subject: Problems (a bug?) with UINT_MAX from kernel.h

The kernel uses UINT_MAX, as defined in kernel.h, in a variety of places.

While looking at the behaviour of the LZO code, I noticed it seemed to
think an int was 8 bytes wide on my 32-bit i386 machine. It isn't, so
why did it think that?

kernel.h says:

#define INT_MAX		((int)(~0U>>1))
#define INT_MIN		(-INT_MAX - 1)
#define UINT_MAX	(~0U)
#define LONG_MAX	((long)(~0UL>>1))
#define LONG_MIN	(-LONG_MAX - 1)
#define ULONG_MAX	(~0UL)
#define LLONG_MAX	((long long)(~0ULL>>1))
#define LLONG_MIN	(-LLONG_MAX - 1)
#define ULLONG_MAX	(~0ULL)

If I try to compile the code fragment below, the #error triggers:

#define UINT_MAX	(~0U)
#if (0xffffffffffffffff == UINT_MAX)
  #error argh
#endif
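
For contrast, the same definition behaves fine outside the
preprocessor. A quick standalone test (my own; KERNEL_UINT_MAX is just
a local name so it doesn't clash with limits.h) prints the expected
32-bit values on i386:

#include <stdio.h>

#define KERNEL_UINT_MAX	(~0U)	/* same definition as kernel.h */

int main(void)
{
	/* in ordinary C code ~0U has type unsigned int */
	printf("sizeof(~0U) = %u\n", (unsigned)sizeof(~0U));
	printf("UINT_MAX    = %#x\n", KERNEL_UINT_MAX);
	return 0;
}

That prints 4 and 0xffffffff here, so the problem is specific to #if
evaluation.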

I've tested this on several systems with a variety of gcc versions,
always with the same result, and various other ways of testing it all
lead to the same conclusion: UINT_MAX does not evaluate as expected in
preprocessor conditionals.

The *LONG* definitions above should work in ordinary C code, since the
suffix and cast force a particular type (in an #if they would fail
outright, because casts aren't valid in preprocessor expressions).
Where just ~0U is used, the #if comparison doesn't behave as intended:
the preprocessor evaluates #if arithmetic in its widest integer types
(intmax_t/uintmax_t in C99), so ~0U comes out as a 64-bit all-ones
value, effectively a long long.

If I change the above to:

/* Handle GCC <= 3.2 */
#if !defined(__INT_MAX__)
#define INT_MAX		0x7fffffff
#else
#define INT_MAX		(__INT_MAX__)
#endif
#define INT_MIN		(-INT_MAX - 1)
/* use unsigned arithmetic: INT_MAX << 1 overflows a signed int in C code */
#define UINT_MAX	((INT_MAX * 2U) + 1U)

I get the expected result of an int being 4 bytes wide. Is there a
better solution? It's probably better than what's there now, but it
could break a machine running a gcc without __INT_MAX__ whose int
isn't 4 bytes...

(gcc <= 3.2 doesn't define __INT_MAX__)
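
With the replacement definitions, a sanity check along these lines (my
own test, assuming a 4-byte int target) compiles cleanly, and fails as
expected if I put the old ~0U definition back:

#if UINT_MAX != 0xffffffff
#error UINT_MAX is not the expected 32-bit value
#endif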

Richard


