Date:	Sat, 02 Aug 2008 16:55:18 -0500
From:	Roger Heflin <rogerheflin@...il.com>
To:	linasvepstas@...il.com
CC:	Alistair John Strachan <alistair@...zero.co.uk>,
	linux-kernel@...r.kernel.org
Subject: Re: amd64 sata_nv (massive) memory corruption

Linas Vepstas wrote:
> 2008/8/1 Alistair John Strachan <alistair@...zero.co.uk>:
>> On Friday 01 August 2008 18:30:34 Linas Vepstas wrote:
>>> Hi,
>>>
>>> I'm seeing strong, easily reproducible (and silent) corruption on a
>>> SATA-attached disk drive on an amd64 board.  It might be the disk
>>> itself, but I doubt it; googling suggests that it's somehow
>>> iommu-related, but I cannot confirm this.
>> Nowhere do you explicitly say you have memtest86'ed the RAM.
> 
> It passes memtest86+ just fine. The system has been in heavy use doing
> big science calculations on big datasets (multi-gigabyte) for months;
> these do not get corrupted when copied or moved around on the old
> parallel IDE disk, nor when moved or copied over an NFS mount to a file
> server. Only the SATA disk is misbehaving.

That MB uses DDR2, so I don't know whether this is useful or not, but I saw
the issue on MBs using DDR.

I have seen issues when using all 4 DIMM slots on a number of MBs that only
appear to show up on DMA when using fast dual-core CPUs: if the CPU is
slower, things work just fine, and if you don't use the network or disk
heavily, things are also fine.  And these machines would pass memtest
without any issues.
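
A quick way to catch that kind of failure, since memtest won't, is to
checksum data through the disk DMA path itself.  Below is a minimal sketch
of such a test in Python, not anyone's actual procedure; the scratch path,
the file size, and the use of /proc/sys/vm/drop_caches (root only) are all
assumptions:

#!/usr/bin/env python
# Hypothetical sketch: generate heavy disk DMA traffic and look for silent
# corruption that memtest misses.  Assumes Linux, root (to drop the page
# cache so the read-back really hits the disk), and a scratch path on the
# suspect SATA drive.  Path and sizes are made up for illustration.
import hashlib, os, sys

PATH  = sys.argv[1] if len(sys.argv) > 1 else "/mnt/sata/stress.bin"
CHUNK = 1 << 20          # write in 1 MiB chunks
SIZE  = 4 << 30          # 4 GiB file; ideally larger than RAM

def one_pass(seed):
    # Write a deterministic pattern, checksumming it as we go.
    h_write = hashlib.md5()
    with open(PATH, "wb") as f:
        block = hashlib.md5(("seed-%d" % seed).encode("ascii")).digest()
        block = block * (CHUNK // len(block))        # expand to 1 MiB
        for _ in range(SIZE // CHUNK):
            f.write(block)
            h_write.update(block)
        f.flush()
        os.fsync(f.fileno())
    # Drop the page cache so the read goes back out to the disk.
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")
    # Read the file back and checksum again; any mismatch is corruption.
    h_read = hashlib.md5()
    with open(PATH, "rb") as f:
        buf = f.read(CHUNK)
        while buf:
            h_read.update(buf)
            buf = f.read(CHUNK)
    return h_write.hexdigest(), h_read.hexdigest()

for i in range(10):
    wrote, read = one_pass(i)
    print("pass %d: %s" % (i, "OK" if wrote == read else "CORRUPT"))
    if wrote != read:
        sys.exit(1)

Run it against the SATA disk while the box is otherwise busy; a single
CORRUPT pass is enough to confirm the problem.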

You might try slowing the CPU down to its slowest speed and see if you can
still duplicate it; if you cannot, bring the speed up a step and retest.  If
it only happens at the highest speed, it might be something similar.  In our
case the solution was to have the MB maker add a BIOS option to slow down
the RAM: with DDR we had 4 double-sided DIMMs (8 loads on the CPU), and
AMD's documentation said DDR memory with 6 or more loads needed to run at
333 rather than 400.  As I said, I don't know whether the same applies to
DDR2.  Note that a slower dual-core CPU did not push things hard enough to
show the error either; I believe we had the issues with 280s/285s but not
with 275s and lower (these were dual-socket boards with 4 DIMMs per CPU,
8 loads per CPU).
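
If you want to script that walk-up, here is a rough sketch; it assumes
cpufreq sysfs support (e.g. powernow-k8 on these Opterons) and root, and
disk_stress.py is just a made-up name for whatever reproduces your
corruption, such as the checksum test above:

#!/usr/bin/env python
# Hypothetical sketch of the slow-down-and-retest loop: cap every core at
# the lowest cpufreq step, run the corruption test, then walk the speed up
# one step at a time until the corruption reappears.  Assumes cpufreq
# sysfs support and root; "disk_stress.py" is a placeholder name.
import glob, subprocess, sys

def available_freqs():
    # Supported frequency steps in kHz, slowest first.
    with open("/sys/devices/system/cpu/cpu0/cpufreq/"
              "scaling_available_frequencies") as f:
        return sorted(int(k) for k in f.read().split())

def cap_all_cpus(khz):
    # Clamp every core's maximum frequency; the governor stays at or
    # below this cap.
    for path in glob.glob(
            "/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(khz))

def stress_test():
    # Placeholder: returns True if the run came back clean.
    return subprocess.call(["python", "disk_stress.py"]) == 0

for khz in available_freqs():
    cap_all_cpus(khz)
    clean = stress_test()
    print("%d MHz: %s" % (khz // 1000, "clean" if clean else "CORRUPT"))
    if not clean:
        sys.exit("first corrupting step: %d MHz" % (khz // 1000))
print("clean at every step -- probably not CPU-speed related")

If it only corrupts at the top step or two, that matches what we saw here.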

                                  Roger
