Message-ID: <46a7b8a5.iyNdV9o+A3Vzog2D%dougthompson@xmission.com>
Date:	Wed, 25 Jul 2007 14:55:01 -0600
From:	dougthompson@...ssion.com
To:	greg@...ah.com, ralf@...ux-mips.org, egor@...emi.com,
	dougthompson@...ssion.com, alan@...rguk.ukuu.org.uk,
	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: [PATCH 3/4] include asm-mips add missing edac h file

From:	Doug Thompson <dougthompson@...ssion.com>

EDAC has a foundation for performing software memory scrubbing, but it
requires a per-architecture function, atomic_scrub(), to perform the
atomic update operation.  On x86, this is done with a

lock; addl $0, (addr)

in the file asm-x86/edac.h

This patch provides the MIPS arch with that atomic function, atomic_scrub(), in

asm-mips/edac.h
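For illustration only (this is not part of the patch): the same add-zero
scrub idea can be sketched in portable userspace C with C11 atomics. The
name scrub_range() is hypothetical; the kernel code uses raw ll/sc on
MIPS and a lock-prefixed addl on x86 rather than <stdatomic.h>.

```c
/*
 * Hedged userspace sketch of the scrub idea, assuming C11 atomics.
 * scrub_range() is a hypothetical name, not the kernel's atomic_scrub().
 */
#include <stdatomic.h>
#include <stddef.h>

static void scrub_range(void *va, size_t size)
{
	/* Treat the buffer as an array of atomic machine words. */
	_Atomic unsigned long *p = va;
	size_t i;

	for (i = 0; i < size / sizeof(unsigned long); i++, p++)
		/*
		 * Atomically adding 0 forces a read-modify-write of the
		 * word: the value is unchanged, but the write causes the
		 * memory controller to re-encode the ECC check bits.
		 */
		atomic_fetch_add(p, 0);
}
```

The atomicity matters because a plain read-then-write could race with an
interrupt, another CPU, or a DMA engine between the load and the store.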

Cc:             Alan Cox <alan@...rguk.ukuu.org.uk>
Cc:		Ralf Baechle <ralf@...ux-mips.org>
Signed-off-by:	Doug Thompson <dougthompson@...ssion.com>
---
 edac.h |   34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

Index: linux-2.6.23-rc1/include/asm-mips/edac.h
===================================================================
--- /dev/null
+++ linux-2.6.23-rc1/include/asm-mips/edac.h
@@ -0,0 +1,34 @@
+#ifndef ASM_EDAC_H
+#define ASM_EDAC_H
+
+/* ECC atomic, DMA, SMP and interrupt safe scrub function */
+
+static __inline__ void atomic_scrub(void *va, u32 size)
+{
+	unsigned long *virt_addr = va;
+	unsigned long temp;
+	u32 i;
+
+	for (i = 0; i < size / sizeof(unsigned long); i++, virt_addr++) {
+
+		/*
+		 * Very carefully read and write to memory atomically
+		 * so we are interrupt, DMA and SMP safe.
+		 *
+		 * Intel: asm("lock; addl $0, %0"::"m"(*virt_addr));
+		 */
+
+		__asm__ __volatile__ (
+		"       .set    mips3                                   \n"
+		"1:     ll      %0, %1          # atomic_scrub          \n"
+		"       addu    %0, $0                                  \n"
+		"       sc      %0, %1                                  \n"
+		"       beqz    %0, 1b                                  \n"
+		"       .set    mips0                                   \n"
+		: "=&r" (temp), "=m" (*virt_addr)
+		: "m" (*virt_addr));
+
+	}
+}
+
+#endif
+
+#endif