Date:	Wed, 01 Apr 2009 19:13:38 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Ingo Molnar <mingo@...e.hu>
CC:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Tejun Heo <htejun@...il.com>,
	linux kernel <linux-kernel@...r.kernel.org>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Joe Perches <joe@...ches.com>,
	Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Ingo Molnar wrote:
> * Eric Dumazet <dada1@...mosbay.com> wrote:
> 
>> Jeremy Fitzhardinge wrote:
>>> Eric Dumazet wrote:
>>>> While playing with the new percpu_{read|write|add|sub} stuff in the network tree,
>>>> I found the x86 asm was a little bit optimistic.
>>>>
>>>> We need to tell gcc that percpu_{write|add|sub|or|xor} are modifying
>>>> memory and possibly eflags. We could add another parameter to
>>>> percpu_to_op() to separate the plain "mov" case (which does not change
>>>> eflags), but let's keep it simple for the moment.
>>>>   
>>> Did you observe an actual failure that this patch fixed?
>>>
>> Not in the current tree, as we don't use percpu_xxxx() very much yet.
>>
>> If it is deployed for SNMP MIBs with hundreds of call sites,
>> can you guarantee it will work as is?
> 
> Do we "guarantee" it for you? No.
> 
> Is it expected to work just fine? Yes.
> 
> Are there any known bugs in this area? No.

Good to know. So I shut up. I am a jerk and should blindly trust
the Linux kernel, sorry.
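
To make my original concern concrete anyway, here is a simplified,
illustrative sketch (4-byte case only, hypothetical helper name, not the
actual patch) of the kind of clobber annotation I have in mind:

/* illustrative only: tell gcc this asm may modify memory and eflags */
#define percpu_add_4(var, val)				\
	asm("addl %1," __percpu_arg(0)			\
	    : "+m" (var)				\
	    : "ri" (val)				\
	    : "memory", "cc")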

> 
> Will we fix it if it's demonstrated to be broken? Of course! :-)
> 
> [ Btw., it's definitely cool that you will make heavy use of it for
>   SNMP MIB statistics - please share with us your experiences with
>   the facilities - good or bad experiences alike! ]

I tried, but I am missing some kind of indirect percpu_add() function.

Because of net namespaces, MIBs are dynamically allocated, and the
current percpu_add() works on static percpu variables only (because of
the added per_cpu__ prefix):

#define percpu_add(var, val)   percpu_to_op("add", per_cpu__##var, val)

I tried adding:

#define dyn_percpu_add(var, val)   percpu_to_op("add", var, val)

But I don't know if this is the plan.
Should we get rid of the "per_cpu__" prefix and use a special ELF
section/marker instead?
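
For illustration, the kind of call site I have in mind (names are
invented, and I am hand-waving over how the dynamically allocated
per-cpu pointer would be addressed):

struct netns_mib {
	unsigned long outdatagrams;
};

/* one instance per net namespace, allocated with alloc_percpu() */
static void count_out_datagram(struct netns_mib *mib)
{
	/* only works if dyn_percpu_add() accepts a plain lvalue,
	 * without the per_cpu__ prefix being pasted on
	 */
	dyn_percpu_add(mib->outdatagrams, 1);
}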

I have a patch to add percpu_inc() and percpu_dec(), but I am not
sure it's worth it...

[PATCH] percpu: Adds percpu_inc() and percpu_dec()

Increments and decrements are quite common operations on SNMP MIBs.

Signed-off-by: Eric Dumazet <dada1@...mosbay.com>

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index aee103b..248be11 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -103,6 +103,29 @@ do {							\
 	}						\
 } while (0)
 
+#define percpu_to_op0(op, var)				\
+do {							\
+	switch (sizeof(var)) {				\
+	case 1:						\
+		asm(op "b "__percpu_arg(0)		\
+		    : "+m" (var));			\
+		break;					\
+	case 2:						\
+		asm(op "w "__percpu_arg(0)		\
+		    : "+m" (var));			\
+		break;					\
+	case 4:						\
+		asm(op "l "__percpu_arg(0)		\
+		    : "+m" (var));			\
+		break;					\
+	case 8:						\
+		asm(op "q "__percpu_arg(0)		\
+		    : "+m" (var));			\
+		break;					\
+	default: __bad_percpu_size();			\
+	}						\
+} while (0)
+
 #define percpu_from_op(op, var)				\
 ({							\
 	typeof(var) ret__;				\
@@ -139,6 +162,8 @@ do {							\
 #define percpu_and(var, val)	percpu_to_op("and", per_cpu__##var, val)
 #define percpu_or(var, val)	percpu_to_op("or", per_cpu__##var, val)
 #define percpu_xor(var, val)	percpu_to_op("xor", per_cpu__##var, val)
+#define percpu_inc(var)		percpu_to_op0("inc", per_cpu__##var)
+#define percpu_dec(var)		percpu_to_op0("dec", per_cpu__##var)
 
 /* This is not atomic against other CPUs -- CPU preemption needs to be off */
 #define x86_test_and_clear_bit_percpu(bit, var)				\
diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
index 00f45ff..c57357e 100644
--- a/include/asm-generic/percpu.h
+++ b/include/asm-generic/percpu.h
@@ -120,6 +120,14 @@ do {									\
 # define percpu_sub(var, val)		__percpu_generic_to_op(var, (val), -=)
 #endif
 
+#ifndef percpu_inc
+# define percpu_inc(var)			do { percpu_add(var, 1); } while (0)
+#endif
+
+#ifndef percpu_dec
+# define percpu_dec(var)			do { percpu_sub(var, 1); } while (0)
+#endif
+
 #ifndef percpu_and
 # define percpu_and(var, val)		__percpu_generic_to_op(var, (val), &=)
 #endif
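
A minimal usage sketch (counter name invented), assuming the patch above
is applied:

#include <linux/percpu.h>

DEFINE_PER_CPU(unsigned long, sample_events);

static void note_event(void)
{
	/* single inc of this CPU's copy; not atomic against other CPUs */
	percpu_inc(sample_events);
}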

