Date:	Mon, 29 Jul 2013 18:12:40 -0700
From:	Jed Davis <jld@...illa.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Paul Mackerras <paulus@...ba.org>,
	Ingo Molnar <mingo@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org
Cc:	Jed Davis <jld@...illa.com>
Subject: [PATCH 1/2] perf: Fix handling of arch_perf_out_copy_user return value.

All architectures except x86 use __copy_from_user_inatomic to provide
arch_perf_out_copy_user; like the other copy_from routines, it returns
the number of bytes not copied.  perf was expecting the number of bytes
that had been copied.  This change corrects that, and thereby allows
PERF_SAMPLE_STACK_USER to be enabled on non-x86 architectures.

x86 uses copy_from_user_nmi, which deviates from the other copy_from
routines by returning the number of bytes copied.  (This cancels out
the effect of perf being backwards; apparently this code has only ever
been tested on x86.)  This change therefore adds a second wrapper to
re-reverse it for perf; the next patch in this series will clean it up.

Signed-off-by: Jed Davis <jld@...illa.com>
---
 arch/x86/include/asm/perf_event.h |  9 ++++++++-
 kernel/events/internal.h          | 11 ++++++++++-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8249df4..ddae5bd 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -274,6 +274,13 @@ static inline void perf_check_microcode(void) { }
  static inline void amd_pmu_disable_virt(void) { }
 #endif
 
-#define arch_perf_out_copy_user copy_from_user_nmi
+static inline unsigned long copy_from_user_nmi_for_perf(void *to,
+							const void __user *from,
+							unsigned long n)
+{
+	return n - copy_from_user_nmi(to, from, n);
+}
+
+#define arch_perf_out_copy_user copy_from_user_nmi_for_perf
 
 #endif /* _ASM_X86_PERF_EVENT_H */
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index ca65997..e61b22c 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -81,6 +81,7 @@ static inline unsigned long perf_data_size(struct ring_buffer *rb)
 	return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
 }
 
+/* The memcpy_func must return the number of bytes successfully copied. */
 #define DEFINE_OUTPUT_COPY(func_name, memcpy_func)			\
 static inline unsigned int						\
 func_name(struct perf_output_handle *handle,				\
@@ -122,11 +123,19 @@ DEFINE_OUTPUT_COPY(__output_copy, memcpy_common)
 
 DEFINE_OUTPUT_COPY(__output_skip, MEMCPY_SKIP)
 
+/* arch_perf_out_copy_user must return the number of bytes not copied. */
 #ifndef arch_perf_out_copy_user
 #define arch_perf_out_copy_user __copy_from_user_inatomic
 #endif
 
-DEFINE_OUTPUT_COPY(__output_copy_user, arch_perf_out_copy_user)
+static inline unsigned long perf_memcpy_from_user(void *to,
+						  const void __user *from,
+						  unsigned long n)
+{
+	return n - arch_perf_out_copy_user(to, from, n);
+}
+
+DEFINE_OUTPUT_COPY(__output_copy_user, perf_memcpy_from_user)
 
 /* Callchain handling */
 extern struct perf_callchain_entry *
-- 
1.8.3.2
