Message-Id: <20190603091008.24776-1-anders.roxell@linaro.org>
Date:   Mon,  3 Jun 2019 11:10:08 +0200
From:   Anders Roxell <anders.roxell@...aro.org>
To:     peterz@...radead.org, mingo@...hat.com
Cc:     will.deacon@....com, tglx@...utronix.de,
        linux-kernel@...r.kernel.org,
        Anders Roxell <anders.roxell@...aro.org>
Subject: [PATCH] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

If CONFIG_FUNCTION_GRAPH_TRACER is enabled, the function sched_clock() in
kernel/time/sched_clock.c is marked notrace. However, the functions
raw_read_seqcount and read_seqcount_retry that it calls are only marked
inline. If CONFIG_OPTIMIZE_INLINING is set, the compiler may leave them
out of line, which makes the two functions traceable when they shouldn't
be.

Rework so that raw_read_seqcount and read_seqcount_retry are marked
__always_inline, ensuring they are inlined even when
CONFIG_OPTIMIZE_INLINING is turned on.
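
For illustration only (not part of this patch): a minimal userspace
sketch of the difference, assuming made-up names read_seq_hint(),
read_seq_forced() and a toy struct seqcount. The attribute shown is the
one the kernel's __always_inline macro is built around; a plain "inline"
is only a hint, so an out-of-line copy can remain for a tracer to hook.

/*
 * Userspace sketch, not the kernel code. When the compiler is allowed
 * to ignore the "inline" hint (roughly what CONFIG_OPTIMIZE_INLINING
 * permits), read_seq_hint() may be emitted as a real function that
 * instrumentation can attach to; read_seq_forced() cannot be.
 */
#include <stdio.h>

struct seqcount { unsigned sequence; };

/* "inline" is only a hint; may still become a real, traceable function */
static inline unsigned read_seq_hint(const struct seqcount *s)
{
	return s->sequence;
}

/* forced inlining: no out-of-line copy is left for a tracer to hook */
static inline __attribute__((__always_inline__))
unsigned read_seq_forced(const struct seqcount *s)
{
	return s->sequence;
}

int main(void)
{
	struct seqcount sc = { .sequence = 42 };

	printf("%u %u\n", read_seq_hint(&sc), read_seq_forced(&sc));
	return 0;
}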

Signed-off-by: Anders Roxell <anders.roxell@...aro.org>
---
 include/linux/seqlock.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1b18e3df186e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -127,7 +127,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+static __always_inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -215,7 +215,8 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+static
+__always_inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
 	return __read_seqcount_retry(s, start);
-- 
2.20.1
