Date:	Sat, 20 Jun 2015 18:35:58 +1000
From:	Stephen Rothwell <sfr@...b.auug.org.au>
To:	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Rusty Russell <rusty@...tcorp.com.au>
Cc:	linux-next@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: linux-next: manual merge of the tip tree with the modules tree

Hi all,

Today's linux-next merge of the tip tree got a conflict in:

  include/linux/seqlock.h

between commit:

  7fc26327b756 ("seqlock: Introduce raw_read_seqcount_latch()")

from the modules tree and commit:

  c4bfa3f5f906 ("seqcount: Introduce raw_write_seqcount_barrier()")

from the tip tree.

I fixed it up (see below) and can carry the fix as necessary (no action
is required).

-- 
Cheers,
Stephen Rothwell                    sfr@...b.auug.org.au

diff --cc include/linux/seqlock.h
index 890c7ef709d5,486e685a226a..000000000000
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@@ -234,87 -233,50 +234,128 @@@ static inline void raw_write_seqcount_e
  	s->sequence++;
  }
  
+ /**
+  * raw_write_seqcount_barrier - do a seq write barrier
+  * @s: pointer to seqcount_t
+  *
+  * This can be used to provide an ordering guarantee instead of the
+  * usual consistency guarantee. It is one wmb cheaper, because we can
+  * collapse the two back-to-back wmb()s.
+  *
+  *      seqcount_t seq;
+  *      bool X = true, Y = false;
+  *
+  *      void read(void)
+  *      {
+  *              bool x, y;
+  *
+  *              do {
+  *                      int s = read_seqcount_begin(&seq);
+  *
+  *                      x = X; y = Y;
+  *
+  *              } while (read_seqcount_retry(&seq, s));
+  *
+  *              BUG_ON(!x && !y);
+  *      }
+  *
+  *      void write(void)
+  *      {
+  *              Y = true;
+  *
+  *              raw_write_seqcount_barrier(&seq);
+  *
+  *              X = false;
+  *      }
+  */
+ static inline void raw_write_seqcount_barrier(seqcount_t *s)
+ {
+ 	s->sequence++;
+ 	smp_wmb();
+ 	s->sequence++;
+ }
+ 
 -/*
 +static inline int raw_read_seqcount_latch(seqcount_t *s)
 +{
 +	return lockless_dereference(s->sequence);
 +}
 +
 +/**
   * raw_write_seqcount_latch - redirect readers to even/odd copy
   * @s: pointer to seqcount_t
 + *
 + * The latch technique is a multiversion concurrency control method that allows
 + * queries during non-atomic modifications. If you can guarantee queries never
 + * interrupt the modification -- e.g. the concurrency is strictly between CPUs
 + * -- you most likely do not need this.
 + *
 + * Where the traditional RCU/lockless data structures rely on atomic
 + * modifications to ensure queries observe either the old or the new state, the
 + * latch allows the same for non-atomic updates. The trade-off is doubling the
 + * cost of storage; we have to maintain two copies of the entire data
 + * structure.
 + *
 + * Very simply put: we first modify one copy and then the other. This ensures
 + * there is always one copy in a stable state, ready to give us an answer.
 + *
 + * The basic form is a data structure like:
 + *
 + * struct latch_struct {
 + *	seqcount_t		seq;
 + *	struct data_struct	data[2];
 + * };
 + *
 + * Where a modification, which is assumed to be externally serialized, does the
 + * following:
 + *
 + * void latch_modify(struct latch_struct *latch, ...)
 + * {
 + *	smp_wmb();	<- Ensure that the last data[1] update is visible
 + *	latch->seq++;
 + *	smp_wmb();	<- Ensure that the seqcount update is visible
 + *
 + *	modify(latch->data[0], ...);
 + *
 + *	smp_wmb();	<- Ensure that the data[0] update is visible
 + *	latch->seq++;
 + *	smp_wmb();	<- Ensure that the seqcount update is visible
 + *
 + *	modify(latch->data[1], ...);
 + * }
 + *
 + * The query will have a form like:
 + *
 + * struct entry *latch_query(struct latch_struct *latch, ...)
 + * {
 + *	struct entry *entry;
 + *	unsigned seq, idx;
 + *
 + *	do {
 + *		seq = lockless_dereference(latch->seq);
 + *
 + *		idx = seq & 0x01;
 + *		entry = data_query(latch->data[idx], ...);
 + *
 + *		smp_rmb();
 + *	} while (seq != latch->seq);
 + *
 + *	return entry;
 + * }
 + *
 + * So during the modification, queries are first redirected to data[1]. Then we
 + * modify data[0]. When that is complete, we redirect queries back to data[0]
 + * and we can modify data[1].
 + *
 + * NOTE: The non-requirement for atomic modifications does _NOT_ include
 + *       the publishing of new entries in the case where data is a dynamic
 + *       data structure.
 + *
 + *       An iteration might start in data[0] and get suspended long enough
 + *       to miss an entire modification sequence; once it resumes, it might
 + *       observe the new entry.
 + *
 + * NOTE: When data is a dynamic data structure, one should use regular RCU
 + *       patterns to manage the lifetimes of the objects within.
   */
  static inline void raw_write_seqcount_latch(seqcount_t *s)
  {

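For anyone who wants to poke at the first hunk outside the kernel, here is a
minimal user-space sketch of the raw_write_seqcount_barrier() example from
the comment above. C11 atomics stand in for seqcount_t, and release/acquire
fences stand in for smp_wmb()/smp_rmb(); the thread setup and names are
illustrative assumptions, not part of the patch. Builds with:
cc -std=c11 -pthread sketch.c

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint seq;
static atomic_bool X = true, Y = false;

static void *reader(void *arg)
{
	bool x, y;
	unsigned s;

	(void)arg;
	do {
		/* read_seqcount_begin(): spin while the count is odd */
		do {
			s = atomic_load_explicit(&seq, memory_order_acquire);
		} while (s & 1);

		x = atomic_load_explicit(&X, memory_order_relaxed);
		y = atomic_load_explicit(&Y, memory_order_relaxed);

		/* read_seqcount_retry(): smp_rmb(), then re-check the count */
		atomic_thread_fence(memory_order_acquire);
	} while (s != atomic_load_explicit(&seq, memory_order_relaxed));

	/* The ordering guarantee from the comment: never x == y == false. */
	if (!x && !y)
		fprintf(stderr, "BUG: observed X == false && Y == false\n");
	return NULL;
}

static void *writer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&Y, true, memory_order_relaxed);

	/* raw_write_seqcount_barrier(): sequence++; smp_wmb(); sequence++; */
	atomic_fetch_add_explicit(&seq, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);
	atomic_fetch_add_explicit(&seq, 1, memory_order_relaxed);

	atomic_store_explicit(&X, false, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return 0;
}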

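And for the latch hunk, a matching user-space sketch of the
latch_modify()/latch_query() pattern documented above. Relaxed atomics
replace the kernel's plain data accesses so the sketch is race-free under
the C11 memory model, release fences stand in for smp_wmb(), and an acquire
load stands in for lockless_dereference(); the single-long payload and the
names are illustrative assumptions, not from the patch.

#include <stdatomic.h>

struct latch_struct {
	atomic_uint seq;	/* stand-in for seqcount_t */
	atomic_long data[2];	/* two copies of the (one-word) payload */
};

/* Modification, assumed externally serialized: one writer at a time. */
static void latch_modify(struct latch_struct *latch, long val)
{
	unsigned seq = atomic_load_explicit(&latch->seq, memory_order_relaxed);

	/* smp_wmb(); latch->seq++; smp_wmb(); -- send readers to data[1] */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&latch->seq, ++seq, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);

	atomic_store_explicit(&latch->data[0], val, memory_order_relaxed);

	/* smp_wmb(); latch->seq++; smp_wmb(); -- send readers back to data[0] */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&latch->seq, ++seq, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);

	atomic_store_explicit(&latch->data[1], val, memory_order_relaxed);
}

/* Query: retry until one copy was read while its sequence was stable. */
static long latch_query(struct latch_struct *latch)
{
	long entry;
	unsigned seq, idx;

	do {
		seq = atomic_load_explicit(&latch->seq, memory_order_acquire);
		idx = seq & 0x01;
		entry = atomic_load_explicit(&latch->data[idx],
					     memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);	/* smp_rmb() */
	} while (seq != atomic_load_explicit(&latch->seq,
					     memory_order_relaxed));

	return entry;
}

As the NOTEs in the comment say, this only covers a fixed-size payload;
dynamically allocated entries would still need RCU-style lifetime
management on top.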