Message-Id: <20170727143205.GU3730@linux.vnet.ibm.com>
Date:   Thu, 27 Jul 2017 07:32:05 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
        oleg@...hat.com, will.deacon@....com
Subject: Re: [PATCH tip/core/rcu 4/5] sys_membarrier: Add expedited option

On Thu, Jul 27, 2017 at 03:49:08PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 27, 2017 at 06:08:16AM -0700, Paul E. McKenney wrote:
> 
> > > No. It's called wakeup latency :-) Your SCHED_OTHER task will not get to
> > > insta-run all the time. If there are other tasks already running, we'll
> > > not IPI unless it should preempt.
> > > 
> > > If it's idle, nobody cares..
> > 
> > So it does IPI immediately sometimes.
> > 
> > > > Does this auto-throttling also apply if the user is running a CPU-bound
> > > > SCHED_BATCH or SCHED_IDLE task on each CPU, and periodically waking up
> > > > one of a large group of SCHED_OTHER tasks, where the SCHED_OTHER tasks
> > > > immediately sleep upon being awakened?
> > > 
> > > SCHED_BATCH is even more likely to suffer wakeup latency since it will
> > > never preempt anything.
> > 
> > Ahem.  In this scenario, SCHED_BATCH is already running on the CPU in
> > question, and a SCHED_OTHER task is awakened from some other CPU.
> > 
> > Do we IPI in that case?
> 
> So I'm a bit confused as to where you're trying to go with this.
> 
> I'm saying that if there are other users of our CPU, we can't
> significantly disturb them with IPIs.
> 
> Yes, we'll sometimes IPI in order to do a preemption on wakeup. But we
> cannot always win that preemption. There is no wakeup-triggered starvation case.
> 
> If you create a thread per CPU, have them sleep immediately after wakeup,
> and then keep prodding them awake, they will disturb things less than if
> they were while(1); loops.
> 
> If the machine is otherwise idle, nobody cares.
> 
> If there are other tasks on the system, the IPI rate is limited by the
> wakeup latency of your tasks.
> 
> 
> And any of this is limited to the CPUs we're allowed to run on in the
> first place.
> 
> 
> So yes, the occasional IPI happens, but if there are other tasks, we
> can't disturb them more than we could with while(1); tasks.
> 
> 
> OTOH, something like:
> 
> 	while(1)
> 		synchronize_sched_expedited();
> 
> as per your proposed patch, will spray IPIs to all CPUs at high rates.

OK, I have updated my patch to do throttling.

							Thanx, Paul

------------------------------------------------------------------------

commit 4cd5253094b6d7f9501e21e13aa4e2e78e8a70cd
Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Date:   Tue Jul 18 13:53:32 2017 -0700

    sys_membarrier: Add expedited option
    
    The sys_membarrier() system call has proven too slow for some use cases,
    which has prompted users to instead rely on TLB shootdown.  Although TLB
    shootdown is much faster, it has the slight disadvantage of not working
    at all on arm and arm64 and also of being vulnerable to reasonable
    optimizations that might skip some IPIs.  However, the Linux kernel
    does not currently provide a reasonable alternative, so it is hard to
    criticize these users for doing what works for them on a given piece
    of hardware at a given time.
    
    This commit therefore adds an expedited option to the sys_membarrier()
    system call, thus providing a faster mechanism that is portable and
    is not subject to death by optimization.  Note that if more than one
    MEMBARRIER_CMD_SHARED_EXPEDITED sys_membarrier() call happens within
    the same jiffy, all but the first will use synchronize_sched() instead
    of synchronize_sched_expedited().
    
    Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
    [ paulmck: Fix code style issue pointed out by Boqun Feng. ]
    Tested-by: Avi Kivity <avi@...lladb.com>
    Cc: Maged Michael <maged.michael@...il.com>
    Cc: Andrew Hunter <ahh@...gle.com>
    Cc: Geoffrey Romer <gromer@...gle.com>

diff --git a/include/uapi/linux/membarrier.h b/include/uapi/linux/membarrier.h
index e0b108bd2624..5720386d0904 100644
--- a/include/uapi/linux/membarrier.h
+++ b/include/uapi/linux/membarrier.h
@@ -40,6 +40,16 @@
  *                          (non-running threads are de facto in such a
  *                          state). This covers threads from all processes
  *                          running on the system. This command returns 0.
+ * @MEMBARRIER_CMD_SHARED_EXPEDITED:  Execute a memory barrier on all
+ *                          running threads, but in an expedited fashion.
+ *                          Upon return from system call, the caller thread
+ *                          is ensured that all running threads have passed
+ *                          through a state where all memory accesses to
+ *                          user-space addresses match program order between
+ *                          entry to and return from the system call
+ *                          (non-running threads are de facto in such a
+ *                          state). This covers threads from all processes
+ *                          running on the system. This command returns 0.
  *
  * Command to be passed to the membarrier system call. The commands need to
  * be a single bit each, except for MEMBARRIER_CMD_QUERY which is assigned to
@@ -48,6 +58,7 @@
 enum membarrier_cmd {
 	MEMBARRIER_CMD_QUERY = 0,
 	MEMBARRIER_CMD_SHARED = (1 << 0),
+	MEMBARRIER_CMD_SHARED_EXPEDITED = (1 << 1),
 };
 
 #endif /* _UAPI_LINUX_MEMBARRIER_H */
diff --git a/kernel/membarrier.c b/kernel/membarrier.c
index 9f9284f37f8d..587e3bbfae7e 100644
--- a/kernel/membarrier.c
+++ b/kernel/membarrier.c
@@ -22,7 +22,8 @@
  * Bitmask made from a "or" of all commands within enum membarrier_cmd,
  * except MEMBARRIER_CMD_QUERY.
  */
-#define MEMBARRIER_CMD_BITMASK	(MEMBARRIER_CMD_SHARED)
+#define MEMBARRIER_CMD_BITMASK	(MEMBARRIER_CMD_SHARED |		\
+				 MEMBARRIER_CMD_SHARED_EXPEDITED)
 
 /**
  * sys_membarrier - issue memory barriers on a set of threads
@@ -64,6 +65,20 @@ SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
 		if (num_online_cpus() > 1)
 			synchronize_sched();
 		return 0;
+	case MEMBARRIER_CMD_SHARED_EXPEDITED:
+		if (num_online_cpus() > 1) {
+			static unsigned long lastexp;
+			unsigned long j;
+
+			j = jiffies;
+			if (READ_ONCE(lastexp) == j) {
+				synchronize_sched();
+			} else {
+				WRITE_ONCE(lastexp, j);
+				synchronize_sched_expedited();
+			}
+		}
+		return 0;
 	default:
 		return -EINVAL;
 	}
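
For reference, here is a minimal user-space sketch of how a caller might
invoke the new command, assuming the updated uapi header is installed and
that glibc provides no wrapper (so we go through syscall(2) directly; the
membarrier() helper below is our own, not a library function):

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical wrapper: invoke the membarrier system call directly. */
static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/* MEMBARRIER_CMD_QUERY returns a bitmask of supported commands. */
	int ret = membarrier(MEMBARRIER_CMD_QUERY, 0);

	if (ret < 0 || !(ret & MEMBARRIER_CMD_SHARED_EXPEDITED)) {
		fprintf(stderr, "expedited membarrier unsupported\n");
		exit(EXIT_FAILURE);
	}

	/* Expedited barrier on all running threads, throttled in-kernel. */
	if (membarrier(MEMBARRIER_CMD_SHARED_EXPEDITED, 0))
		exit(EXIT_FAILURE);
	return 0;
}

Per the throttling above, only the first such call in a given jiffy uses
synchronize_sched_expedited(); subsequent calls within the same jiffy fall
back to synchronize_sched().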
