Message-ID: <20220124094846.GN20638@worktop.programming.kicks-ass.net>
Date:   Mon, 24 Jan 2022 10:48:46 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Mark Rutland <mark.rutland@....com>
Cc:     mingo@...hat.com, tglx@...utronix.de, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-api@...r.kernel.org, x86@...nel.org,
        pjt@...gle.com, posk@...gle.com, avagin@...gle.com,
        jannh@...gle.com, tdelisle@...terloo.ca, posk@...k.io
Subject: Re: [RFC][PATCH v2 5/5] sched: User Mode Concurency Groups

On Fri, Jan 21, 2022 at 04:57:29PM +0000, Mark Rutland wrote:
> On Thu, Jan 20, 2022 at 04:55:22PM +0100, Peter Zijlstra wrote:
> > User Managed Concurrency Groups is an M:N threading toolkit that allows
> > constructing user space schedulers designed to efficiently manage
> > heterogeneous in-process workloads while maintaining high CPU
> > utilization (95%+).
> > 
> > XXX moar changelog explaining how this is moar awesome than
> > traditional user-space threading.
> 
> Awaiting a commit message that I can parse, I'm just looking at the entry bits
> for now. TBH I have no idea what this is actually trying to do...

Ha! yes.. I knew I was going to have to do that eventually :-)

It's basically a user-space scheduler that is subservient to the kernel
scheduler (hierarchical scheduling, where a user task acts as a server
for other user tasks), where a server thread is in charge of selecting
which of its worker threads gets to run. The original idea was that
each server only ever runs a single worker, but PeterO is currently
reconsidering that.
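
In userspace that boils down to a loop along the lines of the below.
This is only a sketch; the helper names (umcg_wait_for_event(),
pick_next_worker(), umcg_run_worker(), ...) are made up and not the
ABI proposed by these patches:

	/* Illustrative server loop; helper names are made up. */
	for (;;) {
		struct worker *w;

		/* Sleep until one of our workers blocks or becomes runnable. */
		umcg_wait_for_event(server);

		/* Pull newly runnable workers onto our own ready-queue. */
		collect_runnable_workers(server);

		/* Userspace scheduling policy picks the next worker. */
		w = pick_next_worker(server);
		if (w)
			umcg_run_worker(server, w);
	}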

The *big* feature here, over traditional M:N scheduling, is that
threads can block, whereas traditional userspace threading is limited
to non-blocking system calls (and, per the below, page-faults).

In order to make that happen we must obviously hook schedule() for
these worker threads and inform userspace (the server thread) when this
happens, such that it can select another worker thread to go vroom.
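
In kernel pseudo-code the blocking side is roughly the below; again
only a sketch, the helpers are illustrative and not the actual
functions from the patch:

	/* Called from schedule() when a UMCG worker is about to block. */
	static void umcg_worker_blocked(struct task_struct *tsk)
	{
		/* Publish the state change through the pinned userspace page. */
		set_worker_state(tsk, UMCG_WORKER_BLOCKED);

		/* Kick the server so it can pick another worker to run. */
		wake_up_server(tsk);
	}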

Meanwhile, a worker task getting woken from schedule() must not
continue running; instead it must enter the server's ready-queue and
await its turn again. Instead of dealing with arbitrary delays deep
inside the kernel block chain, we punt and let the task run until
return-to-user and block it there. The time between schedule() and
return-to-user is unmanaged time.
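
The worker side of that is, sketchily (illustrative helpers again):

	static void umcg_worker_return_to_user(struct task_struct *tsk)
	{
		set_worker_state(tsk, UMCG_WORKER_RUNNABLE);
		enqueue_on_server_runnable_list(tsk);
		wake_up_server(tsk);

		/* Unmanaged time ends here; wait for the server to run us. */
		wait_for_state(tsk, UMCG_WORKER_RUNNING);
	}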

Now, since we can't readily poke at userspace memory from schedule()
(we could be holding mmap_sem etc.), we pin the worker and server pages
on sys-enter such that when we hit schedule() we can update state and
then unpin the pages. That way the page-pin window runs from sys-enter
to the first schedule(), or to sys-exit, whichever comes first. This
ensures the page-pin is *short* term.
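
Per syscall that is roughly the below; simplified, error paths and the
exact GUP flavour hand-waved, and the task_struct fields are made up
for illustration:

	/* sys-enter: pin the worker's and server's UMCG state pages. */
	static int umcg_pin_pages(struct task_struct *tsk)
	{
		if (pin_user_pages_fast(tsk->umcg_worker_uaddr, 1, FOLL_WRITE,
					&tsk->umcg_worker_page) != 1)
			return -EFAULT;

		if (pin_user_pages_fast(tsk->umcg_server_uaddr, 1, FOLL_WRITE,
					&tsk->umcg_server_page) != 1) {
			unpin_user_page(tsk->umcg_worker_page);
			return -EFAULT;
		}

		return 0;
	}

	/* First schedule() or sys-exit, whichever comes first. */
	static void umcg_unpin_pages(struct task_struct *tsk)
	{
		unpin_user_page(tsk->umcg_worker_page);
		unpin_user_page(tsk->umcg_server_page);
	}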

Additionally we must deal with signals :-(. The current approach is to
let them bust boundaries and run as unmanaged time. UMCG userspace can
obviously control this by using pthread_sigmask() and friends.
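
E.g. a runtime that doesn't want asynchronous signals landing on its
workers at random points can mask them per worker and have a dedicated
thread handle them; plain pthread stuff, nothing UMCG specific:

	#include <pthread.h>
	#include <signal.h>

	/* Run in each worker thread before it starts doing real work. */
	static void worker_block_async_signals(void)
	{
		sigset_t set;

		sigfillset(&set);
		/* Synchronous faults can't usefully be masked anyway. */
		sigdelset(&set, SIGSEGV);
		sigdelset(&set, SIGBUS);
		pthread_sigmask(SIG_BLOCK, &set, NULL);
	}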

Now, the reason for irqentry_irq_enable() is mostly #PF.  When a worker
faults and blocks we want the same things to happen.

Anyway, so workers have 3 layers of hooks:

		sys_enter
				schedule()
		sys_exit

	return-to-user

There's a bunch of paths through this:

 - sys_enter -> sys_exit:

	no blocking; nothing changes:
	  - sys_enter:
	    * pin pages

	  - sys_exit:
	    * unpin pages

 - sys_enter -> schedule() -> sys_exit:

	we did block:
	  - sys_enter:
	    * pin pages

	  - schedule():
	    * mark worker BLOCKED
	    * wake server (it will observe its current worker !RUNNING
	      and select a new worker or go idle)
	    * unpin pages

	  - sys_exit():
	    * mark worker RUNNABLE
	    * enqueue worker on server's runnable_list
	    * wake server (which will observe a new runnable task, add
	      it to whatever and, if it was idle, go run; otherwise go
	      back to sleep to let its current worker finish)
	    * block until RUNNING

 - sys_enter -> schedule() -> sys_exit -> return_to_user:

	As above, except now we got a signal while !RUNNING. sys_exit()
	terminates, return-to-user takes over running the signal, and on
	return from the signal we'll again block until RUNNING, or do
	the whole signal dance again if required.
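
Summarizing the above, the worker state machine is small; roughly
(names illustrative):

	enum umcg_worker_state {
		UMCG_WORKER_RUNNING,	/* selected by its server, on a CPU */
		UMCG_WORKER_BLOCKED,	/* blocked in schedule(), unmanaged time */
		UMCG_WORKER_RUNNABLE,	/* on the server's runnable_list */
	};

	/*
	 * RUNNING  -> BLOCKED   schedule()       (wake server)
	 * BLOCKED  -> RUNNABLE  sys_exit()       (enqueue, wake server)
	 * RUNNABLE -> RUNNING   server picks us  (worker stops blocking)
	 */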


Does this clarify things a little?
