Date:   Thu, 16 Dec 2021 00:16:10 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Peter Oskolkov <posk@...gle.com>
Cc:     Peter Oskolkov <posk@...k.io>, Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>, juri.lelli@...hat.com,
        Vincent Guittot <vincent.guittot@...aro.org>,
        dietmar.eggemann@....com, Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, mgorman@...e.de,
        bristot@...hat.com,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        linux-api@...r.kernel.org, x86@...nel.org,
        Paul Turner <pjt@...gle.com>, Andrei Vagin <avagin@...gle.com>,
        Jann Horn <jannh@...gle.com>,
        Thierry Delisle <tdelisle@...terloo.ca>
Subject: Re: [RFC][PATCH 0/3] sched: User Managed Concurrency Groups

On Wed, Dec 15, 2021 at 01:04:33PM -0800, Peter Oskolkov wrote:
> On Wed, Dec 15, 2021 at 10:25 AM Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Wed, Dec 15, 2021 at 09:56:06AM -0800, Peter Oskolkov wrote:
> > > On Wed, Dec 15, 2021 at 2:06 AM Peter Zijlstra <peterz@...radead.org> wrote:
> > > >  /*
> > > > + * Enqueue tsk to its server's runnable list and wake the server for pickup
> > > > + * if so desired. Notably, LAZY workers will not wake the server and rely on
> > > > + * the server to do pickup whenever it naturally runs next.
> > >
> > > No, I never suggested we needed per-server runnable queues: in all my
> > > patchsets I had a single list of idle (runnable) workers.
> >
> > This is not about the idle servers..
> >
> > So without the LAZY thing on, a previously blocked task hitting sys_exit
> > will enqueue itself on the runnable list and wake the server for pickup.
> 
> How can a blocked task hit sys_exit()? Shouldn't it be RUNNING?

Task was RUNNING, hits schedule() after passing through sys_enter();
this marks it BLOCKED. The task wakes again and proceeds to sys_exit(),
at which point it's marked RUNNABLE and put on the runnable list, after
which it'll kick the server to process said list.
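
Roughly, the hook flow amounts to something like the below; a sketch
only, all helper names are made up here, only the states are the ones
discussed above:

	/* worker blocks inside a syscall: RUNNING -> BLOCKED */
	static void umcg_worker_blocked(struct task_struct *tsk)
	{
		umcg_set_state(tsk, UMCG_TASK_BLOCKED);
		/* its server is now free to go run something else */
	}

	/* worker wakes and passes through sys_exit: BLOCKED -> RUNNABLE */
	static void umcg_worker_sys_exit(struct task_struct *tsk)
	{
		umcg_set_state(tsk, UMCG_TASK_RUNNABLE);
		umcg_enqueue_runnable(tsk);	/* append to its server's runnable list */

		/* without LAZY: always kick the server for pickup */
		if (!umcg_worker_lazy(tsk) || umcg_server_idle(tsk))
			umcg_wake_server(tsk);
	}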

> Anyway, servers and workers are supposed to unregister before exiting,
> so if they call sys_exit() they break the agreement; in my patch I
> just clear all umcg-related state and proceed, without waking the
> server: the user broke the protocol, let them figure out what
> happened:

No violation of anything there. The time between returning from
schedule() and sys_exit() is unmanaged.

sys_exit() is the spot where we regain control.

> > IIRC you didn't like the server waking while it was still running
> > another task, but instead preferred to have it pick up the newly
> > enqueued task when next it ran.
> 
> Yes, this is the model I have, as I outlined in another email. I
> understand that having queues per-CPU/per-server is how it is done in
> the kernel, both for historical reasons (before multiprocessing there
> was a single queue/cpu) and for throughput (per-cpu runqueues are
> individually faster than a global one). However, this model is known
> to lag in presence of load spikes (long per-cpu queues with some CPUs
> idle), and is not really easy to work with given the use cases this
> whole userspace scheduling effort is trying to address:

Well, that's *your* use-case. I'm fairly sure there are more people who
want to use this thing.

> multiple
> priorities and work isolation: these are easy to address directly with
> a scheduler that has a global view rather than multiple
> per-cpu/per-server schedulers/queues that try to coordinate.

You can trivially create this, even if the underlying thing is
per-server. Simply have a lock and shared data structure between the
servers.

Even in the kernel, it should be mostly trivial to create a global
policy. The only tricky bit (in the kernel) is the whole affinity muck,
but userspace doesn't *need* to do even that.
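
E.g. something like the below; a sketch only, the layout is made up and
none of it is kernel ABI:

	/* one queue shared by all servers, protected by a plain mutex */
	struct global_queue {
		pthread_mutex_t		lock;
		struct umcg_task	*head, *tail;	/* FIFO of runnable workers */
	} global_queue;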

> > LAZY enables that.. *however* it does need to wake the server when it is
> > idle, otherwise they'll all sit there waiting for one another.
> 
> If all servers are busy running workers, then it is not up to the
> kernel to "preempt" them in my model: the userspace can set up another
> thread/task to preempt a misbehaving worker, which will wake the
> server attached to it. 

So the way I'm seeing things is that the server *is* the 'CPU'. A UP
machine cannot rely on another CPU to make preemption happen.

Also, preemption is very much not about misbehaviour. A wakeup can cause
a preemption event if the woken task is deemed higher priority than the
currently running one, for example.

And time-based preemption is definitely also a thing wrt resource
distribution.
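
For wakeup preemption the decision can be as simple as the below sketch;
struct uworker, ->prio and server_preempt() are all made up here:

	struct uworker {
		struct umcg_task	*ut;
		int			prio;	/* lower value = higher priority */
	};

	extern void server_preempt(void);	/* make the server re-pick a worker */

	static void on_worker_wakeup(struct uworker *cur, struct uworker *woken)
	{
		/* a higher-priority wakeup evicts the currently running worker */
		if (cur && woken->prio < cur->prio)
			server_preempt();
	}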

> But in practice there are always workers
> blocking in the kernel, which wakes their servers, which then reap the
> woken/runnable workers list, so well-behaving code does not need this.

This seems to discount pure computational workloads.

> And so we need to figure out this high-level thing first: do we go
> with the per-server worker queues/lists, or do we go with the approach
> I use in my patchset? It seems to me that the kernel-side code in my
> patchset is not more complicated than your patchset is shaping up to
> be, and some things are actually easier to accomplish, like having a
> single idle_server_ptr vs this LAZY and/or server "preemption"
> behavior that you have.
> 
> Again, I'm OK with having it your way if all needed features are
> covered, but I think we should be explicit about why
> per-server/per-cpu model is chosen vs the one I proposed, especially
> as it seems the kernel side code is not really simpler in the end.

So I went with a UP-first approach. I made single-server preemption-driven
scheduling work first (both tick and wakeup preemption are supported).

The whole LAZY thing is only meant to suppress some of that (notably
wakeup preemption), but you're right in that it's not really nice. I got
it working, but I'm not particularly happy with it either.

Having the sys_enter/sys_exit hooks also made the page pins short-lived,
and signals much simpler to handle. You're destroying signals IIUC.


So I see no fundamental reason why userspace cannot do something like:

	/* @self: this server's own struct umcg_task */
	struct umcg_task *current = NULL, *runnable_ptr, *next;
	int ret;

	for (;;) {
		self->state = UMCG_TASK_RUNNABLE | UMCG_TF_COND_WAIT;

		/* atomically claim the list of newly runnable workers */
		runnable_ptr = (void *)__atomic_exchange_n(&self->runnable_workers_ptr,
							   NULL, __ATOMIC_SEQ_CST);

		pthread_mutex_lock(&global_queue.lock);
		while (runnable_ptr) {
			next = (void *)runnable_ptr->runnable_workers_ptr;
			enqueue_task(&global_queue, runnable_ptr);
			runnable_ptr = next;
		}

		/* complicated bit about current already running goes here */

		current = pick_task(&global_queue);
		self->next_tid = current ? current->tid : 0;
unlock:		/* the elided bit above would jump here */
		pthread_mutex_unlock(&global_queue.lock);

		ret = sys_umcg_wait(0, 0);

		pthread_mutex_lock(&global_queue.lock);
		/* umcg_wait() didn't switch, make sure to return the task */
		if (self->next_tid) {
			enqueue_task(&global_queue, current);
			current = NULL;
		}
		pthread_mutex_unlock(&global_queue.lock);

		/* do something with @ret */
	}

to get global scheduling and all the contention^Wgoodness related to it.
Except, of course, it's more complicated, but I think the idea's clear
enough.
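
(For completeness, the enqueue_task()/pick_task() used above could be as
dumb as the below; the ->q_next link is a made-up userspace field, it is
not part of struct umcg_task's ABI:)

	static void enqueue_task(struct global_queue *q, struct umcg_task *t)
	{
		/* caller holds q->lock; append to the tail of the FIFO */
		t->q_next = NULL;
		if (q->tail)
			q->tail->q_next = t;
		else
			q->head = t;
		q->tail = t;
	}

	static struct umcg_task *pick_task(struct global_queue *q)
	{
		/* caller holds q->lock; pop the head of the FIFO */
		struct umcg_task *t = q->head;

		if (t) {
			q->head = t->q_next;
			if (!q->head)
				q->tail = NULL;
		}
		return t;
	}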

