Message-ID: <Ybnnqd6I+CCGSSJa@hirez.programming.kicks-ass.net>
Date:   Wed, 15 Dec 2021 14:03:37 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Peter Oskolkov <posk@...k.io>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>, juri.lelli@...hat.com,
        Vincent Guittot <vincent.guittot@...aro.org>,
        dietmar.eggemann@....com, Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, mgorman@...e.de,
        bristot@...hat.com,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        linux-api@...r.kernel.org, x86@...nel.org,
        Paul Turner <pjt@...gle.com>, Peter Oskolkov <posk@...gle.com>,
        Andrei Vagin <avagin@...gle.com>, Jann Horn <jannh@...gle.com>,
        Thierry Delisle <tdelisle@...terloo.ca>
Subject: Re: [RFC][PATCH 0/3] sched: User Managed Concurrency Groups

On Wed, Dec 15, 2021 at 11:06:20AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 14, 2021 at 07:46:25PM -0800, Peter Oskolkov wrote:
> > On Tue, Dec 14, 2021 at 12:55 PM Peter Zijlstra <peterz@...radead.org> wrote:
> > >
> > > Hi,
> > >
> > > This is actually tested code, but it is still missing the SMP
> > > wake-to-idle machinery. I still need to think about that.
> > 
> > Thanks, Peter!
> > 
> > At first glance, your main patch does not look much smaller than
> > mine, and I thought the whole point of re-doing it was to throw away
> > extra features and make things smaller/simpler...
> 
> Well, simpler was the goal. I didn't really focus on size much. It isn't
> really big to begin with.
> 
> But yes, it has 5 hooks now, 3 syscalls, and lots of comments, all of
> that in under 900 lines; not bad, I'd say.
> 
> Also, I think you wanted something like this? I'm not sure about the
> LAZY name, but I can't seem to come up with anything saner atm.
> 
> 
> ---
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1297,6 +1297,7 @@ struct task_struct {
>  
>  #ifdef CONFIG_UMCG
>  	/* setup by sys_umcg_ctrl() */
> +	u32			umcg_flags;
>  	clockid_t		umcg_clock;
>  	struct umcg_task __user	*umcg_task;
>  
> --- a/include/uapi/linux/umcg.h
> +++ b/include/uapi/linux/umcg.h
> @@ -133,11 +133,13 @@ struct umcg_task {
>   * @UMCG_CTL_REGISTER:   register the current task as a UMCG task
>   * @UMCG_CTL_UNREGISTER: unregister the current task as a UMCG task
>   * @UMCG_CTL_WORKER:     register the current task as a UMCG worker
> + * @UMCG_CTL_LAZY:	 don't wake server on runnable enqueue
>   */
>  enum umcg_ctl_flag {
>  	UMCG_CTL_REGISTER	= 0x00001,
>  	UMCG_CTL_UNREGISTER	= 0x00002,
>  	UMCG_CTL_WORKER		= 0x10000,
> +	UMCG_CTL_LAZY		= 0x20000,
>  };
>  
>  #endif /* _UAPI_LINUX_UMCG_H */
> --- a/kernel/sched/umcg.c
> +++ b/kernel/sched/umcg.c
> @@ -416,6 +416,27 @@ static int umcg_enqueue_runnable(struct
>  }
>  
>  /*
> + * Enqueue tsk to its server's runnable list and wake the server for
> + * pickup if so desired. Notably, LAZY workers will not wake the server,
> + * relying instead on the server to do pickup whenever it naturally runs
> + * next.
> + *
> + * Returns:
> + * 0:	success
> + * -EFAULT
> + */
> +static int umcg_enqueue_and_wake(struct task_struct *tsk, bool force)
> +{
> +	int ret = umcg_enqueue_runnable(tsk);
> +	if (ret)
> +		return ret;
> +
> +	if (force || !(tsk->umcg_flags & UMCG_CTL_LAZY))
> +		ret = umcg_wake_server(tsk);
> +
> +	return ret;
> +}
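
For reference, a minimal userspace sketch of what LAZY buys: the server
drains its runnable list on its own schedule instead of taking a wakeup
per enqueue. This assumes the single-linked LIFO list that
umcg_enqueue_runnable() above pushes through runnable_workers_ptr; the
struct layout is reduced to just that field, and run_worker() is a
hypothetical application hook, so treat this as an illustration of the
protocol rather than the final ABI.

#include <stdint.h>
#include <stdatomic.h>

/* Stand-in for the proposed struct umcg_task; only the link is shown. */
struct umcg_task {
	_Atomic(uint64_t) runnable_workers_ptr;	/* next worker, or 0 */
	/* ... state, tids, timestamps per the proposed uapi ... */
};

extern void run_worker(struct umcg_task *w);	/* application policy */

/* The server's own registered umcg_task; the kernel pushes newly
 * runnable workers onto its runnable_workers_ptr list. */
static struct umcg_task server;

static void drain_runnable_list(void)
{
	/* Atomically detach the whole list, then walk it. */
	uint64_t head = atomic_exchange_explicit(
			&server.runnable_workers_ptr, 0,
			memory_order_acquire);

	while (head) {
		struct umcg_task *w = (struct umcg_task *)(uintptr_t)head;

		head = w->runnable_workers_ptr;	/* next link */
		run_worker(w);
	}
}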

Aah, this has a problem when the server is otherwise idle. I think we
need that TF_IDLE thing for this too. Let me go write a test-case for
all this.
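
To spell out the lost wakeup (my reading of the above; TF_IDLE here is
presumably the flag marking a server that is blocked with nothing to
run):

	server (LAZY)			worker
	-------------			------
	drains runnable list: empty
	blocks with nothing to do
					blocks, later becomes runnable
					umcg_enqueue_and_wake():
					  enqueued on runnable list
					  LAZY set -> no umcg_wake_server()
	... sleeps forever; the enqueue is never noticed.

Having the enqueue path check the server's idle state and force the wake
in just that case would keep LAZY cheap while avoiding the hang.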
