Message-ID: <Yerl+ZrZ2qflIMyg@FVFF77S0Q05N>
Date:   Fri, 21 Jan 2022 16:57:29 +0000
From:   Mark Rutland <mark.rutland@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, tglx@...utronix.de, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-api@...r.kernel.org, x86@...nel.org,
        pjt@...gle.com, posk@...gle.com, avagin@...gle.com,
        jannh@...gle.com, tdelisle@...terloo.ca, posk@...k.io
Subject: Re: [RFC][PATCH v2 5/5] sched: User Mode Concurency Groups

On Thu, Jan 20, 2022 at 04:55:22PM +0100, Peter Zijlstra wrote:
> User Managed Concurrency Groups is an M:N threading toolkit that allows
> constructing user space schedulers designed to efficiently manage
> heterogeneous in-process workloads while maintaining high CPU
> utilization (95%+).
> 
> XXX moar changelog explaining how this is moar awesome than
> traditional user-space threading.

Until there's a commit message I can parse, I'm just looking at the entry bits
for now. TBH I have no idea what this is actually trying to do...

[...]

> --- a/include/linux/entry-common.h
> +++ b/include/linux/entry-common.h
> @@ -23,6 +23,10 @@
>  # define _TIF_UPROBE			(0)
>  #endif
>  
> +#ifndef _TIF_UMCG
> +# define _TIF_UMCG			(0)
> +#endif
> +
>  /*
>   * SYSCALL_WORK flags handled in syscall_enter_from_user_mode()
>   */
> @@ -43,11 +47,13 @@
>  				 SYSCALL_WORK_SYSCALL_EMU |		\
>  				 SYSCALL_WORK_SYSCALL_AUDIT |		\
>  				 SYSCALL_WORK_SYSCALL_USER_DISPATCH |	\
> +				 SYSCALL_WORK_SYSCALL_UMCG |		\
>  				 ARCH_SYSCALL_WORK_ENTER)
>  #define SYSCALL_WORK_EXIT	(SYSCALL_WORK_SYSCALL_TRACEPOINT |	\
>  				 SYSCALL_WORK_SYSCALL_TRACE |		\
>  				 SYSCALL_WORK_SYSCALL_AUDIT |		\
>  				 SYSCALL_WORK_SYSCALL_USER_DISPATCH |	\
> +				 SYSCALL_WORK_SYSCALL_UMCG |		\
>  				 SYSCALL_WORK_SYSCALL_EXIT_TRAP	|	\
>  				 ARCH_SYSCALL_WORK_EXIT)
>  
> @@ -221,8 +227,11 @@ static inline void local_irq_disable_exi
>   */
>  static inline void irqentry_irq_enable(struct pt_regs *regs)
>  {
> -	if (!regs_irqs_disabled(regs))
> +	if (!regs_irqs_disabled(regs)) {
>  		local_irq_enable();
> +		if (user_mode(regs) && (current->flags & PF_UMCG_WORKER))
> +			umcg_sys_enter(regs, -1);
> +	}
>  }

Perhaps it would make sense to have separate umcg_sys_enter(regs) and
umcg_sys_enter_syscall(regs, syscallno)? Even if the former is just a wrapper,
to make the entry/exit bits clearly correspond for all the !syscall cases?

Also, is the syscall case meant to nest within this, or are syscall entry paths
not supposed to call irqentry_irq_enable()?

>  
>  /**
> @@ -232,8 +241,11 @@ static inline void irqentry_irq_enable(s
>   */
>  static inline void irqentry_irq_disable(struct pt_regs *regs)
>  {
> -	if (!regs_irqs_disabled(regs))
> +	if (!regs_irqs_disabled(regs)) {
> +		if (user_mode(regs) && (current->flags & PF_UMCG_WORKER))
> +			umcg_sys_exit(regs);
>  		local_irq_disable();
> +	}
>  }

Do the umcg_sys_{enter,exit}() calls need to happen with IRQs unmasked?

* If not (and this nests): for arm64 these can live in our
  enter_from_user_mode() and exit_to_user_mode() helpers.

* If so (or this doesn't nest): for arm64 we'd need to rework our
  local_daif_{inherit,restore,mask}() calls to handle this, though I've been
  meaning to do that anyway to handle pseudo-NMI better.

Either way, it looks like we'd need helpers along the lines of:

| static __always_inline void umcg_enter_from_user(struct pt_regs *regs)
| {
| 	if (current->flags & PF_UMCG_WORKER)
| 		umcg_sys_enter(regs, -1);
| }
| 
| static __always_inline void umcg_exit_to_user(struct pt_regs *regs)
| {
| 	if (current->flags & PF_UMCG_WORKER)
| 		umcg_sys_exit(regs);
| }

Thanks,
Mark.
