Date:	Mon, 24 Nov 2014 14:43:02 -0800
From:	Andi Kleen <ak@...ux.intel.com>
To:	Khalid Aziz <khalid.aziz@...cle.com>
Cc:	tglx@...utronix.de, corbet@....net, mingo@...hat.com,
	hpa@...or.com, peterz@...radead.org, riel@...hat.com,
	akpm@...ux-foundation.org, rientjes@...gle.com, mgorman@...e.de,
	liwanp@...ux.vnet.ibm.com, raistlin@...ux.it,
	kirill.shutemov@...ux.intel.com, atomlin@...hat.com,
	avagin@...nvz.org, gorcunov@...nvz.org, serge.hallyn@...onical.com,
	athorlton@....com, oleg@...hat.com, vdavydov@...allels.com,
	daeseok.youn@...il.com, keescook@...omium.org,
	yangds.fnst@...fujitsu.com, sbauer@....utah.edu,
	vishnu.ps@...sung.com, axboe@...com, paulmck@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
	linux-api@...r.kernel.org
Subject: Re: [PATCH v3] sched/fair: Add advisory flag for borrowing a
 timeslice (was: Pre-emption control for userspace)

> +1. Location of shared flag can be set using prctl() only once. To
> +   write a new memory address, the previous memory address must be
> +   cleared first by writing NULL. Each new memory address requires
> +   validation in the kernel and update of pointers. Changing this
> +   address too many times creates too much overhead.

Can you explain this in more detail? It doesn't make sense to me.
The validation is just access_ok(), which is only a few instructions, isn't it?

Also, I would drop the config symbol. Linux normally doesn't
add a CONFIG option for things like that.

> diff --git a/kernel/fork.c b/kernel/fork.c
> index 9b7d746..7f0d843 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1671,6 +1671,11 @@ long do_fork(unsigned long clone_flags,
>  			init_completion(&vfork);
>  			get_task_struct(p);
>  		}
> +#ifdef CONFIG_SCHED_PREEMPT_DELAY
> +		p->sched_preempt_delay.delay_req = NULL;
> +		p->sched_preempt_delay.delay_granted = 0;
> +		p->sched_preempt_delay.yield_penalty = 0;
> +#endif

FWIW, this would force every new thread to reexecute this.
There is no good way around it, but it could eventually make
thread spawns more expensive if the feature were widely used.

>  
> +#ifdef CONFIG_SCHED_PREEMPT_DELAY
> +	/*
> +	 * Clear the penalty flag for current task to reward it for
> +	 * playing by the rules
> +	 */
> +	current->sched_preempt_delay.yield_penalty = 0;
> +#endif

Doesn't that need to be quantified? After all they may yield
only near the end of their time slice.

> +	}
> +
> +	/*
> +	 * Get the value of preemption delay request flag from userspace.
> +	 * Task had already passed us the address where the flag is stored
> +	 * in userspace earlier. This flag is just like the PROCESS_PRIVATE
> +	 * futex, leverage the futex code here to read the flag. If there

I don't think any of the calls below are futex code.

> +	case PR_GET_PREEMPT_DELAY:
> +		error = put_user(
> +			(unsigned long)current->sched_preempt_delay.delay_req,
> +					(unsigned long __user *)arg2);
> +		break;
> +#endif

Unnecessary cast.

> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -1104,6 +1104,15 @@ static struct ctl_table kern_table[] = {
>  		.proc_handler	= proc_dointvec,
>  	},
>  #endif
> +#ifdef CONFIG_SCHED_PREEMPT_DELAY
> +	{
> +		.procname	= "preempt_delay_available",
> +		.data		= &sysctl_preempt_delay_available,
> +		.maxlen		= sizeof(int),
> +		.mode		= 0600,

Better 0644, so users can tell whether they can use it.

Rest looks reasonable to me.

-Andi
-- 
ak@...ux.intel.com -- Speaking for myself only
