Message-ID: <7d39f26d-3c9f-4ee4-977c-87f9bed0bac1@huawei.com>
Date: Thu, 18 Jul 2024 14:04:47 +0800
From: "Zhangqiao (2012 lab)" <zhangqiao22@...wei.com>
To: Tejun Heo <tj@...nel.org>
CC: <void@...ifault.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH sched_ext/for-6.11] sched_ext: Reverting @p->sched_class
if @p->disallow is set
On 2024/7/18 1:49, Tejun Heo wrote:
> On Wed, Jul 17, 2024 at 10:01:13AM +0800, Zhangqiao (2012 lab) wrote:
>>> Ah, I see what you mean. I was referring to the class switching operations
>>> in scx_ops_enable(). You're looking at the fork path. I don't think we can
>>
>> Yes, I was referring to the fork path.
>>
>>> switch sched_class at that point and the .disallow mechanism is there to
>>> allow the scheduler to filter out tasks on scheduler start. I'll update the
>>> code so that .disallow is only allowed during the initial attach.
>
> So, something like this.
>
LGTM for this patch.
In addition, @scx_nr_rejected is only updated while the BPF
scheduler is being loaded, and that update is protected by
scx_ops_enable_mutex. Given that, would it be appropriate to change
@scx_nr_rejected's type from atomic to a plain int?
> Thanks.
>
> diff --git a/include/linux/sched/ext.h b/include/linux/sched/ext.h
> index 593d2f4909dd..a4aa516cee7d 100644
> --- a/include/linux/sched/ext.h
> +++ b/include/linux/sched/ext.h
> @@ -181,11 +181,12 @@ struct sched_ext_entity {
> * If set, reject future sched_setscheduler(2) calls updating the policy
> * to %SCHED_EXT with -%EACCES.
> *
> - * If set from ops.init_task() and the task's policy is already
> - * %SCHED_EXT, which can happen while the BPF scheduler is being loaded
> - * or by inhering the parent's policy during fork, the task's policy is
> - * rejected and forcefully reverted to %SCHED_NORMAL. The number of
> - * such events are reported through /sys/kernel/debug/sched_ext::nr_rejected.
> + * Can be set from ops.init_task() while the BPF scheduler is being
> + * loaded (!scx_init_task_args->fork). If set and the task's policy is
> + * already %SCHED_EXT, the task's policy is rejected and forcefully
> + * reverted to %SCHED_NORMAL. The number of such events is reported
> + * through /sys/kernel/debug/sched_ext::nr_rejected. Setting this flag
> + * during fork is not allowed.
> */
> bool disallow; /* reject switching into SCX */
>
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index da9cac6b6cc2..cf60474efa75 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -3399,18 +3399,17 @@ static int scx_ops_init_task(struct task_struct *p, struct task_group *tg, bool
>
> scx_set_task_state(p, SCX_TASK_INIT);
>
> - if (p->scx.disallow) {
> + if (!fork && p->scx.disallow) {
> struct rq *rq;
> struct rq_flags rf;
>
> rq = task_rq_lock(p, &rf);
>
> /*
> - * We're either in fork or load path and @p->policy will be
> - * applied right after. Reverting @p->policy here and rejecting
> - * %SCHED_EXT transitions from scx_check_setscheduler()
> - * guarantees that if ops.init_task() sets @p->disallow, @p can
> - * never be in SCX.
> + * We're in the load path and @p->policy will be applied right
> + * after. Reverting @p->policy here and rejecting %SCHED_EXT
> + * transitions from scx_check_setscheduler() guarantees that if
> + * ops.init_task() sets @p->disallow, @p can never be in SCX.
> */
> if (p->policy == SCHED_EXT) {
> p->policy = SCHED_NORMAL;
> @@ -3418,6 +3417,9 @@ static int scx_ops_init_task(struct task_struct *p, struct task_group *tg, bool
> }
>
> task_rq_unlock(rq, p, &rf);
> + } else if (p->scx.disallow) {
> + scx_ops_error("ops.init_task() set task->scx.disallow for %s[%d] during fork",
> + p->comm, p->pid);
> }
>
> p->scx.flags |= SCX_TASK_RESET_RUNNABLE_AT;
>