Date:   Mon, 30 Jan 2023 13:38:15 -0800
From:   Josh Don <joshdon@...gle.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     torvalds@...ux-foundation.org, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, vschneid@...hat.com, ast@...nel.org,
        daniel@...earbox.net, andrii@...nel.org, martin.lau@...nel.org,
        brho@...gle.com, pjt@...gle.com, derkling@...gle.com,
        haoluo@...gle.com, dvernet@...a.com, dschatzberg@...a.com,
        dskarlat@...cmu.edu, riel@...riel.com,
        linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
        kernel-team@...a.com
Subject: Re: [PATCH 27/30] sched_ext: Implement core-sched support

Hi Tejun,

On Fri, Jan 27, 2023 at 4:17 PM Tejun Heo <tj@...nel.org> wrote:
>
> The core-sched support is composed of the following parts:

Thanks, this looks pretty reasonable overall.

One meta comment: I think we can short-circuit out of touch_core_sched()
when sched_core_disabled() is true.

Reviewed-by: Josh Don <joshdon@...gle.com>

> +                       /*
> +                        * While core-scheduling, rq lock is shared among
> +                        * siblings but the debug annotations and rq clock
> +                        * aren't. Do pinning dance to transfer the ownership.
> +                        */
> +                       WARN_ON_ONCE(__rq_lockp(rq) != __rq_lockp(srq));
> +                       rq_unpin_lock(rq, rf);
> +                       rq_pin_lock(srq, &srf);
> +
> +                       update_rq_clock(srq);

It's unfortunate that we have to do this superfluous update; maybe we
can save/restore the clock flags from before the pinning shenanigans?

> +static struct task_struct *pick_task_scx(struct rq *rq)
> +{
> +       struct task_struct *curr = rq->curr;
> +       struct task_struct *first = first_local_task(rq);
> +
> +       if (curr->scx.flags & SCX_TASK_QUEUED) {
> +               /* is curr the only runnable task? */
> +               if (!first)
> +                       return curr;
> +
> +               /*
> +                * Does curr trump first? We can always go by core_sched_at for
> +                * this comparison as it represents global FIFO ordering when
> +                * the default core-sched ordering is in use and local-DSQ FIFO
> +                * ordering otherwise.
> +                */
> +               if (curr->scx.slice && time_before64(curr->scx.core_sched_at,
> +                                                    first->scx.core_sched_at))
> +                       return curr;

So is this to handle the case where something is running on 'rq' to
match the cookie of our sibling (which had priority), but now we want
to switch to running the first task in the local queue, which has a
different cookie (and is now the highest-priority entity)? Being
slightly more specific in the comment would help :)
