Message-ID: <CAHP3+4DSdvB43JHc2hWwBWQLQG8AoLFBSNuHEPi3_LSKa8vHrQ@mail.gmail.com>
Date: Thu, 9 Oct 2025 14:55:02 +0800
From: Jianyun Gao <jianyungao89@...il.com>
To: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, 
	Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>, 
	Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, 
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, 
	Valentin Schneider <vschneid@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sched: Fix some spelling mistakes in the scheduler module

Hi Madadi,

Thank you for your review. I will fix these in the next version.

On Thu, Oct 9, 2025 at 2:01 PM Madadi Vineeth Reddy
<vineethr@...ux.ibm.com> wrote:
>
> Hi Jianyun,
>
> On 09/10/25 08:16, Jianyun Gao wrote:
> > From: "jianyun.gao" <jianyungao89@...il.com>
> >
> > The following are some spelling mistakes found in the scheduler
> > module. Just fix them!
> >
> >   slection -> selection
> >   achitectures -> architectures
> >   excempt -> exempt
> >   incorectly -> incorrectly
> >   litle -> little
> >   faireness -> fairness
> >   condtion -> condition
> >
> > Signed-off-by: jianyun.gao <jianyungao89@...il.com>
> > ---
> > v3:
> > Change "except" to "exempt" in v2.
>
> It should be "excempt" to "exempt"
>
> > The previous version is here:
> >
> > https://lore.kernel.org/lkml/20250929061213.1659258-1-jianyungao89@gmail.com/
> >
> >  kernel/sched/core.c     | 2 +-
> >  kernel/sched/cputime.c  | 2 +-
> >  kernel/sched/fair.c     | 8 ++++----
> >  kernel/sched/wait_bit.c | 2 +-
> >  4 files changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 7f1e5cb94c53..af5076e40567 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -6858,7 +6858,7 @@ static void __sched notrace __schedule(int sched_mode)
> >               /*
> >                * We pass task_is_blocked() as the should_block arg
> >                * in order to keep mutex-blocked tasks on the runqueue
> > -              * for slection with proxy-exec (without proxy-exec
> > +              * for selection with proxy-exec (without proxy-exec
> >                * task_is_blocked() will always be false).
> >                */
> >               try_to_block_task(rq, prev, &prev_state,
> > diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> > index 7097de2c8cda..2429be5a5e40 100644
> > --- a/kernel/sched/cputime.c
> > +++ b/kernel/sched/cputime.c
> > @@ -585,7 +585,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
> >       stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
> >       /*
> >        * Because mul_u64_u64_div_u64() can approximate on some
> > -      * achitectures; enforce the constraint that: a*b/(b+c) <= a.
> > +      * architectures; enforce the constraint that: a*b/(b+c) <= a.
> >        */
> >       if (unlikely(stime > rtime))
> >               stime = rtime;
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 18a30ae35441..b1c335719f49 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5381,7 +5381,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >               bool delay = sleep;
> >               /*
> >                * DELAY_DEQUEUE relies on spurious wakeups, special task
> > -              * states must not suffer spurious wakeups, excempt them.
> > +              * states must not suffer spurious wakeups, exempt them.
> >                */
> >               if (flags & (DEQUEUE_SPECIAL | DEQUEUE_THROTTLE))
> >                       delay = false;
> > @@ -5842,7 +5842,7 @@ static bool enqueue_throttled_task(struct task_struct *p)
> >        * target cfs_rq's limbo list.
> >        *
> >        * Do not do that when @p is current because the following race can
> > -      * cause @p's group_node to be incorectly re-insterted in its rq's
> > +      * cause @p's group_node to be incorrectly re-insterted in its rq's
>
> s/re-insterted/re-inserted/
>
> Thanks,
> Madadi Vineeth Reddy
>
> >        * cfs_tasks list, despite being throttled:
> >        *
> >        *     cpuX                       cpuY
> > @@ -12161,7 +12161,7 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
> >                * sched_balance_newidle() bumps the cost whenever newidle
> >                * balance fails, and we don't want things to grow out of
> >                * control.  Use the sysctl_sched_migration_cost as the upper
> > -              * limit, plus a litle extra to avoid off by ones.
> > +              * limit, plus a little extra to avoid off by ones.
> >                */
> >               sd->max_newidle_lb_cost =
> >                       min(cost, sysctl_sched_migration_cost + 200);
> > @@ -13176,7 +13176,7 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
> >        * If a task gets attached to this cfs_rq and before being queued,
> >        * it gets migrated to another CPU due to reasons like affinity
> >        * change, make sure this cfs_rq stays on leaf cfs_rq list to have
> > -      * that removed load decayed or it can cause faireness problem.
> > +      * that removed load decayed or it can cause fairness problem.
> >        */
> >       if (!cfs_rq_pelt_clock_throttled(cfs_rq))
> >               list_add_leaf_cfs_rq(cfs_rq);
> > diff --git a/kernel/sched/wait_bit.c b/kernel/sched/wait_bit.c
> > index 1088d3b7012c..47ab3bcd2ebc 100644
> > --- a/kernel/sched/wait_bit.c
> > +++ b/kernel/sched/wait_bit.c
> > @@ -207,7 +207,7 @@ EXPORT_SYMBOL(init_wait_var_entry);
> >   * given variable to change.  wait_var_event() can be waiting for an
> >   * arbitrary condition to be true and associates that condition with an
> >   * address.  Calling wake_up_var() suggests that the condition has been
> > - * made true, but does not strictly require the condtion to use the
> > + * made true, but does not strictly require the condition to use the
> >   * address given.
> >   *
> >   * The wake-up is sent to tasks in a waitqueue selected by hash from a
>
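
As context for the cputime.c hunk: the scaling there is
stime = stime * rtime / (stime + utime), so in exact arithmetic the
result can never exceed rtime (that is the a*b/(b+c) <= a constraint
the comment refers to, with a = rtime, b = stime, c = utime). The
clamp exists only because mul_u64_u64_div_u64() may approximate on
some architectures. A minimal user-space sketch of the same shape,
with the helper stood in by an exact __int128 division; all names are
local to this example:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's mul_u64_u64_div_u64(): computes a * b / c.
 * The real helper can approximate on some architectures, which is why
 * cputime_adjust() re-checks the result afterwards. */
static uint64_t mul_u64_u64_div_u64(uint64_t a, uint64_t b, uint64_t c)
{
	return (uint64_t)(((unsigned __int128)a * b) / c);
}

int main(void)
{
	uint64_t stime = 700, utime = 300, rtime = 1000;

	/* stime * rtime / (stime + utime): the a*b/(b+c) form from the
	 * comment, which mathematically can never exceed rtime. */
	stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);

	/* Enforce the constraint in case the helper over-approximated. */
	if (stime > rtime)
		stime = rtime;

	printf("adjusted stime = %llu\n", (unsigned long long)stime);
	return 0;
}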

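The wait_bit.c hunk touches the wait_var_event()/wake_up_var()
contract described in that comment: the address only selects a hashed
waitqueue, and the condition being waited on is not required to read
that address. A minimal kernel-side sketch of the pairing (my_flag is
a hypothetical variable; memory-ordering refinements are elided):

#include <linux/compiler.h>
#include <linux/wait_bit.h>

static unsigned long my_flag;	/* hypothetical example variable */

/* Waiter: sleep until the condition is observed true. Here the
 * condition reads the same variable whose address keys the hashed
 * waitqueue, but, as the comment says, it does not have to. */
static void example_waiter(void)
{
	wait_var_event(&my_flag, READ_ONCE(my_flag) != 0);
}

/* Waker: make the condition true, then wake any tasks hashed on
 * &my_flag. wake_up_var() only suggests that the condition changed;
 * the waiter re-evaluates it before returning. */
static void example_waker(void)
{
	WRITE_ONCE(my_flag, 1);
	wake_up_var(&my_flag);
}

Because wake_up_var() is only a hint, the waiter re-checks the
condition after each wakeup, which is what makes spurious wake-ups
harmless here.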