Date:   Thu,  9 Jun 2022 08:19:48 +0800
From:   Steven Lung <1030steven@...il.com>
To:     mingo@...hat.com
Cc:     peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, vschneid@...hat.com,
        linux-kernel@...r.kernel.org, 1030steven@...il.com
Subject: [PATCH] sched/fair: Fix minor grammar issues in comments

Change the capitalized 'CPUS' to 'CPUs', and replace the word
'maybe' with 'may be', which is more appropriate in the sentence
since 'maybe' is an adverb. Also add the missing space after the
full stop in the same comment.

Signed-off-by: Steven Lung <1030steven@...il.com>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 77b2048a9..f0cade37f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -960,8 +960,8 @@ update_stats_wait_end_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
 	/*
 	 * When the sched_schedstat changes from 0 to 1, some sched se
-	 * maybe already in the runqueue, the se->statistics.wait_start
-	 * will be 0.So it will let the delta wrong. We need to avoid this
+	 * may be already in the runqueue, the se->statistics.wait_start
+	 * will be 0. So it will let the delta wrong. We need to avoid this
 	 * scenario.
 	 */
 	if (unlikely(!schedstat_val(stats->wait_start)))
@@ -5851,7 +5851,7 @@ DEFINE_PER_CPU(cpumask_var_t, select_idle_mask);
 static struct {
 	cpumask_var_t idle_cpus_mask;
 	atomic_t nr_cpus;
-	int has_blocked;		/* Idle CPUS has blocked load */
+	int has_blocked;		/* Idle CPUs has blocked load */
 	int needs_update;		/* Newly idle CPUs need their next_balance collated */
 	unsigned long next_balance;     /* in jiffy units */
 	unsigned long next_blocked;	/* Next update of blocked load in jiffies */
-- 
2.35.1
