Message-Id: <20201221044037.15197-1-rdunlap@infradead.org>
Date: Sun, 20 Dec 2020 20:40:37 -0800
From: Randy Dunlap <rdunlap@...radead.org>
To: linux-kernel@...r.kernel.org
Cc: Randy Dunlap <rdunlap@...radead.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>
Subject: [PATCH v2] kernel: events: delete repeated words in comments
Drop repeated words in kernel/events/.
{if, the, that, with, time}
Signed-off-by: Randy Dunlap <rdunlap@...radead.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
---
v2: rebase, resend
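
(Not part of this patch or its workflow; just an illustration of how doubled
words such as "the the" or "if if" can be spotted. A minimal standalone
scanner, built outside the kernel tree, that flags lines where the same word
appears twice in a row. The single-file argument and the case-insensitive
match are assumptions, and hits like "long long" still need manual review.)

#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <strings.h>

int main(int argc, char **argv)
{
	char line[1024], prev[256], word[256];
	FILE *fp;
	int lineno = 0;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fp = fopen(argv[1], "r");
	if (!fp) {
		perror(argv[1]);
		return 1;
	}
	while (fgets(line, sizeof(line), fp)) {
		const char *p = line;

		lineno++;
		prev[0] = '\0';
		while (*p) {
			size_t n = 0;

			/* skip to the start of the next word */
			while (*p && !isalpha((unsigned char)*p))
				p++;
			/* copy the word */
			while (isalpha((unsigned char)*p) && n < sizeof(word) - 1)
				word[n++] = *p++;
			word[n] = '\0';
			if (!n)
				continue;
			/* report adjacent identical words, e.g. "the the" */
			if (prev[0] && !strcasecmp(prev, word))
				printf("%s:%d: repeated \"%s\"\n",
				       argv[1], lineno, word);
			strcpy(prev, word);
		}
	}
	fclose(fp);
	return 0;
}
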
kernel/events/core.c | 8 ++++----
kernel/events/uprobes.c | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
--- linux-next-20201218.orig/kernel/events/core.c
+++ linux-next-20201218/kernel/events/core.c
@@ -268,7 +268,7 @@ static void event_function_call(struct p
if (!event->parent) {
/*
* If this is a !child event, we must hold ctx::mutex to
- * stabilize the the event->ctx relation. See
+ * stabilize the event->ctx relation. See
* perf_event_ctx_lock().
*/
lockdep_assert_held(&ctx->mutex);
@@ -1301,7 +1301,7 @@ static void put_ctx(struct perf_event_co
* life-time rules separate them. That is an exiting task cannot fork, and a
* spawning task cannot (yet) exit.
*
- * But remember that that these are parent<->child context relations, and
+ * But remember that these are parent<->child context relations, and
* migration does not affect children, therefore these two orderings should not
* interact.
*
@@ -1440,7 +1440,7 @@ static u64 primary_event_id(struct perf_
/*
* Get the perf_event_context for a task and lock it.
*
- * This has to cope with with the fact that until it is locked,
+ * This has to cope with the fact that until it is locked,
* the context could get moved to another task.
*/
static struct perf_event_context *
@@ -2499,7 +2499,7 @@ static void perf_set_shadow_time(struct
* But this is a bit hairy.
*
* So instead, we have an explicit cgroup call to remain
- * within the time time source all along. We believe it
+ * within the time source all along. We believe it
* is cleaner and simpler to understand.
*/
if (is_cgroup_event(event))
--- linux-next-20201218.orig/kernel/events/uprobes.c
+++ linux-next-20201218/kernel/events/uprobes.c
@@ -1735,7 +1735,7 @@ void uprobe_free_utask(struct task_struc
}
/*
- * Allocate a uprobe_task object for the task if if necessary.
+ * Allocate a uprobe_task object for the task if necessary.
* Called when the thread hits a breakpoint.
*
* Returns: