Message-ID: <20190920190656.GH4865@kernel.org>
Date: Fri, 20 Sep 2019 16:06:56 -0300
From: Arnaldo Carvalho de Melo <arnaldo.melo@...il.com>
To: Roy Ben Shlomo <royb@...tinelone.com>
Cc: Roy Ben Shlomo <roy.benshlomo@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/core: fixing several typos in comments
On Fri, Sep 20, 2019 at 08:12:53PM +0300, Roy Ben Shlomo wrote:
> From: Roy Ben Shlomo <roy.benshlomo@...il.com>
Thanks, applied.
- Arnaldo
> Fixing typos in a few functions' documentation comments
> Signed-off-by: Roy Ben Shlomo <royb@...tinelone.com>
> ---
> kernel/events/core.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 4f08b17d6426..275eae05af20 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2239,7 +2239,7 @@ static void __perf_event_disable(struct perf_event *event,
> *
> * If event->ctx is a cloned context, callers must make sure that
> * every task struct that event->ctx->task could possibly point to
> - * remains valid. This condition is satisifed when called through
> + * remains valid. This condition is satisfied when called through
> * perf_event_for_each_child or perf_event_for_each because they
> * hold the top-level event's child_mutex, so any descendant that
> * goes to exit will block in perf_event_exit_event().
> @@ -6054,7 +6054,7 @@ static void perf_sample_regs_intr(struct perf_regs *regs_intr,
> * Get remaining task size from user stack pointer.
> *
> * It'd be better to take stack vma map and limit this more
> - * precisly, but there's no way to get it safely under interrupt,
> + * precisely, but there's no way to get it safely under interrupt,
> * so using TASK_SIZE as limit.
> */
> static u64 perf_ustack_task_size(struct pt_regs *regs)
> @@ -6616,7 +6616,7 @@ void perf_prepare_sample(struct perf_event_header *header,
>
> if (sample_type & PERF_SAMPLE_STACK_USER) {
> /*
> - * Either we need PERF_SAMPLE_STACK_USER bit to be allways
> + * Either we need PERF_SAMPLE_STACK_USER bit to be always
> * processed as the last one or have additional check added
> * in case new sample type is added, because we could eat
> * up the rest of the sample size.
> --
> 2.20.1
--
- Arnaldo
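
For context, the comment fixed in the second hunk documents perf_ustack_task_size(). Below is a minimal userspace sketch of the clamping logic that comment describes, using a hypothetical TASK_SIZE value for illustration; the real kernel function reads the user stack pointer from struct pt_regs rather than taking it as an argument:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel's per-arch TASK_SIZE limit. */
#define TASK_SIZE 0x7ffffffff000ULL

/*
 * Sketch of the idea: the stack VMA bounds would give a more precise
 * limit, but they cannot be taken safely in interrupt context, so the
 * remaining user stack estimate is clamped to TASK_SIZE instead.
 */
static uint64_t ustack_task_size(uint64_t sp)
{
	if (sp == 0 || sp >= TASK_SIZE)
		return 0;		/* no usable user stack pointer */
	return TASK_SIZE - sp;		/* bytes from sp up to the limit */
}

int main(void)
{
	/* Example: a stack pointer a little below the illustrative limit. */
	printf("%llu\n", (unsigned long long)ustack_task_size(0x7ffff0000000ULL));
	return 0;
}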