Message-ID: <a2df1641-cd13-4516-afa5-546bb9ef8608@infradead.org>
Date: Sat, 6 Apr 2024 10:30:23 -0700
From: Randy Dunlap <rdunlap@...radead.org>
To: Kuan-Wei Chiu <visitorckw@...il.com>, colyli@...e.de,
kent.overstreet@...ux.dev, msakai@...hat.com, peterz@...radead.org,
mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
akpm@...ux-foundation.org
Cc: bfoster@...hat.com, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, jolsa@...nel.org, irogers@...gle.com,
adrian.hunter@...el.com, jserv@...s.ncku.edu.tw,
linux-bcache@...r.kernel.org, linux-kernel@...r.kernel.org,
dm-devel@...ts.linux.dev, linux-bcachefs@...r.kernel.org,
linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v3 01/17] perf/core: Fix several typos
On 4/6/24 9:47 AM, Kuan-Wei Chiu wrote:
> Replace 'artifically' with 'artificially'.
> Replace 'irrespecive' with 'irrespective'.
> Replace 'futher' with 'further'.
> Replace 'sufficent' with 'sufficient'.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@...il.com>
> Reviewed-by: Ian Rogers <irogers@...gle.com>
Reviewed-by: Randy Dunlap <rdunlap@...radead.org>
Thanks.
> ---
> kernel/events/core.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 724e6d7e128f..10ac2db83f14 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -534,7 +534,7 @@ void perf_sample_event_took(u64 sample_len_ns)
> __this_cpu_write(running_sample_length, running_len);
>
> /*
> - * Note: this will be biased artifically low until we have
> + * Note: this will be biased artificially low until we have
> * seen NR_ACCUMULATED_SAMPLES. Doing it this way keeps us
> * from having to maintain a count.
> */
> @@ -596,10 +596,10 @@ static inline u64 perf_event_clock(struct perf_event *event)
> *
> * Event groups make things a little more complicated, but not terribly so. The
> * rules for a group are that if the group leader is OFF the entire group is
> - * OFF, irrespecive of what the group member states are. This results in
> + * OFF, irrespective of what the group member states are. This results in
> * __perf_effective_state().
> *
> - * A futher ramification is that when a group leader flips between OFF and
> + * A further ramification is that when a group leader flips between OFF and
> * !OFF, we need to update all group member times.
> *
> *
> @@ -891,7 +891,7 @@ static int perf_cgroup_ensure_storage(struct perf_event *event,
> int cpu, heap_size, ret = 0;
>
> /*
> - * Allow storage to have sufficent space for an iterator for each
> + * Allow storage to have sufficient space for an iterator for each
> * possibly nested cgroup plus an iterator for events with no cgroup.
> */
> for (heap_size = 1; css; css = css->parent)
--
#Randy