Message-ID: <79efaafd-01cd-f2da-a821-997999ef5fd9@infradead.org>
Date:   Wed, 24 Mar 2021 16:47:29 -0700
From:   Randy Dunlap <rdunlap@...radead.org>
To:     Bhaskar Chowdhury <unixbhaskar@...il.com>, cl@...ux.com,
        penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, vbabka@...e.cz, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2] mm/slub.c: Trivial typo fixes

On 3/24/21 9:49 PM, Bhaskar Chowdhury wrote:
> s/operatios/operations/
> s/Mininum/Minimum/
> s/mininum/minimum/ (in two different places)
> 
> Signed-off-by: Bhaskar Chowdhury <unixbhaskar@...il.com>

Acked-by: Randy Dunlap <rdunlap@...radead.org>

> ---
>  Changes from V1:
>   Incorporated David's finding, i.e. operation -> operations
>  mm/slub.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 3021ce9bf1b3..75d103ad5d2e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3,7 +3,7 @@
>   * SLUB: A slab allocator that limits cache line use instead of queuing
>   * objects in per cpu and per node lists.
>   *
> - * The allocator synchronizes using per slab locks or atomic operatios
> + * The allocator synchronizes using per slab locks or atomic operations
>   * and only uses a centralized lock to manage a pool of partial slabs.
>   *
>   * (C) 2007 SGI, Christoph Lameter
> @@ -160,7 +160,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
>  #undef SLUB_DEBUG_CMPXCHG
> 
>  /*
> - * Mininum number of partial slabs. These will be left on the partial
> + * Minimum number of partial slabs. These will be left on the partial
>   * lists even if they are empty. kmem_cache_shrink may reclaim them.
>   */
>  #define MIN_PARTIAL 5
> @@ -832,7 +832,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
>   *
>   * 	A. Free pointer (if we cannot overwrite object on free)
>   * 	B. Tracking data for SLAB_STORE_USER
> - * 	C. Padding to reach required alignment boundary or at mininum
> + * 	C. Padding to reach required alignment boundary or at minimum
>   * 		one word if debugging is on to be able to detect writes
>   * 		before the word boundary.
>   *
> @@ -3421,7 +3421,7 @@ static unsigned int slub_min_objects;
>   *
>   * Higher order allocations also allow the placement of more objects in a
>   * slab and thereby reduce object handling overhead. If the user has
> - * requested a higher mininum order then we start with that one instead of
> + * requested a higher minimum order then we start with that one instead of
>   * the smallest order which will fit the object.
>   */
>  static inline unsigned int slab_order(unsigned int size,
> --


-- 
~Randy
