Message-Id: <20220517141024.331475-1-juergh@canonical.com>
Date: Tue, 17 May 2022 16:10:24 +0200
From: Juerg Haefliger <juerg.haefliger@...onical.com>
To: nathan@...nel.org, ndesaulniers@...gle.com, trix@...hat.com,
llvm@...ts.linux.dev
Cc: linux-kernel@...r.kernel.org,
Juerg Haefliger <juergh@...onical.com>
Subject: [PATCH] arch: Kconfig: Fix indentation and add comments
The convention for indentation seems to be a single tab. Help text is
further indented by an additional two spaces. Fix the lines that
violate these rules.
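
For illustration, a hypothetical entry following this convention looks
like (keywords indented by a single tab, help text by a tab plus two
spaces):

  # Hypothetical example, not a real config option:
  config EXAMPLE_OPTION
  	bool "An example option"
  	help
  	  Example help text, indented by a tab followed by two spaces.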
While at it, add trailing comments to endmenu statements for better
readability.
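
For example, the endmenu at the end of arch/Kconfig becomes:

  endmenu # "General architecture-dependent options"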
Signed-off-by: Juerg Haefliger <juergh@...onical.com>
---
arch/Kconfig | 130 +++++++++++++++++++++++++--------------------------
1 file changed, 65 insertions(+), 65 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 31c4fdc4a4ba..65091088b1fd 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -28,7 +28,7 @@ config HOTPLUG_SMT
bool
config GENERIC_ENTRY
- bool
+ bool
config KPROBES
bool "Kprobes"
@@ -47,26 +47,26 @@ config JUMP_LABEL
depends on HAVE_ARCH_JUMP_LABEL
depends on CC_HAS_ASM_GOTO
help
- This option enables a transparent branch optimization that
- makes certain almost-always-true or almost-always-false branch
- conditions even cheaper to execute within the kernel.
+ This option enables a transparent branch optimization that
+ makes certain almost-always-true or almost-always-false branch
+ conditions even cheaper to execute within the kernel.
- Certain performance-sensitive kernel code, such as trace points,
- scheduler functionality, networking code and KVM have such
- branches and include support for this optimization technique.
+ Certain performance-sensitive kernel code, such as trace points,
+ scheduler functionality, networking code and KVM have such
+ branches and include support for this optimization technique.
- If it is detected that the compiler has support for "asm goto",
- the kernel will compile such branches with just a nop
- instruction. When the condition flag is toggled to true, the
- nop will be converted to a jump instruction to execute the
- conditional block of instructions.
+ If it is detected that the compiler has support for "asm goto",
+ the kernel will compile such branches with just a nop
+ instruction. When the condition flag is toggled to true, the
+ nop will be converted to a jump instruction to execute the
+ conditional block of instructions.
- This technique lowers overhead and stress on the branch prediction
- of the processor and generally makes the kernel faster. The update
- of the condition is slower, but those are always very rare.
+ This technique lowers overhead and stress on the branch prediction
+ of the processor and generally makes the kernel faster. The update
+ of the condition is slower, but those are always very rare.
- ( On 32-bit x86, the necessary options added to the compiler
- flags may increase the size of the kernel slightly. )
+ ( On 32-bit x86, the necessary options added to the compiler
+ flags may increase the size of the kernel slightly. )
config STATIC_KEYS_SELFTEST
bool "Static key selftest"
@@ -90,9 +90,9 @@ config KPROBES_ON_FTRACE
depends on KPROBES && HAVE_KPROBES_ON_FTRACE
depends on DYNAMIC_FTRACE_WITH_REGS
help
- If function tracer is enabled and the arch supports full
- passing of pt_regs to function tracing, then kprobes can
- optimize on top of function tracing.
+ If function tracer is enabled and the arch supports full
+ passing of pt_regs to function tracing, then kprobes can
+ optimize on top of function tracing.
config UPROBES
def_bool n
@@ -146,21 +146,21 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
config ARCH_USE_BUILTIN_BSWAP
bool
help
- Modern versions of GCC (since 4.4) have builtin functions
- for handling byte-swapping. Using these, instead of the old
- inline assembler that the architecture code provides in the
- __arch_bswapXX() macros, allows the compiler to see what's
- happening and offers more opportunity for optimisation. In
- particular, the compiler will be able to combine the byteswap
- with a nearby load or store and use load-and-swap or
- store-and-swap instructions if the architecture has them. It
- should almost *never* result in code which is worse than the
- hand-coded assembler in <asm/swab.h>. But just in case it
- does, the use of the builtins is optional.
+ Modern versions of GCC (since 4.4) have builtin functions
+ for handling byte-swapping. Using these, instead of the old
+ inline assembler that the architecture code provides in the
+ __arch_bswapXX() macros, allows the compiler to see what's
+ happening and offers more opportunity for optimisation. In
+ particular, the compiler will be able to combine the byteswap
+ with a nearby load or store and use load-and-swap or
+ store-and-swap instructions if the architecture has them. It
+ should almost *never* result in code which is worse than the
+ hand-coded assembler in <asm/swab.h>. But just in case it
+ does, the use of the builtins is optional.
- Any architecture with load-and-swap or store-and-swap
- instructions should set this. And it shouldn't hurt to set it
- on architectures that don't have such instructions.
+ Any architecture with load-and-swap or store-and-swap
+ instructions should set this. And it shouldn't hurt to set it
+ on architectures that don't have such instructions.
config KRETPROBES
def_bool y
@@ -686,13 +686,13 @@ config LTO_CLANG_FULL
depends on !COMPILE_TEST
select LTO_CLANG
help
- This option enables Clang's full Link Time Optimization (LTO), which
- allows the compiler to optimize the kernel globally. If you enable
- this option, the compiler generates LLVM bitcode instead of ELF
- object files, and the actual compilation from bitcode happens at
- the LTO link step, which may take several minutes depending on the
- kernel configuration. More information can be found from LLVM's
- documentation:
+ This option enables Clang's full Link Time Optimization (LTO), which
+ allows the compiler to optimize the kernel globally. If you enable
+ this option, the compiler generates LLVM bitcode instead of ELF
+ object files, and the actual compilation from bitcode happens at
+ the LTO link step, which may take several minutes depending on the
+ kernel configuration. More information can be found from LLVM's
+ documentation:
https://llvm.org/docs/LinkTimeOptimization.html
@@ -1284,9 +1284,9 @@ config ARCH_HAS_CC_PLATFORM
bool
config HAVE_SPARSE_SYSCALL_NR
- bool
- help
- An architecture should select this if its syscall numbering is sparse
+ bool
+ help
+ An architecture should select this if its syscall numbering is sparse
to save space. For example, MIPS architecture has a syscall array with
entries at 4000, 5000 and 6000 locations. This option turns on syscall
related optimizations for a given architecture.
@@ -1309,35 +1309,35 @@ config HAVE_PREEMPT_DYNAMIC_CALL
depends on HAVE_STATIC_CALL
select HAVE_PREEMPT_DYNAMIC
help
- An architecture should select this if it can handle the preemption
- model being selected at boot time using static calls.
+ An architecture should select this if it can handle the preemption
+ model being selected at boot time using static calls.
- Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
- preemption function will be patched directly.
+ Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
+ preemption function will be patched directly.
- Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
- call to a preemption function will go through a trampoline, and the
- trampoline will be patched.
+ Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
+ call to a preemption function will go through a trampoline, and the
+ trampoline will be patched.
- It is strongly advised to support inline static call to avoid any
- overhead.
+ It is strongly advised to support inline static call to avoid any
+ overhead.
config HAVE_PREEMPT_DYNAMIC_KEY
bool
depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
select HAVE_PREEMPT_DYNAMIC
help
- An architecture should select this if it can handle the preemption
- model being selected at boot time using static keys.
+ An architecture should select this if it can handle the preemption
+ model being selected at boot time using static keys.
- Each preemption function will be given an early return based on a
- static key. This should have slightly lower overhead than non-inline
- static calls, as this effectively inlines each trampoline into the
- start of its callee. This may avoid redundant work, and may
- integrate better with CFI schemes.
+ Each preemption function will be given an early return based on a
+ static key. This should have slightly lower overhead than non-inline
+ static calls, as this effectively inlines each trampoline into the
+ start of its callee. This may avoid redundant work, and may
+ integrate better with CFI schemes.
- This will have greater overhead than using inline static calls as
- the call to the preemption function cannot be entirely elided.
+ This will have greater overhead than using inline static calls as
+ the call to the preemption function cannot be entirely elided.
config ARCH_WANT_LD_ORPHAN_WARN
bool
@@ -1360,8 +1360,8 @@ config ARCH_SUPPORTS_PAGE_TABLE_CHECK
config ARCH_SPLIT_ARG64
bool
help
- If a 32-bit architecture requires 64-bit arguments to be split into
- pairs of 32-bit arguments, select this option.
+ If a 32-bit architecture requires 64-bit arguments to be split into
+ pairs of 32-bit arguments, select this option.
config ARCH_HAS_ELFCORE_COMPAT
bool
@@ -1380,4 +1380,4 @@ source "kernel/gcov/Kconfig"
source "scripts/gcc-plugins/Kconfig"
-endmenu
+endmenu # "General architecture-dependent options"
--
2.32.0