Message-Id: <20230109095108.21229-4-bagasdotme@gmail.com>
Date: Mon, 9 Jan 2023 16:51:03 +0700
From: Bagas Sanjaya <bagasdotme@...il.com>
To: Jonathan Corbet <corbet@....net>,
Yann Sionneau <ysionneau@...ray.eu>
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
Clement Leger <clement.leger@...tlin.com>,
Guillaume Thouvenin <gthouvenin@...ray.eu>,
Bagas Sanjaya <bagasdotme@...il.com>
Subject: [PATCH 3/8] Documentation: kvx: Fix lists
Many "unexpected indentation" and block quote warnings are generated due
to errors in lists. Fix them up by:
* Align lists texts just after the lists marker
* Add required blank line between nested lists and between paragraphs
and the lists
* Use appropriate syntax for numbered lists
While at it, also lightly reword.
Signed-off-by: Bagas Sanjaya <bagasdotme@...il.com>
---
Documentation/kvx/kvx-exceptions.rst | 53 ++++++++++++++-----------
Documentation/kvx/kvx-iommu.rst | 3 +-
Documentation/kvx/kvx-mmu.rst | 37 +++++++++--------
Documentation/kvx/kvx.rst | 59 +++++++++++++++-------------
4 files changed, 85 insertions(+), 67 deletions(-)
diff --git a/Documentation/kvx/kvx-exceptions.rst b/Documentation/kvx/kvx-exceptions.rst
index bb9010efb14196..bd485efd2362c1 100644
--- a/Documentation/kvx/kvx-exceptions.rst
+++ b/Documentation/kvx/kvx-exceptions.rst
@@ -5,6 +5,7 @@ specifies a base address.
An offset is added to $ev upon exception and the result is used as
"Next $pc".
The offset depends on which exception vector the cpu wants to jump to:
+
* $ev + 0x00 for debug
* $ev + 0x40 for trap
* $ev + 0x80 for interrupt
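
For illustration, the computation of "Next $pc" boils down to the
following C sketch (the offsets are the ones listed above; the enum and
function names are hypothetical)::

    #include <stdint.h>

    /* Offsets added to $ev, from the list above. */
    enum kvx_exc_offset {
            KVX_EXC_DEBUG     = 0x00,
            KVX_EXC_TRAP      = 0x40,
            KVX_EXC_INTERRUPT = 0x80,
    };

    /* "Next $pc" is the exception vector base plus the class offset. */
    static inline uint64_t kvx_next_pc(uint64_t ev, enum kvx_exc_offset off)
    {
            return ev + (uint64_t)off;
    }
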
@@ -28,6 +29,7 @@ Then, handlers are laid in the following order::
Interrupts and traps are serviced similarly, i.e.:
+
- Jump to handler
- Save all registers
- Prepare the call (do_IRQ or trap_handler)
@@ -58,12 +60,15 @@ The following steps are then taken:
- Switch to kernel stack
- Extract syscall number
-- Check that the syscall number is not bogus
- - If so, set syscall func to a not implemented one
-- Check if tracing is enabled
- - If so, jump to trace_syscall_enter
+- Check that the syscall number is not bogus.
+  If it is, set the syscall function to a not-implemented one.
+
+- Check if tracing is enabled.
+  If so, jump to trace_syscall_enter, then:
+
+  - Save syscall arguments (r0 -> r7) on stack in pt_regs
+  - Call the do_trace_syscall_enter function
+
- Restore syscall arguments since they have been modified by C call
- Call the syscall function
- Save $r0 in pt_regs since it can be clobbered afterward
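
As a rough C model of the syscall path above (do_trace_syscall_enter is
named in the text; every other identifier here is an assumption made for
illustration, not the actual kvx implementation)::

    #define NR_SYSCALLS_MODEL 512 /* hypothetical table size */

    struct pt_regs_model { long r0, r1, r2, r3, r4, r5, r6, r7; };
    typedef long (*syscall_fn)(long, long, long, long, long, long);

    extern syscall_fn syscall_table[NR_SYSCALLS_MODEL];
    extern long sys_not_implemented(long, long, long, long, long, long);
    extern int tracing_enabled(void);
    extern void do_trace_syscall_enter(struct pt_regs_model *regs);

    long dispatch_syscall(unsigned long nr, struct pt_regs_model *regs)
    {
            /* Bogus syscall number: fall back to the stub. */
            syscall_fn fn = (nr < NR_SYSCALLS_MODEL) ? syscall_table[nr]
                                                     : sys_not_implemented;

            /* Tracing: arguments were saved in pt_regs by the entry code. */
            if (tracing_enabled())
                    do_trace_syscall_enter(regs);

            /* Arguments are taken from pt_regs since the trace call may
             * have clobbered the live registers. */
            regs->r0 = fn(regs->r0, regs->r1, regs->r2,
                          regs->r3, regs->r4, regs->r5);
            return regs->r0;
    }
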
@@ -80,24 +85,28 @@ Signals
Signals are handled when exiting kernel before returning to user.
When handling a signal, the path is the following:
-1 - User application is executing normally
- Then any exception happens (syscall, interrupt, trap)
-2 - The exception handling path is taken
- and before returning to user, pending signals are checked
-3 - Signal are handled by do_signal
- Registers are saved and a special part of the stack is modified
- to create a trampoline to call rt_sigreturn
- $spc is modified to jump to user signal handler
- $ra is modified to jump to sigreturn trampoline directly after
- returning from user signal handler.
-4 - User signal handler is called after rfe from exception
- when returning, $ra is retored to $pc, resulting in a call
- to the syscall trampoline.
-5 - syscall trampoline is executed, leading to rt_sigreturn syscall
-6 - rt_sigreturn syscall is executed
- Previous registers are restored to allow returning to user correctly
-7 - User application is restored at the exact point it was interrupted
- before.
+1. User application is executing normally, then an exception occurs
+   (syscall, interrupt, trap).
+2. The exception handling path is taken, and before returning to user,
+   pending signals are checked.
+
+3. The signal handling path is as follows:
+
+   * Signals are handled by do_signal.
+   * Registers are saved and a special part of the stack is modified
+     to create a trampoline to call rt_sigreturn.
+   * $spc is modified to jump to the user signal handler.
+   * $ra is modified to jump to the sigreturn trampoline directly after
+     returning from the user signal handler.
+
+4. User signal handler is called after rfe from exception.
+   When returning, $ra is restored to $pc, resulting in a call
+   to the syscall trampoline.
+5. The syscall trampoline is executed, leading to the rt_sigreturn syscall.
+6. The rt_sigreturn syscall is executed.
+   Previous registers are restored to allow returning to user correctly.
+7. User application is restored at the exact point it was interrupted
+   before.
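
The register fiddling done in step 3 amounts to the following C sketch
(the structure and its fields are illustrative placeholders, not the
real kvx pt_regs layout)::

    struct sig_regs_model {
            unsigned long spc; /* pc that rfe resumes at */
            unsigned long ra;  /* return address register */
    };

    void setup_signal_frame(struct sig_regs_model *regs,
                            unsigned long handler,
                            unsigned long sigreturn_trampoline)
    {
            /* Registers are saved on the stack first (elided here). */

            /* $spc -> user handler, so rfe enters the signal handler. */
            regs->spc = handler;

            /* $ra -> trampoline, so returning from the handler directly
             * triggers the rt_sigreturn syscall. */
            regs->ra = sigreturn_trampoline;
    }
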
::
diff --git a/Documentation/kvx/kvx-iommu.rst b/Documentation/kvx/kvx-iommu.rst
index 69eca8d1bc37a1..f7f61777eee21e 100644
--- a/Documentation/kvx/kvx-iommu.rst
+++ b/Documentation/kvx/kvx-iommu.rst
@@ -32,7 +32,8 @@ Cluster IOMMUs
--------------
IOMMUs on cluster are used for DMA and cryptographic accelerators.
-There are six IOMMUs connected to the:
+There are six IOMMUs, connected to the following:
+
- cluster DMA tx
- cluster DMA rx
- first non secure cryptographic accelerator
diff --git a/Documentation/kvx/kvx-mmu.rst b/Documentation/kvx/kvx-mmu.rst
index 832fb7c41a49d8..faa6bda2c39959 100644
--- a/Documentation/kvx/kvx-mmu.rst
+++ b/Documentation/kvx/kvx-mmu.rst
@@ -77,6 +77,7 @@ routine which would (obviously) not work... Once this is done, we can flush the
entries and that new entries inserted in JTLB will apply.
By default, the following policy is applied on vmlinux sections:
+
- init_data: RW
- init_text: RX (or RWX if parameter rodata=off)
- text: RX (or RWX if parameter rodata=off)
@@ -92,8 +93,9 @@ spaces to be in the same space as the user. The kernel will have the
$ps.mmup set in kernel (PL1) and unset for user (PL2).
As said in kvx documentation, we have two cases when the kernel is
booted:
-- Either we have been booted by someone (bootloader, hypervisor, etc)
-- Or we are alone (boot from flash)
+
+- Boot via intermediaries (bootloader, hypervisor, etc)
+- Direct boot from flash
In both cases, we will use the virtual space 0. Indeed, if we are alone
on the core, then it means nobody is using the MMU and we can take the
@@ -115,6 +117,7 @@ setup_bootmem: Memory : 0x100000000 - 0x120000000
setup_bootmem: Reserved: 0x10001f000 - 0x1002d1bc0
During the paging init we need to set:
+
- min_low_pfn that is the lowest PFN available in the system
- max_low_pfn that indicates the end of the NORMAL zone
- max_pfn that is the number of pages in the system
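
A hedged sketch of how these three values could be derived with the
generic memblock helpers (whether kvx computes them exactly this way is
not shown here)::

    #include <linux/memblock.h>
    #include <linux/pfn.h>

    static void __init paging_init_sketch(void)
    {
            /* Lowest and highest page frame numbers backed by RAM. */
            min_low_pfn = PFN_UP(memblock_start_of_DRAM());
            max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());

            /* Assuming no highmem, max_pfn coincides with max_low_pfn. */
            max_pfn = max_low_pfn;
    }
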
@@ -213,16 +216,16 @@ kvx core does not feature a hardware page walker. This work must be done
by the core in software. In order to optimize TLB refill, a special fast
path is taken when entering in kernel space.
In order to speed up the process, the following actions are taken:
-# Save some registers in a per process scratchpad
-# If the trap is a nomapping then try the fastpath
-# Save some more registers for this fastpath
-# Check if faulting address is a memory direct mapping one.
- # If entry is a direct mapping one and RWX is not enabled, add an entry into LTLB
- # If not, continue
-# Try to walk the page table
- # If entry is not present, take the slowpath (do_page_fault)
-# Refill the tlb properly
-# Exit by restoring only a few registers
+
+1. Save some registers in a per process scratchpad
+2. If the trap is a nomapping then try the fastpath
+3. Save some more registers for this fastpath
+4. Check if the faulting address is a direct memory mapping one. If it is
+   and RWX is not enabled, add an entry into the LTLB.
+   Otherwise, continue.
+5. Try to walk the page table. If entry is not present, take the slowpath (do_page_fault)
+6. Refill the tlb properly
+7. Exit by restoring only a few registers
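
The fastpath above can be summarized in C as follows (every helper name
is hypothetical; the real implementation is assembly)::

    extern void save_scratchpad_regs(void);             /* step 1 */
    extern int  trap_is_nomapping(void);                /* step 2 */
    extern void take_generic_trap_path(void);
    extern void save_fastpath_regs(void);               /* step 3 */
    extern int  is_direct_mapping(unsigned long addr);  /* step 4 */
    extern int  rwx_enabled(void);
    extern void ltlb_add_entry(unsigned long addr);
    extern long *walk_page_table(unsigned long addr);   /* step 5 */
    extern int  pte_present_model(long pte);
    extern void do_page_fault(unsigned long addr);      /* slowpath */
    extern void jtlb_refill(long *pte);                 /* step 6 */
    extern void restore_few_regs_and_exit(void);        /* step 7 */

    void tlb_refill_fastpath(unsigned long addr)
    {
            save_scratchpad_regs();                          /* 1 */
            if (!trap_is_nomapping()) {                      /* 2 */
                    take_generic_trap_path();
                    return;
            }
            save_fastpath_regs();                            /* 3 */
            if (is_direct_mapping(addr) && !rwx_enabled()) { /* 4 */
                    ltlb_add_entry(addr);
            } else {
                    long *pte = walk_page_table(addr);       /* 5 */
                    if (!pte_present_model(*pte)) {
                            do_page_fault(addr);             /* slowpath */
                            return;
                    }
                    jtlb_refill(pte);                        /* 6 */
            }
            restore_few_regs_and_exit();                     /* 7 */
    }
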
ASN Handling
============
@@ -273,13 +276,15 @@ Debug
In order to debug the page table and tlb entries, gdb scripts contain commands
which allow dumping the page table:
+
- lx-kvx-page-table-walk
- - Display the current process page table by default
+  Display the current process page table by default
- lx-kvx-tlb-decode
- - Display the content of $tel and $teh into something readable
+  Display the content of $tel and $teh into something readable
Other commands available in kvx-gdb are the following:
+
- mppa-dump-tlb
- - Display the content of TLBs (JTLB and LTLB)
+  Display the content of TLBs (JTLB and LTLB)
- mppa-lookup-addr
- - Find physical address matching a virtual one
+  Find physical address matching a virtual one
diff --git a/Documentation/kvx/kvx.rst b/Documentation/kvx/kvx.rst
index 20c3666f445e26..8982d10f2678df 100644
--- a/Documentation/kvx/kvx.rst
+++ b/Documentation/kvx/kvx.rst
@@ -74,6 +74,7 @@ When entering the C (kvx_lowlevel_start) the kernel will look for a special
magic in $r0 (0x494C314B). This magic tells the kernel if there are arguments
passed by a bootloader.
Currently, the following values are passed through registers:
+
- r1: pointer to command line setup by bootloader
- r2: device tree
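
A minimal C sketch of this handshake (0x494C314B is the magic from the
text; the variable and function names are made up for illustration)::

    #define KVX_BOOT_MAGIC 0x494C314BUL

    static const char *boot_command_line_ptr; /* hypothetical storage */
    static const void *boot_dtb_ptr;

    void handle_boot_args(unsigned long r0, unsigned long r1,
                          unsigned long r2)
    {
            if (r0 != KVX_BOOT_MAGIC)
                    return; /* no bootloader arguments were passed */

            boot_command_line_ptr = (const char *)r1; /* command line */
            boot_dtb_ptr          = (const void *)r2; /* device tree */
    }
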
@@ -105,11 +106,13 @@ of the LTLB.
The first entry will map the first 1G of virtual address space to the first
1G of DDR:
+
+- TLB[0]: 0xffffff0000000000 -> 0x100000000 (size 512 MB)
The second entry will be a flat mapping of the first 512 KB of the SMEM. It
is required to have this flat mapping because there is still code located at
this address that needs to be executed:
+
+- TLB[1]: 0x0 -> 0x0 (size 512 KB)
Once the virtual space is reached, the second entry is removed.
@@ -151,19 +154,19 @@ r20 and r21 to special values containing the function to call.
The normal path for a kernel thread will be the following:
- 1 - Enter copy_thread_tls and setup callee saved registers which will
- be restored in __switch_to.
- 2 - set r20 and r21 (in thread_struct) to function and argument and
- ra to ret_from_kernel_thread.
- These callee saved will be restored in switch_to.
- 3 - Call _switch_to at some point.
- 4 - Save all callee saved register since switch_to is seen as a
- standard function call by the caller.
- 5 - Change stack pointer to the new stack
- 6 - At the end of switch to, set sr0 to the new task and use ret to
- jump to ret_from_kernel_thread (address restored from ra).
- 7 - In ret_from_kernel_thread, execute the function with arguments by
- using r20, r21 and we are done
+ 1. Enter copy_thread_tls and setup callee-saved registers which will
+    be restored in __switch_to.
+ 2. Set r20 and r21 (in thread_struct) to the function and its argument,
+    and ra to ret_from_kernel_thread.
+    These callee-saved registers will be restored in switch_to.
+ 3. Call _switch_to at some point.
+ 4. Save all callee-saved registers since switch_to is seen as a
+    standard function call by the caller.
+ 5. Change the stack pointer to the new stack.
+ 6. At the end of switch_to, set sr0 to the new task and use ret to
+    jump to ret_from_kernel_thread (address restored from ra).
+ 7. In ret_from_kernel_thread, execute the function with its arguments
+    by using r20 and r21, and we are done.
For more explanation, you can refer to https://lwn.net/Articles/520227/
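
Steps 1 and 2 can be modeled in C like this (the structure below is an
illustrative stand-in for the relevant thread_struct fields, not the
real layout)::

    struct kvx_thread_model {
            unsigned long ra;  /* where the final ret will jump */
            unsigned long r20; /* function to execute */
            unsigned long r21; /* its argument */
    };

    extern void ret_from_kernel_thread(void);

    void setup_kernel_thread(struct kvx_thread_model *t,
                             int (*fn)(void *), void *arg)
    {
            t->r20 = (unsigned long)fn;  /* step 2: stash the function */
            t->r21 = (unsigned long)arg; /* step 2: ... and its argument */

            /* Step 2: the ret at the end of switch_to jumps here. */
            t->ra = (unsigned long)ret_from_kernel_thread;
    }
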
@@ -173,21 +176,21 @@ User thread creation
We are using almost the same path as copy_thread to create it.
The detailed path is the following:
- 1 - Call start_thread which will setup user pc and stack pointer in
- task regs. We also set sps and clear privilege mode bit.
- When returning from exception, it will "flip" to user mode.
- 2 - Enter copy_thread_tls and setup callee saved registers which will
- be restored in __switch_to. Also, set the "return" function to be
- ret_from_fork which will be called at end of switch_to
- 3 - set r20 (in thread_struct) with tracing information.
- (simply by lazyness to avoid computing it in assembly...)
- 4 - Call _switch_to at some point.
- 5 - The current pc will then be restored to be ret_from fork.
- 6 - Ret from fork calls schedule_tail and then check if tracing is
- enabled. If so call syscall_trace_exit
- 7 - finally, instead of returning to kernel, we restore all registers
- that have been setup by start_thread by restoring regs stored on
- stack
+ 1. Call start_thread which will setup user pc and stack pointer in
+    task regs. We also set sps and clear the privilege mode bit.
+    When returning from exception, it will "flip" to user mode.
+ 2. Enter copy_thread_tls and setup callee-saved registers which will
+    be restored in __switch_to. Also, set the "return" function to be
+    ret_from_fork which will be called at the end of switch_to.
+ 3. Set r20 (in thread_struct) with tracing information
+    (simply out of laziness, to avoid computing it in assembly...).
+ 4. Call _switch_to at some point.
+ 5. The current pc will then be restored to be ret_from_fork.
+ 6. ret_from_fork calls schedule_tail and then checks if tracing is
+    enabled. If so, call syscall_trace_exit.
+ 7. Finally, instead of returning to kernel, we restore all registers
+    that have been set up by start_thread by restoring regs stored on
+    the stack.
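
Step 1 (start_thread) amounts to the following C sketch; the field and
bit names here are illustrative, not the real kvx register layout::

    #define SPS_PRIV_MODE_BIT (1UL << 0) /* hypothetical bit position */

    struct kvx_user_regs_model {
            unsigned long spc; /* pc to resume at after rfe */
            unsigned long sp;  /* stack pointer */
            unsigned long sps; /* saved processor state */
    };

    void start_thread_sketch(struct kvx_user_regs_model *regs,
                             unsigned long pc, unsigned long sp)
    {
            regs->spc = pc; /* user entry point */
            regs->sp  = sp; /* user stack */

            /* Clearing the privilege bit makes the return-from-exception
             * "flip" the core to user mode. */
            regs->sps &= ~SPS_PRIV_MODE_BIT;
    }
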
L2$ handling
============
--
An old man doll... just what I always wanted! - Clara