Message-ID: <AM7PR08MB551161F243297FCBBEC0C6249C3BA@AM7PR08MB5511.eurprd08.prod.outlook.com>
Date:   Mon, 17 Jul 2023 11:19:15 +0000
From:   David Spickett <David.Spickett@....com>
To:     Mark Brown <broonie@...nel.org>,
        Catalin Marinas <Catalin.Marinas@....com>,
        Will Deacon <will@...nel.org>, Shuah Khan <shuah@...nel.org>
CC:     "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: Re: [PATCH 1/3] arm64/fpsimd: Ensure SME storage is allocated after
 SVE VL changes

I've confirmed on QEMU and Arm's FVP that this fixes the issue I was seeing.


From: Mark Brown <broonie@...nel.org>
Sent: 13 July 2023 21:06
To: Catalin Marinas <Catalin.Marinas@....com>; Will Deacon <will@...nel.org>; Shuah Khan <shuah@...nel.org>
Cc: David Spickett <David.Spickett@....com>; linux-arm-kernel@...ts.infradead.org <linux-arm-kernel@...ts.infradead.org>; linux-kernel@...r.kernel.org <linux-kernel@...r.kernel.org>; linux-kselftest@...r.kernel.org <linux-kselftest@...r.kernel.org>; Mark Brown <broonie@...nel.org>; stable@...r.kernel.org <stable@...r.kernel.org>
Subject: [PATCH 1/3] arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes 
 
When we reconfigure the SVE vector length we discard the backing storage
for the SVE vectors and then reallocate it on next SVE use, leaving the
SME-specific state alone. This means that we do not enable SME traps if
they were already disabled, so userspace code can enter streaming mode
without trapping, putting the task in a state where any attempt to save
the task's state will fault.
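
For illustration, a minimal userspace sketch of that failure sequence
(an illustrative sketch only, not the reporter's actual test case: it
assumes an SME-capable CPU and a toolchain that accepts the SME
instructions):

  #include <sched.h>
  #include <sys/prctl.h>
  #include <linux/prctl.h>

  int main(void)
  {
          /* Use SME once so the kernel allocates the SME state and
           * leaves SME traps disabled for this task. */
          asm volatile("smstart za");
          asm volatile("smstop za");

          /* Switch to a different SVE VL (16 is illustrative): the
           * SVE backing storage is discarded, but SME traps are not
           * re-enabled. */
          prctl(PR_SVE_SET_VL, 16);

          /* Enter streaming mode without trapping; the next time the
           * kernel saves this task's floating point state (e.g. on a
           * context switch) it faults, as described above. */
          asm volatile("smstart sm");
          sched_yield();

          return 0;
  }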

Since the ABI does not specify that changing the SVE vector length disturbs
SME state, and since SVE code may not be aware of SME code in the process,
we shouldn't simply discard any ZA state. Instead, immediately reallocate
the storage for SVE if SME is active, and disable SME if we change the SVE
vector length while there is no SME state active.
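
As an illustrative check of that expectation (again a sketch, not part
of this series: it assumes SME hardware, an SME-aware toolchain, and
reads SVCR via its raw S3_3_C4_C2_2 encoding), ZA state should survive
an SVE VL change:

  #include <stdio.h>
  #include <sys/prctl.h>
  #include <linux/prctl.h>

  int main(void)
  {
          unsigned long svcr;

          asm volatile("smstart za");   /* make ZA state live */
          prctl(PR_SVE_SET_VL, 32);     /* change the SVE VL */

          /* SVCR.ZA is bit 1 and should still be set afterwards. */
          asm volatile("mrs %0, S3_3_C4_C2_2" : "=r"(svcr));
          printf("ZA %s\n", (svcr & 2) ? "preserved" : "lost");

          return 0;
  }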

Enabling SME traps on SVE vector length changes would make the overall
code more complex, since we would end up in a state where valid SME state
is stored but an SME trap might still be taken.

Fixes: 9e4ab6c89109 ("arm64/sme: Implement vector length configuration prctl()s")
Reported-by: David Spickett <David.Spickett@....com>
Signed-off-by: Mark Brown <broonie@...nel.org>
Cc: stable@...r.kernel.org
---
 arch/arm64/kernel/fpsimd.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 7a1aeb95d7c3..a527b95c06e7 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -847,6 +847,9 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task)
 int vec_set_vector_length(struct task_struct *task, enum vec_type type,
                           unsigned long vl, unsigned long flags)
 {
+       bool free_sme = false;
+       bool alloc_sve = false;
+
         if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
                                      PR_SVE_SET_VL_ONEXEC))
                 return -EINVAL;
@@ -897,22 +900,37 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
                 task->thread.fp_type = FP_STATE_FPSIMD;
         }
 
-       if (system_supports_sme() && type == ARM64_VEC_SME) {
-               task->thread.svcr &= ~(SVCR_SM_MASK |
-                                      SVCR_ZA_MASK);
-               clear_thread_flag(TIF_SME);
+       if (system_supports_sme()) {
+               if (type == ARM64_VEC_SME ||
+                   !(task->thread.svcr & (SVCR_SM_MASK | SVCR_ZA_MASK))) {
+                       /*
+                        * We are changing the SME VL or weren't using
+                        * SME anyway, so discard the state and force
+                        * a reallocation.
+                        */
+                       task->thread.svcr &= ~(SVCR_SM_MASK |
+                                              SVCR_ZA_MASK);
+                       clear_thread_flag(TIF_SME);
+                       free_sme = true;
+               } else {
+                       alloc_sve = true;
+               }
         }
 
         if (task == current)
                 put_cpu_fpsimd_context();
 
         /*
-        * Force reallocation of task SVE and SME state to the correct
-        * size on next use:
+        * Free the changed states if they are not in use; they will
+        * be reallocated to the correct size on next use.  If we need
+        * SVE state due to having untouched SME state then reallocate
+        * it immediately.
          */
         sve_free(task);
-       if (system_supports_sme() && type == ARM64_VEC_SME)
+       if (free_sme)
                 sme_free(task);
+       if (alloc_sve)
+               sve_alloc(task, true);
 
         task_set_vl(task, type, vl);
 

-- 
2.30.2
