Date: Wed, 26 Jun 2024 11:45:23 -0700
From: Yang Shi <yang@...amperecomputing.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: will@...nel.org, anshuman.khandual@....com, scott@...amperecomputing.com,
 cl@...two.org, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org
Subject: Re: [v4 PATCH] arm64: mm: force write fault for atomic RMW
 instructions



On 6/14/24 5:20 AM, Catalin Marinas wrote:
> On Wed, Jun 05, 2024 at 01:37:23PM -0700, Yang Shi wrote:
>> +static __always_inline bool aarch64_insn_is_class_cas(u32 insn)
>> +{
>> +	return aarch64_insn_is_cas(insn) ||
>> +	       aarch64_insn_is_casp(insn);
>> +}
>> +
>> +/*
>> + * Exclude unallocated atomic instructions and LD64B/LDAPR.
>> + * The masks and values were generated by using Python sympy module.
>> + */
>> +static __always_inline bool aarch64_atomic_insn_has_wr_perm(u32 insn)
>> +{
>> +	return ((insn & 0x3f207c00) == 0x38200000) ||
>> +	       ((insn & 0x3f208c00) == 0x38200000) ||
>> +	       ((insn & 0x7fe06c00) == 0x78202000) ||
>> +	       ((insn & 0xbf204c00) == 0x38200000);
>> +}
> This is still pretty opaque if we want to modify it in the future. I
> guess we could add more tests on top, but it would be nice to have a way
> to re-generate these masks. I'll think about it; for now these tests
> will do.

Sorry for the late reply; I just came back from vacation and am trying to 
catch up on all the emails and TODOs. We should be able to share the tool 
we used to generate the tests, but it may take some time.
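In the meantime, a quick sanity check along these lines might help guard
against regressions. This is only a sketch, not the actual generator tool:
it mirrors the (mask, value) pairs from aarch64_atomic_insn_has_wr_perm()
in Python and probes them with a few hand-assembled LSE encodings (LDADD,
SWP, and LDAPR per the ARM ARM atomic-memory-operation encoding; the
register operands below are arbitrary examples):

```python
# Mirror of the (mask, value) pairs in aarch64_atomic_insn_has_wr_perm().
MASKS = [
    (0x3f207c00, 0x38200000),
    (0x3f208c00, 0x38200000),
    (0x7fe06c00, 0x78202000),
    (0xbf204c00, 0x38200000),
]

def has_wr_perm(insn):
    """True if any (mask, value) pair matches, like the C predicate."""
    return any((insn & mask) == value for mask, value in MASKS)

# LDADD W0, W1, [X2] -- atomic add, needs write permission
ldadd = 0xB8200000 | (0 << 16) | (2 << 5) | 1
# SWP W0, W1, [X2]   -- atomic swap, needs write permission
swp   = 0xB8208000 | (0 << 16) | (2 << 5) | 1
# LDAPR W1, [X2]     -- load-acquire (RCpc), read-only, must be excluded
ldapr = 0xB8BFC000 | (2 << 5) | 1

assert has_wr_perm(ldadd)
assert has_wr_perm(swp)
assert not has_wr_perm(ldapr)
```

A few spot checks like these obviously do not prove the minimized masks
correct over the whole encoding space; that still needs the generator.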

>
>> @@ -511,6 +539,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>   	unsigned long addr = untagged_addr(far);
>>   	struct vm_area_struct *vma;
>>   	int si_code;
>> +	bool may_force_write = false;
>>   
>>   	if (kprobe_page_fault(regs, esr))
>>   		return 0;
>> @@ -547,6 +576,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>   		/* If EPAN is absent then exec implies read */
>>   		if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
>>   			vm_flags |= VM_EXEC;
>> +		may_force_write = true;
>>   	}
>>   
>>   	if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
>> @@ -568,6 +598,12 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>   	if (!vma)
>>   		goto lock_mmap;
>>   
>> +	if (may_force_write && (vma->vm_flags & VM_WRITE) &&
>> +	    is_el0_atomic_instr(regs)) {
>> +		vm_flags = VM_WRITE;
>> +		mm_flags |= FAULT_FLAG_WRITE;
>> +	}
> I think we can get rid of may_force_write and just test (vm_flags &
> VM_READ).

Yes, will fix it in v5.
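For illustration, the simplified condition could be modeled as below. This
is a Python sketch of the intended v5 shape, not kernel code; the constants
mirror the kernel's vm_flags bits, and the helper name is made up for the
example:

```python
# vm_flags bits and the write fault flag, mirroring the kernel values.
VM_READ, VM_WRITE, VM_EXEC = 0x1, 0x2, 0x4
FAULT_FLAG_WRITE = 0x1

def resolve_flags(vm_flags, mm_flags, vma_vm_flags, is_el0_atomic):
    """Force a write fault for an EL0 atomic RMW instruction that
    faulted on a writable VMA; otherwise leave the flags alone.
    Testing (vm_flags & VM_READ) replaces the may_force_write local,
    since only the read-fault path sets VM_READ."""
    if (vm_flags & VM_READ) and (vma_vm_flags & VM_WRITE) and is_el0_atomic:
        vm_flags = VM_WRITE
        mm_flags |= FAULT_FLAG_WRITE
    return vm_flags, mm_flags

# Read fault on a writable VMA from e.g. LDADD: forced to a write fault.
vm, mm = resolve_flags(VM_READ, 0, VM_READ | VM_WRITE, True)
assert vm == VM_WRITE and (mm & FAULT_FLAG_WRITE)

# Same instruction on a read-only VMA: flags untouched, so the normal
# read-fault handling (and its SIGSEGV semantics) is preserved.
vm, mm = resolve_flags(VM_READ, 0, VM_READ, True)
assert vm == VM_READ and not (mm & FAULT_FLAG_WRITE)
```

The second case is the important one: not forcing the write fault on a
read-only VMA keeps the user-visible fault behavior unchanged for
mappings that were never writable.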
>

