Message-ID: <904edac0-3de7-35a6-a9bc-b983ccd3490c@arm.com>
Date:   Wed, 24 Jun 2020 12:18:46 +0100
From:   Steven Price <steven.price@....com>
To:     Catalin Marinas <catalin.marinas@....com>
Cc:     Dave P Martin <Dave.Martin@....com>,
        Peter Maydell <peter.maydell@...aro.org>,
        Marc Zyngier <maz@...nel.org>,
        lkml - Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "kvmarm@...ts.cs.columbia.edu" <kvmarm@...ts.cs.columbia.edu>,
        Thomas Gleixner <tglx@...utronix.de>,
        Will Deacon <will@...nel.org>,
        arm-mail-list <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [RFC PATCH 0/2] MTE support for KVM guest

On 24/06/2020 12:09, Catalin Marinas wrote:
> On Wed, Jun 24, 2020 at 12:03:35PM +0100, Steven Price wrote:
>> On 24/06/2020 11:34, Dave Martin wrote:
>>> On Wed, Jun 24, 2020 at 10:38:48AM +0100, Catalin Marinas wrote:
>>>> On Tue, Jun 23, 2020 at 07:05:07PM +0100, Peter Maydell wrote:
>>>>> On Wed, 17 Jun 2020 at 13:39, Steven Price <steven.price@....com> wrote:
>>>>>> These patches add support to KVM to enable MTE within a guest. It is
>>>>>> based on Catalin's v4 MTE user space series[1].
>>>>>>
>>>>>> [1] http://lkml.kernel.org/r/20200515171612.1020-1-catalin.marinas%40arm.com
>>>>>>
>>>>>> Posting as an RFC as I'd like feedback on the approach taken.
>>>>>
>>>>> What's your plan for handling tags across VM migration?
>>>>> Will the kernel expose the tag ram to userspace so we
>>>>> can copy it from the source machine to the destination
>>>>> at the same time as we copy the actual ram contents?
>>>>
>>>> Qemu can map the guest memory with PROT_MTE and access the tags directly
>>>> with LDG/STG instructions. Steven was actually asking in the cover
>>>> letter whether we should require that the VMM maps the guest memory with
>>>> PROT_MTE as a guarantee that it can access the guest tags.
>>>>
>>>> There is no architecturally visible tag ram (tag storage); that's a
>>>> microarchitecture detail.
>>>
>>> If userspace maps the guest memory with PROT_MTE for dump purposes,
>>> isn't it going to get tag check faults when accessing the memory
>>> (i.e., when dumping the regular memory content, not the tags
>>> specifically)?
>>>
>>> Does it need to map two aliases, one with PROT_MTE and one without,
>>> and is that architecturally valid?
>>
>> Userspace would either need to have two mappings (I don't believe there are
>> any architectural issues with that - but this could be awkward to arrange in
>> some situations) or be careful to avoid faults. Basically your choices with
>> one mapping are:
>>
>>   1. Disable tag checking (using prctl) when touching the memory. This works
>> but means you lose tag checking for the VMM's own accesses during this code
>> sequence.
>>
>>   2. Read the tag values and ensure you use the correct tag. This suffers
>> from race conditions if the VM is still running.
>>
>>   3. Use one of the special cases in the architecture that generate a Tag
>> Unchecked access. Sadly the only remotely useful one I can see in the
>> Armv8 ARM is "A base register plus immediate offset addressing form, with
>> the SP as the base register." - but making sure SP is in range of where
>> you want to access would be a pain.
> 
> Or:
> 
> 4. Set PSTATE.TCO when accessing tagged memory in an unsafe way.
> 

Ah yes, similar to (1) but with much lower overhead ;) That's probably the
best option - it can be hidden in a memcpy_ignoring_tags() function (rough
sketch below). However, it still means that the VMM can't directly touch
the guest's memory, which might cause issues for the VMM.
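
For concreteness, here's roughly what I had in mind - untested, and
assuming a toolchain that accepts the MTE system registers
(-march=armv8.5-a+memtag); the helper name is made up:

#include <stddef.h>
#include <string.h>

/*
 * Copy while ignoring tag checks by toggling PSTATE.TCO around the
 * accesses. Everything between the two MSRs is Tag Unchecked, so the
 * window where the VMM loses its own tag checking is as small as
 * possible, and unlike (1) there's no syscall overhead.
 */
static void memcpy_ignoring_tags(void *dst, const void *src, size_t len)
{
	asm volatile("msr tco, #1" ::: "memory");	/* set PSTATE.TCO */
	memcpy(dst, src, len);
	asm volatile("msr tco, #0" ::: "memory");	/* clear PSTATE.TCO */
}

The "memory" clobbers are there to stop the compiler moving accesses out
of the TCO window.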
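
On Peter's migration question: given a PROT_MTE mapping of the guest
memory, the VMM can pull the tags out from user space with LDG, one tag
per 16-byte granule (and put them back with STG on the destination).
Another illustrative sketch under the same toolchain assumption, names
invented:

#include <stdint.h>
#include <stddef.h>

#define MTE_GRANULE_SIZE	16

/*
 * Read the allocation tag of each 16-byte granule so it can be
 * migrated alongside the data. LDG merges the granule's tag into
 * bits 59:56 of the address register.
 */
static void mte_save_tags(const void *mem, size_t len, uint8_t *tags)
{
	size_t off;

	for (off = 0; off < len; off += MTE_GRANULE_SIZE) {
		uint64_t ptr = (uint64_t)mem + off;

		asm volatile("ldg %0, [%0]" : "+r" (ptr));
		tags[off / MTE_GRANULE_SIZE] = (ptr >> 56) & 0xf;
	}
}

Same caveat as (2) of course: racy if the guest is still running.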

Steve
