Message-ID: <34551b4d-7420-4203-bf14-564b5f3443c4@suse.com>
Date: Fri, 9 Aug 2024 12:46:45 +0200
From: Jürgen Groß <jgross@...e.com>
To: Jan Beulich <jbeulich@...e.com>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>,
 Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
 Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 "H. Peter Anvin" <hpa@...or.com>, xen-devel@...ts.xenproject.org,
 Marek Marczykowski-Górecki
 <marmarek@...isiblethingslab.com>, linux-kernel@...r.kernel.org,
 x86@...nel.org
Subject: Re: [PATCH 5/5] xen: tolerate ACPI NVS memory overlapping with Xen
 allocated memory

On 09.08.24 11:45, Jan Beulich wrote:
> On 07.08.2024 14:05, Jan Beulich wrote:
>> On 07.08.2024 12:41, Juergen Gross wrote:
>>> In order to minimize the special handling required for running as Xen PV
>>> dom0, the memory layout is modified to match that of the host. This
>>> requires having only RAM at the locations where Xen-allocated memory
>>> lives. Unfortunately there seem to be some machines where ACPI
>>> NVS is located at 64 MB, resulting in a conflict with the loaded
>>> kernel or the initial page tables built by Xen.
>>>
>>> As the kernel needs to access ACPI NVS only for saving and
>>> restoring it across suspend operations, it can be relocated in
>>> dom0's memory map by swapping it with unused RAM (this is possible
>>> via modification of the dom0 P2M map).
>>
>> While the kernel may not (directly) need to access it for other purposes,
>> what about AML accessing it? As you can't advertise the movement to ACPI,
>> and as non-RAM mappings are carried out by drivers/acpi/osl.c:acpi_map()
>> using acpi_os_ioremap(), phys-to-machine translations won't cover for
>> that (unless I'm overlooking something, which unfortunately seems like I
>> might be).
> 
> Thinking some more about this, the above may be pointing in the wrong
> direction. If from acpi_os_ioremap() downwards no address translation
> (PFN -> MFN) occurred, then what's coming from AML would still be
> handled correctly as far as page table entries go. The problem then
> might instead be that the mapping would appear to be covering RAM, not
> the ACPI NVS region (and there may be checks for that).

All PTE entries written go through the P2M (PFN -> MFN) translation.


Juergen
