Date:   Wed, 21 Jul 2021 11:16:02 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Sergey Senozhatsky <senozhatsky@...omium.org>
Cc:     Will Deacon <will@...nel.org>,
        Suleiman Souhlal <suleiman@...gle.com>,
        Joel Fernandes <joelaf@...gle.com>,
        linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org
Subject: Re: [PATCHv2 2/4] arm64: add guest pvstate support

On Wed, 21 Jul 2021 09:47:52 +0100,
Sergey Senozhatsky <senozhatsky@...omium.org> wrote:
> 
> On (21/07/21 09:22), Marc Zyngier wrote:
> > On Wed, 21 Jul 2021 03:05:25 +0100,
> > Sergey Senozhatsky <senozhatsky@...omium.org> wrote:
> > > 
> > > On (21/07/12 16:42), Marc Zyngier wrote:
> > > > > 
> > > > > PV-vcpu-state is a per-CPU struct, which, for the time being,
> > > > > holds the boolean `preempted' vCPU state. During startup,
> > > > > provided that the host supports PV-state, each guest vCPU sends
> > > > > a pointer to its per-CPU variable to the host as a payload
> > > > 
> > > > What is the expected memory type for this memory region? What is its
> > > > life cycle? Where is it allocated from?
> > > 
> > > Guest per-CPU area, whose physical address is shared with the
> > > host.
> > 
> > Again: what are the memory types you expect this to be used with?
> 
> I heard your questions, I'm trying to figure out the answers now.
> 
> As for the memory type - I presume you are talking about coherent vs
> non-coherent memory.

No. I'm talking about cacheable vs non-cacheable. The ARM architecture
is always coherent for memory that is inner-shareable, which applies
to any system running Linux. On the other hand, there is no
architected cache snooping when using non-cacheable accesses.

> Can guest per-CPU memory be non-coherent? The guest never writes
> anything to the region of memory it shares with the host; it only
> reads what the host writes to it. All reads and writes are done from
> the CPU (no device DMA access, etc.).
> 
> Do we need any cache flushes/syncs in this case?

If you expect the guest to have non-cacheable mappings (or to run with
its MMU off at any point, which amounts to the same thing) *and* still
be able to access the shared page, then *someone* will have to perform
CMOs to make these writes visible to the PoC (unless you have FWB).

Needless to say, this would kill any sort of performance gain this
feature could hypothetically bring. Defining the scope for the access
would help mitigate this, even if that's just a sentence saying "the
shared page *must* be accessed from a cacheable mapping".
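
To make that concrete, here is a rough guest-side sketch of the scheme
as described above (per-CPU state in ordinary kernel memory, so it is
reached through the guest's normal cacheable linear mapping, with each
vCPU reporting its physical address to the host once). The SMCCC
function ID and the names are invented for illustration; they are not
what the series defines.

/*
 * Sketch only, not the code from the series: the state is a normal
 * per-CPU variable, so the guest accesses it via a cacheable mapping,
 * and each vCPU tells the host where its copy lives.
 */
#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/types.h>

struct vcpu_pv_state {
	u8 preempted;		/* written by the host, read by the guest */
};

static DEFINE_PER_CPU(struct vcpu_pv_state, pv_state);

#define HYP_PV_STATE_SHARE	0xc6000040	/* hypothetical function ID */

static int pv_state_share(void)
{
	struct arm_smccc_res res;
	phys_addr_t pa = per_cpu_ptr_to_phys(this_cpu_ptr(&pv_state));

	/* Report this vCPU's state address to the host. */
	arm_smccc_1_1_invoke(HYP_PV_STATE_SHARE, pa, &res);
	return res.a0 == SMCCC_RET_SUCCESS ? 0 : -ENXIO;
}

Something like pv_state_share() would presumably run once per vCPU
during bring-up, after a discovery call has confirmed that the host
supports the feature.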

> 
> > When will the hypervisor ever stop accessing this?
> 
> KVM always accesses it for the vCPUs that are getting scheduled out or
> scheduled in on the host side.

I was more hinting at whether there was a way to disable this at
runtime. Think of a guest using kexec, for example, where you really
don't want the hypervisor to start messing with memory that has since
been reallocated by the guest.
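
Purely as an illustration of the kind of hook I mean (the SMCCC
function ID and the reboot-notifier wiring below are invented; the
posted series has no such call), the guest could tell the host to
stop writing before it gives the memory back:

/*
 * Hypothetical teardown: after this call returns, the host must no
 * longer write to the previously shared per-CPU state.
 */
#include <linux/arm-smccc.h>
#include <linux/init.h>
#include <linux/reboot.h>

#define HYP_PV_STATE_RELEASE	0xc6000041	/* hypothetical function ID */

static int pv_state_release(struct notifier_block *nb,
			    unsigned long action, void *data)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(HYP_PV_STATE_RELEASE, &res);
	return NOTIFY_DONE;
}

static struct notifier_block pv_state_reboot_nb = {
	.notifier_call	= pv_state_release,
};

static int __init pv_state_init_notifier(void)
{
	return register_reboot_notifier(&pv_state_reboot_nb);
}
late_initcall(pv_state_init_notifier);

A regular kexec goes through the reboot notifier chain, but a crash
kexec does not, so that path would still need thought.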

> > How does it work across reset?
> 
> I need to figure out what happens during reset/migration in the first
> place.

Yup.

	M.

-- 
Without deviation from the norm, progress is not possible.
