Message-ID: <YPWgyS7i2sMtiX8S@google.com>
Date: Mon, 19 Jul 2021 16:56:57 +0100
From: Quentin Perret <qperret@...gle.com>
To: Marc Zyngier <maz@...nel.org>
Cc: james.morse@....com, alexandru.elisei@....com,
suzuki.poulose@....com, catalin.marinas@....com, will@...nel.org,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, ardb@...nel.org, qwandor@...gle.com,
tabba@...gle.com, dbrazdil@...gle.com, kernel-team@...roid.com
Subject: Re: [PATCH 09/14] KVM: arm64: Mark host bss and rodata section as
shared
On Monday 19 Jul 2021 at 16:01:40 (+0100), Marc Zyngier wrote:
> On Mon, 19 Jul 2021 11:47:30 +0100,
> Quentin Perret <qperret@...gle.com> wrote:
> > +static int finalize_mappings(void)
> > +{
> > + enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_RWX;
> > + int ret;
> > +
> > + /*
> > + * The host's .bss and .rodata sections are now conceptually owned by
> > + * the hypervisor, so mark them as 'borrowed' in the host stage-2. We
> > + * can safely use host_stage2_idmap_locked() at this point since the
> > + * host stage-2 has not been enabled yet.
> > + */
> > + prot |= KVM_PGTABLE_STATE_SHARED | KVM_PGTABLE_STATE_BORROWED;
> > + ret = host_stage2_idmap_locked(__hyp_pa(__start_rodata),
> > + __hyp_pa(__end_rodata), prot);
>
> Do we really want to map the rodata section as RWX?
I know, it feels odd, but for now I think so. The host is obviously
welcome to restrict things in its stage-1, but from the stage-2's point
of view this is just 'memory' so far, so the host is allowed to patch it
if it wants to. Eventually, yes, I think we should make it RO in the
host stage-2, but maybe that's for another series?
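For reference, a sketch of what that follow-up could look like, reusing
the helpers introduced in this patch (whether dropping W and X from the
prot is all that's needed is an assumption at this stage):

```c
/* Hypothetical follow-up: map .rodata read-only in the host stage-2,
 * keeping the shared/borrowed annotations introduced by this patch. */
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R |
			     KVM_PGTABLE_STATE_SHARED |
			     KVM_PGTABLE_STATE_BORROWED;

ret = host_stage2_idmap_locked(__hyp_pa(__start_rodata),
			       __hyp_pa(__end_rodata), prot);
```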
> > + if (ret)
> > + return ret;
> > +
> > + return host_stage2_idmap_locked(__hyp_pa(__hyp_bss_end),
> > + __hyp_pa(__bss_stop), prot);
>
> If the 'locked' state implies SHARED+BORROWED, maybe consider moving
> the ORRing of the prot into host_stage2_idmap_locked()?
Ah no, sorry for the confusion, but 'locked' means that the caller
already holds the pgtable lock. That is not actually true here, but
this is a special case: only the current CPU can be messing with the
page-table at this point in time, so taking the lock would just be
wasted cycles.
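In other words, a non-'locked' variant would look something like the
sketch below (the lock name is an assumption; whatever actually
serializes host stage-2 updates would go there):

```c
/* Sketch: the non-'locked' variant is the same operation wrapped in
 * the lock that serializes host stage-2 page-table updates, assumed
 * here to be host_kvm.lock. */
static int host_stage2_idmap(phys_addr_t start, phys_addr_t end,
			     enum kvm_pgtable_prot prot)
{
	int ret;

	hyp_spin_lock(&host_kvm.lock);
	ret = host_stage2_idmap_locked(start, end, prot);
	hyp_spin_unlock(&host_kvm.lock);

	return ret;
}
```

At __pkvm_init_finalise() time only the boot CPU is running, which is
why skipping that wrapper is safe here.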
> > +}
> > +
> > void __noreturn __pkvm_init_finalise(void)
> > {
> > struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
> > @@ -167,6 +199,10 @@ void __noreturn __pkvm_init_finalise(void)
> > if (ret)
> > goto out;
> >
> > + ret = finalize_mappings();
> > + if (ret)
> > + goto out;
> > +
> > pkvm_pgtable_mm_ops = (struct kvm_pgtable_mm_ops) {
> > .zalloc_page = hyp_zalloc_hyp_page,
> > .phys_to_virt = hyp_phys_to_virt,
Thanks,
Quentin