Message-ID: <ZhbIDUO9BULaiSh3@google.com>
Date: Wed, 10 Apr 2024 18:10:37 +0100
From: Vincent Donnefort <vdonnefort@...gle.com>
To: Sebastian Ene <sebastianene@...gle.com>
Cc: catalin.marinas@....com, james.morse@....com, jean-philippe@...aro.org,
maz@...nel.org, oliver.upton@...ux.dev, qperret@...gle.com,
qwandor@...gle.com, sudeep.holla@....com, suzuki.poulose@....com,
tabba@...gle.com, will@...nel.org, yuzenghui@...wei.com,
kvmarm@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH] KVM: arm64: Add support for FFA_PARTITION_INFO_GET
On Wed, Apr 10, 2024 at 10:18:18AM +0000, Sebastian Ene wrote:
> On Wed, Apr 10, 2024 at 10:53:31AM +0100, Vincent Donnefort wrote:
> > [...]
> >
> > > > > +static void do_ffa_part_get(struct arm_smccc_res *res,
> > > > > + struct kvm_cpu_context *ctxt)
> > > > > +{
> > > > > + DECLARE_REG(u32, uuid0, ctxt, 1);
> > > > > + DECLARE_REG(u32, uuid1, ctxt, 2);
> > > > > + DECLARE_REG(u32, uuid2, ctxt, 3);
> > > > > + DECLARE_REG(u32, uuid3, ctxt, 4);
> > > > > + DECLARE_REG(u32, flags, ctxt, 5);
> > > > > + u32 off, count, sz, buf_sz;
> > > > > +
> > > > > + hyp_spin_lock(&host_buffers.lock);
> > > > > + if (!host_buffers.rx) {
> > > > > + ffa_to_smccc_res(res, FFA_RET_INVALID_PARAMETERS);
> > > > > + goto out_unlock;
> > > > > + }
> > > > > +
> > > > > + arm_smccc_1_1_smc(FFA_PARTITION_INFO_GET, uuid0, uuid1,
> > > > > + uuid2, uuid3, flags, 0, 0,
> > > > > + res);
> > > > > +
> > > > > + if (res->a0 != FFA_SUCCESS)
> > > > > + goto out_unlock;
> > > > > +
> > > > > + count = res->a2;
> > > > > + if (!count)
> > > > > + goto out_unlock;
> > > >
> > > > Looking at the table 13.34, it seems what's in "count" depends on the flag.
> > > > Shouldn't we check its value, and only memcpy into the host buffers if the flag
> > > > is 0?
> > > >
> > >
> > > When the flag is `1` the count refers to the number of partitions
> > > deployed. In both cases we have to copy something unless count == 0.
> >
> > I see "Return the count of partitions deployed in the system corresponding to
> > the specified UUID in w2"
> >
> > Which I believe means nothing has been copied in the buffer?
> >
>
> When the flag in w5 is 1 the size argument stored in w3 will be zero and
> the loop will not be executed, so nothing will be copied to the host
> buffers.
Ha right, all good here then.
>
> > >
> > > > > +
> > > > > + if (ffa_version > FFA_VERSION_1_0) {
> > > > > + buf_sz = sz = res->a3;
> > > > > + if (sz > sizeof(struct ffa_partition_info))
> > > > > + buf_sz = sizeof(struct ffa_partition_info);
> > > >
> > > > What are you trying to protect against here? We have to trust EL3 anyway, (as
> > > > other functions do).
> > > >
> > > > The WARN() could be kept though to make sure we won't overflow our buffer. But
> > > > it could be transformed into an error? FFA_RET_ABORTED?
> > > >
> > > >
> > >
> > > I think we can keep it as a WARN_ON because it is not expected to have
> > > a return code of FFA_SUCCESS but the buffer to be overflown. The TEE is
> > > expected to return NO_MEMORY in w2 if the results cannot fit in the RX
> > > buffer.
> >
> > WARN() crashes the hypervisor. It'd be a shame here, as we can easily
> > recover by just sending an error back to the caller.
>
> I agree with you, but this is not expected to happen unless TZ messes
> something up / is not compliant with the spec, in which case I would
> like to catch this.
Hmm, I still don't see the point in crashing anything here; nothing is
compromised. The driver can then decide what to do based on the reported
failure.