Message-ID: <1487979807.2190.24.camel@HansenPartnership.com>
Date: Fri, 24 Feb 2017 18:43:27 -0500
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
Cc: dhowells@...hat.com, linux-security-module@...r.kernel.org,
tpmdd-devel@...ts.sourceforge.net,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [tpmdd-devel] [PATCH v2 6/7] tpm: expose spaces via a device
link /dev/tpms<n>
On Fri, 2017-02-24 at 16:23 -0700, Jason Gunthorpe wrote:
> On Fri, Feb 24, 2017 at 06:01:00PM -0500, James Bottomley wrote:
>
> > Well, as a glib answer, I'd say the TPM is a device, so the thing
> > which restricts device access to containers is the device cgroup
> > ... that's what we should be plugging into. I'd have to look, but
> > I suspect the device cgroup basically operates on device node
> > appearance, so it should "just work"(tm). I can explore when I'm
> > back home.
>
> Seems reasonable..
>
> It just seems confusing to call something a namespace that isn't also
> a CLONE_NEW* option..
Well, there's namespace behaviour and then there's how you enter one.
We have namespace behaviour with /dev/tpms<n>, but the namespace is
entered on each open of the device, even if the same process opens the
device more than once. So we have namespace behaviour with a non-clone
entry mechanism. Since we're namespacing a device, that seems to me to
be the correct semantic.
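
Purely as an illustration of that open-is-entry semantic (a sketch
against the proposed /dev/tpms<n> node, not tested code; TPM2_GetRandom
is used only as a harmless round-trip command):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* TPM2_GetRandom: TPM_ST_NO_SESSIONS, size 12,
	 * TPM_CC_GetRandom (0x17B), bytesRequested = 8 */
	unsigned char cmd[] = { 0x80, 0x01, 0x00, 0x00, 0x00, 0x0C,
				0x00, 0x00, 0x01, 0x7B, 0x00, 0x08 };
	unsigned char rsp[64];
	ssize_t n;

	int fd1 = open("/dev/tpms0", O_RDWR);	/* space #1 */
	int fd2 = open("/dev/tpms0", O_RDWR);	/* space #2, same process */
	if (fd1 < 0 || fd2 < 0) {
		perror("open /dev/tpms0");
		return 1;
	}

	/* each fd runs its commands in its own space; transient
	 * objects created on fd1 are invisible on fd2 */
	if (write(fd1, cmd, sizeof(cmd)) != (ssize_t)sizeof(cmd))
		perror("write");
	n = read(fd1, rsp, sizeof(rsp));
	printf("space #1 got %zd response bytes\n", n);

	close(fd1);	/* tears down space #1 only */
	close(fd2);
	return 0;
}
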
> FWIW more background on the topic:
>
> Stefan was concerned about information leakage via sysfs of TPM data,
> eg that a container could still touch the host's TPM. I wonder if
> device cgroup could be extended to block access to the sysfs
> directories containing a disallowed 'dev' ?
It doesn't need to. The sysfs entries (those that actually ask the TPM
something) are protected by chip->tpm_mutex, so whenever such a query
runs, we know all the spaces are context-saved (i.e. the only
TPM-visible state is global, not anything space-local).
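
To make that concrete, the pattern is roughly this (a sketch only, not
the actual driver code; tpm_transmit_locked() is a made-up stand-in
for whatever issues the command):

static ssize_t tpm_sysfs_query(struct tpm_chip *chip,
			       void *cmd, size_t cmdlen,
			       void *rsp, size_t rsplen)
{
	ssize_t rc;

	mutex_lock(&chip->tpm_mutex);
	/* nothing else can be mid-command here: any space that used
	 * the TPM was context-saved before the mutex was dropped, so
	 * the query below can only see global, non-space-local state */
	rc = tpm_transmit_locked(chip, cmd, cmdlen, rsp, rsplen);
	mutex_unlock(&chip->tpm_mutex);

	return rc;
}
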
> I was also wondering about kernel use from within the container -
> all kernel consumers are locked to physical tpm0.. But maybe the
> kernel can consult the right device cgroup to find an allowed TPM?
I'd use the device cgroup to determine what's allowable per container
(i.e. which tpm you can see), then within the container I'd open the
tpms<n> device ... because the TPM's volatile storage is so tiny, it's
not inconceivable that multiple processes, even within a single
container, need access ... and they'd each need their own "namespace"
(which they get with the current model). However, this is opinion ...
we should try it out and see what works best.
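
Something along these lines is what I'm picturing for the cgroup side
(just a sketch, cgroup-v1 devices controller assumed; the cgroup path
is whatever the container runtime actually uses):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
	struct stat st;
	FILE *f;

	if (stat("/dev/tpms0", &st) < 0) {
		perror("stat /dev/tpms0");
		return 1;
	}

	/* hypothetical container cgroup path */
	f = fopen("/sys/fs/cgroup/devices/mycontainer/devices.allow", "w");
	if (!f) {
		perror("devices.allow");
		return 1;
	}
	/* "c MAJ:MIN rw": character device, read/write, no mknod;
	 * the host's /dev/tpm0 simply never gets an allow entry */
	fprintf(f, "c %u:%u rw\n", major(st.st_rdev), minor(st.st_rdev));
	fclose(f);
	return 0;
}
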
James