Message-ID: <20110526094806.GC19536@elte.hu>
Date: Thu, 26 May 2011 11:48:06 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Avi Kivity <avi@...hat.com>
Cc: James Morris <jmorris@...ei.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Kees Cook <kees.cook@...onical.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Will Drewry <wad@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org, gnatapov@...hat.com,
Chris Wright <chrisw@...s-sol.org>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [PATCH 3/5] v2 seccomp_filters: Enable ftrace-based system call
filtering
* Ingo Molnar <mingo@...e.hu> wrote:
> You are missing the ingenuity of the tools/kvm/ thread pool! :-)
>
> It could be switched to a worker *process* model rather easily.
> Guest RAM and (a limited amount of) global resources would be
> shared via mmap(SHARED), but otherwise each worker process would
> have its own stack, its own subsystem-specific state, etc.
VM exit events arrive in the vcpu threads, which, after minimal
processing, pass much of the work to the thread pool. Most of the
virtio work (which could be a source of vulnerability - ring buffers
are hard to get right) is done in the worker task context.
It would be possible to further increase isolation there by also
passing the IO/MMIO decoding to the worker thread - but I'm not sure
that's truly needed. Most of the risk is where most of the code is -
and that code is in the worker task, which interprets on-disk data,
protocols, etc.
So we could not only isolate devices from each other, but also
protect the highly capable vcpu fd from exploits in devices -
worker threads generally do not need access to the vcpu fd, IIRC.
Thanks,
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/