Message-ID: <3142237.YMNxv0uec1@x2>
Date: Wed, 12 Feb 2020 17:38:45 -0500
From: Steve Grubb <sgrubb@...hat.com>
To: linux-audit@...hat.com
Cc: Paul Moore <paul@...l-moore.com>,
Richard Guy Briggs <rgb@...hat.com>, nhorman@...driver.com,
linux-api@...r.kernel.org, containers@...ts.linux-foundation.org,
LKML <linux-kernel@...r.kernel.org>, dhowells@...hat.com,
netfilter-devel@...r.kernel.org, ebiederm@...ssion.com,
simo@...hat.com, netdev@...r.kernel.org,
linux-fsdevel@...r.kernel.org, Eric Paris <eparis@...isplace.org>,
mpatel@...hat.com, Serge Hallyn <serge@...lyn.com>
Subject: Re: [PATCH ghak90 V8 07/16] audit: add contid support for signalling the audit daemon
On Wednesday, February 5, 2020 5:50:28 PM EST Paul Moore wrote:
> > > > > ... When we record the audit container ID in audit_signal_info() we
> > > > > take an extra reference to the audit container ID object so that it
> > > > > will not disappear (and get reused) until after we respond with an
> > > > > AUDIT_SIGNAL_INFO2. In audit_receive_msg() when we do the
> > > > > AUDIT_SIGNAL_INFO2 processing we drop the extra reference we took in
> > > > > audit_signal_info(). Unless I'm missing some other change you made,
> > > > > this *shouldn't* affect the syscall records, all it does is preserve
> > > > > the audit container ID object in the kernel's ACID store so it
> > > > > doesn't get reused.
> > > >
> > > > This is exactly what I had understood. I hadn't considered the extra
> > > > details below closely due to my original syscall concern, but they
> > > > make sense.
> > > >
> > > > The syscall I refer to is the one connected with the drop of the
> > > > audit container identifier by the last process that was in that
> > > > container in patch 5/16. The production of this record is contingent
> > > > on the last ref in a contobj being dropped. So if that ref is
> > > > maintained by audit_signal_info() until the AUDIT_SIGNAL_INFO2 record
> > > > is fetched, then it will appear that the fetch action closed the
> > > > container rather than the last process in the container to exit.
> > > >
> > > > Does this make sense?
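
To make sure we are all reading this the same way, here is a tiny userspace
model of the ordering being described (the contobj layout and the helper
names are made up for illustration; they are not taken from the ghak90
patches):

#include <stdio.h>
#include <stdlib.h>

struct contobj {
	unsigned long long contid;
	int refcount;
};

static struct contobj *contobj_get(struct contobj *o)
{
	if (o)
		o->refcount++;
	return o;
}

static void contobj_put(struct contobj *o)
{
	if (o && --o->refcount == 0) {
		/* The only place a "container death" record can fire in
		 * this scheme is when the last reference goes away. */
		printf("death record for contid %llu\n", o->contid);
		free(o);
	}
}

/* Models audit_signal_info(): pin the signal sender's contobj until
 * userspace fetches it with AUDIT_SIGNAL_INFO2. */
static struct contobj *sig_cid;

static void model_signal_info(struct contobj *sender)
{
	sig_cid = contobj_get(sender);
}

/* Models the AUDIT_SIGNAL_INFO2 case in audit_receive_msg(): report the
 * pinned contid, then drop the extra reference taken above. */
static void model_signal_info2(void)
{
	if (sig_cid) {
		printf("SIGNAL_INFO2 reports contid %llu\n", sig_cid->contid);
		contobj_put(sig_cid);
		sig_cid = NULL;
	}
}

int main(void)
{
	struct contobj *o = calloc(1, sizeof(*o));

	o->contid = 123;
	o->refcount = 1;	/* reference held by the last task */

	model_signal_info(o);	/* auditd is signalled */
	contobj_put(o);		/* last task in the container exits */
	model_signal_info2();	/* fetch: the death record fires here */
	return 0;
}

Run as-is it prints the SIGNAL_INFO2 line and only then the death record,
which is exactly the attribution question above: the record fires at the
fetch, not at the last task's exit.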
> > >
> > > More so than your original reply, at least to me anyway.
> > >
> > > It makes sense that the audit container ID wouldn't be marked as
> > > "dead" since it would still be very much alive and available for use
> > > by the orchestrator, the question is if that is desirable or not. I
> > > think the answer to this comes down to preserving the correctness of
> > > the audit log.
> > >
> > > If the audit container ID reported by AUDIT_SIGNAL_INFO2 has been
> > > reused then I think there is a legitimate concern that the audit log
> > > is not correct, and could be misleading. If we solve that by grabbing
> > > an extra reference, then there could also be some confusion as
> > > userspace considers a container to be "dead" while the audit container
> > > ID still exists in the kernel, and the kernel generated audit
> > > container ID death record will not be generated until much later (and
> > > possibly be associated with a different event, but that could be
> > > solved by unassociating the container death record).
> >
> > How does syscall association of the death record with AUDIT_SIGNAL_INFO2
> > possibly get associated with another event? Or is the syscall
> > association with the fetch for the AUDIT_SIGNAL_INFO2 the other event?
>
> The issue is when does the audit container ID "die". If it is when
> the last task in the container exits, then the death record will be
> associated with the task's exit. If the audit container ID lives on
> until the last reference of it in the audit logs, including the
> SIGNAL_INFO2 message, the death record will be associated with the
> related SIGNAL_INFO2 syscalls, or perhaps unassociated depending on
> the details of the syscalls/netlink.
>
> > Another idea might be to bump the refcount in audit_signal_info() but
> > mark that contid as dead so it can't be reused, if we are concerned that
> > the dead contid might be reused?
>
> Ooof. Yes, maybe, but that would be ugly.
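
Ugly, maybe, but for the record the variant Richard suggests seems like it
would only need a dead flag on the object, something like this rough sketch
(field and helper names are invented here, not from the patches):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct contobj {
	unsigned long long contid;
	int refcount;
	bool dead;	/* death record already emitted, no reuse allowed */
};

/* Last task in the container exits: emit the death record now, so it is
 * attributed to that exit, and mark the object dead even though
 * audit_signal_info() may still hold a reference to it. */
static void contobj_last_task_exit(struct contobj *o)
{
	printf("death record for contid %llu\n", o->contid);
	o->dead = true;
}

/* Assignment path: a dead-but-still-pinned contid must not be handed
 * out to a new task. */
static bool contid_assignable(const struct contobj *o)
{
	return o == NULL || !o->dead;
}

/* Final put (e.g. after AUDIT_SIGNAL_INFO2): just free, there is no
 * second death record. */
static void contobj_put(struct contobj *o)
{
	if (o && --o->refcount == 0)
		free(o);
}

int main(void)
{
	struct contobj *o = calloc(1, sizeof(*o));

	o->contid = 123;
	o->refcount = 2;	/* last task + audit_signal_info() pin */

	o->refcount--;			/* last task exits ... */
	contobj_last_task_exit(o);	/* ... death record fires right away */
	printf("assignable: %d\n", contid_assignable(o));	/* prints 0 */
	contobj_put(o);			/* SIGNAL_INFO2 drops the pin */
	return 0;
}

The trade-off is that the death record then lands with the last task's exit,
at the price of carrying dead-but-pinned objects in the ACID store until the
SIGNAL_INFO2 fetch drops them.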
>
> > There is still the problem later that the reported contid is incomplete
> > compared to the rest of the contid reporting cycle wrt nesting, since
> > AUDIT_SIGNAL_INFO2 will need to be more complex, with 2 variable-length
> > fields to accommodate a nested contid list.
>
> Do we really care about the full nested audit container ID list in the
> SIGNAL_INFO2 record?
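
If it ever turns out that we do, the payload being described would
presumably end up looking something like this (purely illustrative; this is
not a layout from the patch set):

#include <stdint.h>
#include <sys/types.h>

/* Illustrative only: an AUDIT_SIGNAL_INFO2 payload carrying both
 * variable-length pieces, the LSM context string and a nested contid
 * list.  This is NOT a structure defined by the ghak90 patches. */
struct sig_info2_example {
	uid_t    uid;
	pid_t    pid;
	uint32_t ctx_len;	/* bytes of LSM context that follow */
	uint32_t contid_count;	/* number of 64-bit contids after ctx */
	/* Followed in the netlink payload by:
	 *   char     ctx[ctx_len];
	 *   uint64_t contids[contid_count];  outermost contid first
	 */
};

Whether that extra complexity is worth it for a signal-info message is
really the question being asked above.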
>
> > > Of the two
> > > approaches, I think the latter is safer in that it preserves the
> > > correctness of the audit log, even though it could result in a delay
> > > of the container death record.
> >
> > I prefer the former since it strongly indicates the last task in the
> > container. The AUDIT_SIGNAL_INFO2 msg has the pid and other subject
> > attributes and the contid to strongly link the responsible party.
>
> Steve is the only one who really tracks the security certifications
> that are relevant to audit; see what the certification requirements
> have to say and we can revisit this.

Server Virtualization Protection Profile is the closest applicable standard:
https://www.niap-ccevs.org/Profile/Info.cfm?PPID=408&id=408
It is silent on audit requirements for the lifecycle of a VM. I assume that
all that is needed is what the orchestrator says it's doing at the high level.
So, if an orchestrator wants to shut down a container, the orchestrator must
log that intent and its results. In a similar fashion, systemd logs that it's
killing a service and we don't actually hook the exit syscall of the service
to record that.
Now, if a container was being used as a VPS, and it had a fully functioning
userspace, its own services, and its very own audit daemon, then in this
case it would care who sent a signal to its auditd. The tenant of that
container may have to comply with PCI-DSS or something else. It would log
that the audit service is being terminated, and systemd would record that
it's tearing down the environment. The OS doesn't need to do anything.
-Steve