Message-ID: <07df653066640842400c07aa7a06a0cfc592a854.camel@cisco.com>
Date: Wed, 3 Feb 2021 19:11:45 +0000
From: "Phil Zhang (xuanyzha)" <xuanyzha@...co.com>
To: "Daniel Walker (danielwa)" <danielwa@...co.com>,
"paul@...l-moore.com" <paul@...l-moore.com>
CC: "linux-audit@...hat.com" <linux-audit@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"eparis@...hat.com" <eparis@...hat.com>,
"xe-linux-external(mailer list)" <xe-linux-external@...co.com>
Subject: Re: [PATCH 2/2] audit: show (grand)parents information of an audit
context
On top of what Daniel just said:
As many components are tested in a regression run, it is expensive to
go back afterwards and reproduce the AVCs, and there is no way to know
in advance which processes will generate them. The obvious alternative
would be to audit all fork/exec, but that could easily blow up the
audit log. That said, we'd like to hear other approaches.
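
For concreteness, auditing every exec would look something like the
rules below (a sketch only; "proc-tree" is an arbitrary key name
chosen here for illustration, not anything from the patch):

  # log every execve on the 64-bit and 32-bit ABIs, tagged for later searching
  auditctl -a always,exit -F arch=b64 -S execve -k proc-tree
  auditctl -a always,exit -F arch=b32 -S execve -k proc-tree

On a busy regression rig this emits SYSCALL/EXECVE records for every
process started, which is exactly the log-volume problem above.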
On Wed, 2021-02-03 at 18:57 +0000, Daniel Walker (danielwa) wrote:
> On Tue, Feb 02, 2021 at 04:44:47PM -0500, Paul Moore wrote:
> > On Tue, Feb 2, 2021 at 4:29 PM Daniel Walker <danielwa@...co.com> wrote:
> > > From: Phil Zhang <xuanyzha@...co.com>
> > >
> > > To ease the root cause analysis of SELinux AVCs, this new feature
> > > traverses task structs to iteratively find all parent processes,
> > > starting with the denied process and ending at the kernel. Along
> > > the way, it prints the command lines and subject contexts of
> > > those parents.
> > >
> > > This gives developers a clear view of how processes were spawned
> > > and where domain transitions happened, without needing to
> > > reproduce the issue and manually audit the interesting events.
> > >
> > > Example on bash over ssh:
> > > $ runcon -u system_u -r system_r -t polaris_hm_t ls
> > > ...
> > > type=PARENT msg=audit(1610548241.033:255): subj=root:unconfined_r:unconfined_t:s0-s0:c0.c1023 cmdline="-bash"
> > > type=PARENT msg=audit(1610548241.033:255): subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 cmdline="sshd: root@.../0"
> > > type=PARENT msg=audit(1610548241.033:255): subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 cmdline="/tmp/sw/rp/0/0/rp_security/mount/usr/sbin/sshd"
> > > type=PARENT msg=audit(1610548241.033:255): subj=system_u:system_r:init_t:s0 cmdline="/init"
> > > type=PARENT msg=audit(1610548241.033:255): subj=system_u:system_r:kernel_t:s0
> > > ...
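
As a point of comparison, much of what these PARENT records show can
be approximated from userspace by walking /proc, but only for
ancestors that are still alive when you look. A rough sketch (the
output format is ours, not the patch's; reading another process's
attr/current may require privilege):

  # walk PPid links from the current shell up to pid 1
  pid=$$
  while [ "$pid" -gt 0 ]; do
      ctx=$(tr -d '\0' < /proc/$pid/attr/current 2>/dev/null)  # SELinux context
      cmd=$(tr '\0' ' ' < /proc/$pid/cmdline)                  # argv, NUL-separated
      echo "pid=$pid subj=$ctx cmdline=\"$cmd\""
      pid=$(awk '/^PPid:/ {print $2}' /proc/$pid/status)
  done

The reason for doing this in the kernel at AVC time is that
short-lived ancestors are gone before userspace can walk the chain.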
> > >
> > > Cc: xe-linux-external@...co.com
> > >
> > > Signed-off-by: Phil Zhang <xuanyzha@...co.com>
> > > Signed-off-by: Daniel Walker <danielwa@...co.com>
> > > ---
> > >  include/uapi/linux/audit.h |  5 ++-
> > >  init/Kconfig               |  7 +++++
> > >  kernel/audit.c             |  3 +-
> > >  kernel/auditsc.c           | 64 ++++++++++++++++++++++++++++++++++++++
> > >  4 files changed, 77 insertions(+), 2 deletions(-)
> >
> > This is just for development/testing of SELinux policy, right? It
> > seems to me like this is better done in userspace, through a
> > combination of policy analysis and an understanding of how your
> > system is put together.
>
>
> That's why the patch was created: to better understand the system.
>
> > If you really need this information in the audit log for some
> > production use, it seems like you could audit the various
> > fork()/exec() syscalls to get an understanding of the various
> > process (sub)trees on the system. It would require a bit of work
> > to sift through the audit log and reconstruct the events that led
> > to a process being started and to the AVC you are interested in
> > debugging, but folks who live The Audit Life supposedly do this
> > sort of thing a lot (this sort of thing being tracing a
> > process/session).
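
For what it's worth, with exec auditing in place (as in the rule
sketch earlier in this thread), the reconstruction described above is
mostly a matter of searching by key or session, e.g.:

  $ ausearch -k proc-tree -i        # all audited execs, interpreted
  $ ausearch --session <id> -i      # narrow to one login session

and then stitching the pid=/ppid= fields of the SYSCALL records back
into a tree.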
>
> We have a very complex and constantly changing system, and we use
> shell scripts some of the time. If a shell script triggers an AVC,
> the log will typically show the tool that was called rather than the
> shell script that invoked it.
>
> We do have audit enabled on production systems, and I think we
> collect these logs in case of issues in production. Phil is better
> placed to address this.
>
> We're willing to try alternatives like what you suggested above.
>
> Daniel
>