Message-ID: <Pine.LNX.4.64.0706151656170.12349@asgard.lang.hm>
Date: Fri, 15 Jun 2007 17:01:25 -0700 (PDT)
From: david@...g.hm
To: Greg KH <greg@...ah.com>
cc: Crispin Cowan <crispin@...ell.com>, Pavel Machek <pavel@....cz>,
Andreas Gruenbacher <agruen@...e.de>,
Stephen Smalley <sds@...ho.nsa.gov>, jjohansen@...e.de,
linux-kernel@...r.kernel.org,
linux-security-module@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation,
pathname matching
On Fri, 15 Jun 2007, Greg KH wrote:
> On Fri, Jun 15, 2007 at 04:30:44PM -0700, Crispin Cowan wrote:
>> Greg KH wrote:
>>> On Fri, Jun 15, 2007 at 10:06:23PM +0200, Pavel Machek wrote:
>>>> Only case where attacker _can't_ be keeping file descriptors is newly
>>>> created files in recently moved tree. But as you already create files
>>>> with restrictive permissions, that's okay.
>>>>
>>>> Yes, you may get some -EPERM during the tree move, but AA has that
>>>> problem already, see that "when madly moving trees we sometimes
>>>> construct path file never ever had".
>>>>
>>> Exactly.
>>>
>> You are remembering old behavior. The current AppArmor generates only
>> correct and consistent paths. If a process has an open file descriptor
>> to such a file, they will retain access to it, as we described here:
>> http://forgeftp.novell.com//apparmor/LKML_Submission-May_07/techdoc.pdf
>>
>> Under the restorecon-alike proposal, you have a HUGE open race. This
>> post http://bugs.centos.org/view.php?id=1981 describes restorecon
>> running for 30 minutes relabeling a file system. That is so far from
>> acceptable that it is silly.
>
> Ok, so we fix it. Seriously, it shouldn't be that hard. If that's the
> only problem we have here, it isn't an issue.
How do you 'fix' the laws of physics?

The problem is that with a directory that contains lots of files below it,
you have to touch every file's metadata to change the labels. Walking the
entire tree and relabeling every file can take a significant amount of time.
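To make the scale of that concrete, here is a rough sketch (not restorecon
itself, and assuming labels stored in an xattr the way SELinux's
security.selinux is; the label string is just an example) of what any
relabel-the-tree approach has to do: one metadata write per object, for
every object under the directory.

    /* sketch only -- assumes xattr-based labels a la SELinux */
    #include <stdio.h>
    #include <string.h>
    #include <ftw.h>
    #include <sys/xattr.h>

    static const char *label = "system_u:object_r:httpd_sys_content_t:s0";

    static int relabel(const char *path, const struct stat *sb,
                       int type, struct FTW *ftw)
    {
            /* one metadata write per file/dir -- this is the cost
             * that scales with the size of the tree */
            if (lsetxattr(path, "security.selinux",
                          label, strlen(label), 0) == -1)
                    perror(path);
            return 0;               /* keep walking */
    }

    int main(int argc, char **argv)
    {
            if (argc != 2) {
                    fprintf(stderr, "usage: %s <tree>\n", argv[0]);
                    return 1;
            }
            /* nftw() visits every entry under the tree exactly once */
            return nftw(argv[1], relabel, 64, FTW_PHYS) ? 1 : 0;
    }

There is no way around visiting every inode, so the wall-clock time grows
with the number of files no matter how cleverly the tool is written.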
>>> I can't think of a "real world" use of moving directory trees around
>>> that this would come up in as a problem.
>> Consider this case: We've been developing a new web site for a month,
>> and testing it on the server by putting it in a different virtual
>> domain. We want to go live at some particular instant by doing an mv of
>> the content into our public HTML directory. We simultaneously want to
>> take the old web site down and archive it by moving it somewhere else.
>>
>> Under the restorecon proposal, the web site would be horribly broken
>> until restorecon finishes, as various random pages are or are not
>> accessible to Apache.
>
> Usually you don't do that by doing a 'mv' otherwise you are almost
> guaranteed stale and mixed up content for some period of time, not to
> mention the issues surrounding paths that might be messed up.
On the contrary, using 'mv' is by far the cleanest way to do this:

mv htdocs htdocs.old; mv htdocs.new htdocs

This makes only two atomic changes to the filesystem, but it can generate
thousands to millions of permission changes as a result.
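(For the record, on the same filesystem 'mv' comes down to rename(2), so
the whole cut-over really is just two directory operations; a minimal
sketch, reusing the htdocs names from above:)

    /* sketch: the same swap as two rename(2) calls; each call is a
     * single atomic directory operation and neither one touches the
     * thousands of files underneath either tree */
    #include <stdio.h>

    int main(void)
    {
            if (rename("htdocs", "htdocs.old") == -1) {
                    perror("rename htdocs -> htdocs.old");
                    return 1;
            }
            if (rename("htdocs.new", "htdocs") == -1) {
                    perror("rename htdocs.new -> htdocs");
                    return 1;
            }
            return 0;
    }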
David Lang