Message-ID: <20190708122033.GA19624@kroah.com>
Date:   Mon, 8 Jul 2019 14:20:33 +0200
From:   Greg KH <gregkh@...uxfoundation.org>
To:     Oded Gabbay <oded.gabbay@...il.com>
Cc:     "Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>,
        Omer Shpigelman <oshpigelman@...ana.ai>,
        Tomer Tayar <ttayar@...ana.ai>
Subject: Re: [PATCH] habanalabs: use correct variable to show fd open counter

On Mon, Jul 08, 2019 at 02:51:33PM +0300, Oded Gabbay wrote:
> On Mon, Jul 8, 2019 at 2:43 PM Greg KH <gregkh@...uxfoundation.org> wrote:
> >
> > On Mon, Jul 08, 2019 at 02:30:13PM +0300, Oded Gabbay wrote:
> > > On Mon, Jul 8, 2019 at 2:21 PM Greg KH <gregkh@...uxfoundation.org> wrote:
> > > >
> > > > On Mon, Jul 08, 2019 at 01:43:55PM +0300, Oded Gabbay wrote:
> > > > > The current code checks if the user context pointer is NULL or not to
> > > > > display the number of open file descriptors of a device. However, that
> > > > > variable (user_ctx) will eventually go away as the driver will support
> > > > > multiple processes. Instead, the driver can use the atomic counter of
> > > > > the open file descriptors which the driver already maintains.
> > > > >
> > > > > Signed-off-by: Oded Gabbay <oded.gabbay@...il.com>
> > > > > ---
> > > > >  drivers/misc/habanalabs/sysfs.c | 2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/drivers/misc/habanalabs/sysfs.c b/drivers/misc/habanalabs/sysfs.c
> > > > > index 25eb46d29d88..881be19b5fad 100644
> > > > > --- a/drivers/misc/habanalabs/sysfs.c
> > > > > +++ b/drivers/misc/habanalabs/sysfs.c
> > > > > @@ -356,7 +356,7 @@ static ssize_t write_open_cnt_show(struct device *dev,
> > > > >  {
> > > > >       struct hl_device *hdev = dev_get_drvdata(dev);
> > > > >
> > > > > -     return sprintf(buf, "%d\n", hdev->user_ctx ? 1 : 0);
> > > > > +     return sprintf(buf, "%d\n", atomic_read(&hdev->fd_open_cnt));
> > > > >  }
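
For context, a counter like fd_open_cnt is normally incremented in the
driver's open() callback and decremented in release(). A minimal, generic
sketch of that pattern, including the "only one opener" check debated
below (hypothetical code, not the actual habanalabs implementation):

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/module.h>

/* Hypothetical sketch, not the habanalabs code: count open fds in
 * open()/release() and reject a second concurrent opener.  As the thread
 * notes, this only catches plain open() calls; dup() and fd passing over
 * sockets never go through open(). */
static atomic_t fd_open_cnt = ATOMIC_INIT(0);

static int demo_open(struct inode *inode, struct file *filp)
{
	if (atomic_inc_return(&fd_open_cnt) > 1) {
		atomic_dec(&fd_open_cnt);
		return -EBUSY;
	}
	return 0;
}

static int demo_release(struct inode *inode, struct file *filp)
{
	atomic_dec(&fd_open_cnt);
	return 0;
}

static const struct file_operations demo_fops = {
	.owner   = THIS_MODULE,
	.open    = demo_open,
	.release = demo_release,
};

The patch above only changes how that counter is reported through sysfs;
the open()-time check is what the rest of the thread is really about.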
> > > >
> > > > Odds are, this means nothing, as it doesn't get touched if the file
> > > > descriptor is duped or sent to another process.
> > > >
> > > > Why do you care about the number of open files?  Whenever someone tries
> > > > to do this type of logic, it is almost always wrong; just let userspace
> > > > do what it wants to do, and if it wants to open something twice, then it
> > > > gets to keep the pieces when things break.
> > > >
> > > > thanks,
> > > >
> > > > greg k-h
> > >
> > > I care about the number of open file descriptors because I can't let
> > > multiple processes run simultaneously on my device, as we still don't
> > > have the code to do proper isolation between the processes with regard
> > > to memory accesses on our device memory and use of our DMA engine.
> > > Basically, it's a security hole. If you want, I can explain this issue
> > > at greater length.
> >
> > But the issue is that you can't "force" this from the kernel side at
> > all.  Trying to catch this at open() time only catches the obvious
> > processes.
> >
> > As I said, how do you check for:
> >         fd = open(...);
> >         fd_new = dup(fd);
> >
> >         write(fd, ...);
> >         write(fd_new, ...);
> >
> > or, pass the fd across a socket?  Or other fun ways of sending file
> > descriptors around a system.
> >
> > You have to trust userspace here, sorry.  If someone wants to do
> > multiple accesses, they can, but again, they deserve the pieces when
> > things fall apart.
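
For illustration, a minimal userspace sketch of the socket-based fd
passing Greg mentions (hypothetical code, assuming a connected AF_UNIX
socket such as one end of socketpair()): the receiver ends up with its
own working descriptor for the device without ever calling open(), so an
open()-time counter never sees it.

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Hypothetical sketch: hand an already-open fd to another process over a
 * connected AF_UNIX socket using an SCM_RIGHTS control message. */
static int send_fd(int sock, int fd)
{
	char dummy = 'x';
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	union {
		struct cmsghdr align;
		char buf[CMSG_SPACE(sizeof(int))];
	} u;
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg;

	memset(&u, 0, sizeof(u));
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}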
> 
> I see what you are saying, but from a *security* perspective, I don't
> think I really care about the scenarios above, right?
> Because that would mean a user duplicated his *own* fd. Sure, things
> won't work for him, but what do I care about that, as you rightly
> said.
> I'm only concerned about the security risk, where there is a
> legitimate user and a malicious one. Because I can't isolate them from
> each other, I want to be able to allow only one of them to run.

But how can you tell if the first one is the "malicious" one and the
second one is "legitimate"?  You do that with the "normal" file
permissions and the like; you don't try to do a "first one wins!" type
of policy, that's crazy :)

> I don't care if one of them duplicates his own FD, right?

If you are trying to keep someone from having multiple FDs per device,
then yes, you should care.  The point is, you can't know.

> Please tell me if my assumption here is correct or not, because this
> has implications.

Don't rely on "first one wins!" as a security policy to prevent bad
things from happening.  That's never going to work; let userspace police
these things, as that is the only place you can do it properly (or with
a selinux/apparmor/lsm policy).
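
As one illustration of policing this from userspace instead (hypothetical
code, not something from this thread): cooperating processes can agree to
take an exclusive advisory lock on the device node before using it, which
keeps well-behaved users from stepping on each other without pretending
to be a kernel-enforced security boundary.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

/* Hypothetical sketch: a "one user at a time" convention enforced by the
 * users themselves with a non-blocking exclusive advisory lock.  It only
 * constrains processes that opt in; it is not kernel-enforced isolation. */
int open_device_exclusive(const char *path)
{
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return -1;

	if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
		fprintf(stderr, "%s is busy: %s\n", path, strerror(errno));
		close(fd);
		return -1;
	}

	return fd;	/* the lock is released when the open file is closed */
}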

> > > We have the H/W infrastructure for that, using an MMU and multiple
> > > address space IDs (ASIDs), but we haven't written the code in the
> > > driver yet, as that is a BIG feature and it wasn't requested by anyone
> > > yet.
> > >
> > > So the current solution is to block the ability to open multiple file
> > > descriptors.
> > >
> > > Regarding this specific sysfs property, I don't really care about it.
> > > I simply saw that it is shown in other drivers and I thought it may be
> > > nice for a system admin utility to show it.
> >
> > What drivers show the number of open file descriptors?  Time to go
> > delete them as well :)
> hehe
> I tried to grep for it but I couldn't find any. Strange because I was
> sure I saw this in some driver.

If you run across it again, please let me know.  It's a common
"anti-pattern" that I have been guilty of in the past.

thanks,

greg k-h
