Date:	Wed, 3 Dec 2008 23:42:52 +1100
From:	Nick Andrew <nick@...k-andrew.net>
To:	Geoffrey McRae <geoff@...idhost.com>
Cc:	Alan Cox <alan@...rguk.ukuu.org.uk>, linux-kernel@...r.kernel.org
Subject: Re: New Security Features, Please Comment

On Wed, Dec 03, 2008 at 12:44:17PM +1100, Geoffrey McRae wrote:
> On Wed, 2008-12-03 at 00:53 +0000, Alan Cox wrote:
> > > (such as PHP) as the user that owns the website we are forced to fork a
> > > new process per request, then call setuid/gid and then launch the script
> > > language. This of course is resource intensive, but at present there is
> > > no other solution.

[...]

> But once this set is introduced a HTTP server could be written that uses
> forked children to handle requests, which have their identity switched
> before doing any work, including parsing CGI scripts.

I think we can do that already, using FastCGI.

As I understand it, the traditional CGI server system call flow is:

   accept
   fork
    \ setuid(user)
      exec(cgi script)

And I don't see how your 4 extra system calls would improve that flow.
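
For reference, that per-request flow in C looks roughly like this
(a minimal sketch: listen_fd, uid/gid and cgi_path are assumed to
come from the server's own config lookup, and error handling is
mostly omitted):

   #include <unistd.h>
   #include <sys/socket.h>
   #include <sys/types.h>
   #include <grp.h>

   void serve_one_request(int listen_fd, uid_t uid, gid_t gid,
                          const char *cgi_path)
   {
       int conn = accept(listen_fd, NULL, NULL);     /* accept */
       if (conn < 0)
           return;

       if (fork() == 0) {                            /* fork */
           dup2(conn, 0);          /* script talks to the socket */
           dup2(conn, 1);
           setgroups(0, NULL);
           setgid(gid);
           setuid(uid);                              /* setuid(user) */
           execl(cgi_path, cgi_path, (char *)NULL);  /* exec(cgi script) */
           _exit(127);                               /* exec failed */
       }
       close(conn);                  /* parent goes back to accept() */
   }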

The FastCGI flow is:

   setuid(user)
   exec(fastcgi script)
   loop receiving requests over a pipe and processing

In this case the handling process has already been forked and exec'ed,
so the time-consuming work is done once and the script can get on with
the business of processing requests as quickly as possible.
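
A rough sketch of that worker in C, assuming the web server hands each
pre-forked worker a request descriptor plus the target uid/gid (the
real FastCGI record protocol is elided; the loop just echoes so the
sketch stays self-contained):

   #include <unistd.h>
   #include <sys/types.h>
   #include <grp.h>

   void fastcgi_worker(int request_fd, uid_t uid, gid_t gid)
   {
       /* identity is switched exactly once, before any request */
       setgroups(0, NULL);
       setgid(gid);
       setuid(uid);

       char buf[8192];
       ssize_t n;

       /* loop receiving requests over the inherited pipe/socket */
       while ((n = read(request_fd, buf, sizeof buf)) > 0) {
           /* per-request work goes here; the expensive fork/exec
            * has already happened */
           write(request_fd, buf, (size_t)n);   /* placeholder echo */
       }
   }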

I'm sure there are other execution models where the CGI processors
are pre-forked. Executing the CGI script (e.g. through a scripting
language) is presumably the most expensive operation, but doing it in
advance isn't useful when you have a large number of distinct users
who won't all be running CGIs all the time. In other words, if you
have 300,000 users and only 1 CGI script, you won't want to pre-fork
300,000 processes each with a different uid. And if you have 300,000
users and 200,000 different CGI scripts, you again have no choice but
to fork and exec at request time, because there are too many distinct
scripts to keep resident.

Nick.
