Date: Wed, 22 Dec 2004 12:27:55 -0500
From: Adam Shostack <adam@...eport.org>
To: "D. J. Bernstein" <djb@...yp.to>
Cc: bugtraq@...urityfocus.com
Subject: Re: Local versus remote security holes


There is a rough standard for what local and remote mean.  The
standard may not be as precise as you'd like.  Using old terms with
new definitions doesn't advance the debate; it generates confusion.
This is especially the case when you haven't rigorously defined the
proposed new meanings of the terms.

I've long advocated 'credentialed' to refer to attacks where a user of
the system can execute the attack, and 'anonymous' or
'non-credentialed' to refer to attacks on servers, such as httpd,
ftpd, or named.  These attacks can be launched by anyone, from
anywhere (barring interference from firewalls or the like).

That a user took action to start a server doesn't mean that the
attacker needs credentials to attack it.

Adam

On Wed, Dec 22, 2004 at 07:40:42AM -0000, D. J. Bernstein wrote:
| Stephen Harris writes:
| > In your example, a local user MUST take action in order to perform
| > the exploit, therefore the exploit is local.
| 
| Practically all UNIX security holes are ``local'' according to your
| criterion. A peer-to-peer server, for example, or even a DNS server,
| isn't started without action by a local user.
| 
| If you're trying to say that the rare programs that run by default are
| particularly important, because they're running on so many machines,
| then I'll agree. But this distinction is of no use in filtering a list
| of security holes!
| 
| Suppose there's a security hole in a program that _isn't_ running by
| default. Does this mean that some readers can skip the hole? Of course
| not. Maybe it's a security hole in a browser, or in some other program
| that the reader uses.
| 
| In contrast, there are many security holes that apply only to multiuser
| machines, and not to, e.g., a typical laptop. That's what my ``local''
| label is for. Many users can skip these reports.
| 
| As for NASM, most of the commentators here obviously haven't read the
| security report, which explained the (unusual) limitations of this
| particular security hole in considerable detail. Relevant excerpt:
| 
|    You are at risk if you receive an asm file from an email message (or a
|    web page or any other source that could be controlled by an attacker)
|    and feed that file through NASM. Whoever provides that asm file then has
|    complete control over your account: he can read and modify your files,
|    watch the programs you're running, etc.
|    
|    Of course, if you _run_ a program, you're authorizing the programmer to
|    take control of your account; but the NASM documentation does not say
|    that merely _assembling_ a program can have this effect. It's easy to
|    imagine situations in which a program is run inside a jail but assembled
|    outside the jail; this NASM bug means that the jail is ineffective.
| 
| These limitations---although quite severe---don't confine the problem to
| multiuser machines, so labelling this security hole as ``local'' would
| be wrong.
| 
| ---D. J. Bernstein, Associate Professor, Department of Mathematics,
| Statistics, and Computer Science, University of Illinois at Chicago
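
To make the jail scenario in the quoted excerpt concrete, here is a
minimal, hypothetical Python sketch of the workflow Bernstein
describes: the attacker-controlled .asm file is assembled on the host,
and only the resulting binary is run inside a jail.  The file names,
the jail path, and the use of chroot are illustrative assumptions, not
details from the NASM report.

    # Hypothetical sketch: untrusted source is assembled OUTSIDE the
    # jail; only the finished binary is executed INSIDE it.
    import subprocess

    UNTRUSTED_SRC = "contrib.asm"   # attacker-controlled input (e.g. from email)
    JAIL_ROOT = "/srv/jail"         # hypothetical chroot used only for running

    # Step 1: assemble outside the jail.  A parser bug in the assembler
    # triggered by this file already runs attacker code as this user.
    subprocess.run(["nasm", "-f", "elf", "-o", f"{JAIL_ROOT}/prog.o",
                    UNTRUSTED_SRC], check=True)
    subprocess.run(["ld", "-o", f"{JAIL_ROOT}/prog",
                    f"{JAIL_ROOT}/prog.o"], check=True)

    # Step 2: run inside the jail.  The containment applies only here,
    # after the assembler has already processed the untrusted input.
    subprocess.run(["chroot", JAIL_ROOT, "/prog"], check=True)

Any containment applied at the run step comes too late if the
assembler itself can be compromised by its input.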

