Open Source and information security mailing list archives
 
Date: Tue, 21 Feb 2006 03:48:00 -0800
From: Crispin Cowan <crispin@...ell.com>
To: Ansgar -59cobalt- Wiechers <bugtraq@...netcobalt.net>
Cc: bugtraq@...urityfocus.com
Subject: Re: Vulnerabilities in new laws on computer hacking


Ansgar -59cobalt- Wiechers wrote:
> while I agree with you that for learning and practicing it would suffice
> to build your own systems to tamper with, I have to disagree on the part
> that hacking into other people's systems *without* doing any damage
> should be illegal.
>   
But an intrusion that causes no other privacy or integrity violations
DOES do damage. The sys admin has no way of knowing that you did no
damage, and so they have to commit large resources to either auditing
the box, or wiping it and starting over. Both are hugely expensive.

I agree with Paul; people who want to learn to hack can quite easily do
so with their own computers, and people who break into machines that
they are not authorized to use should be prosecuted to the full extent
of the law.

However, there is one hole here. Under the "hack your own machines"
policy, certain large/expensive systems (mainframes) are too expensive
for basement hackers to acquire. Thus they go largely unexamined. This
is a double-edged sword:

    * reduced expense for the vendor because of a lot less "bug of the
      week" patching
    * increased risk for system owners vs. *professional* intruders;
      because the script kiddies are not attacking these platforms, it
      is a "target rich environment" for professional,
      financially-motivated attackers

So if I were a consumer of these large systems, I would be a lot happier
with the vendor if they made systems available for researchers to attack.

*Note*: Open source vendors get a free ride here, because by definition
researchers can investigate the code all they want.

> In addition to that some vulnerabilities can be discovered only ITW,
> simply because you cannot rebuild that environment in your lab. Two
> years ago we had a case like that over here in Germany [2] (the article
> is in German, but maybe an online translator will help). The OBSOC
> (Online Business Solution Operation Center) system of the Deutsche
> Telekom AG did not do proper authentication, so by manipulating the URL
> you could access other customers' data. How would you detect such a
> vulnerability without actually hacking the system? Is one supposed to
> not notice these things? Will that really make them go away?
>   
This is an example of the hole. The proper thing for the defender to do
would be to put up a test system with fake accounts and invite attack
against the test system. If the site operator chooses not to do so, then
their customers bear the added risk. But under no circumstances is it
proper for researchers to deliberately hack production servers that they
do not own.
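For readers unfamiliar with the class of flaw described above: the OBSOC
bug is what is now commonly called an insecure direct object reference --
the server trusts an identifier taken straight from the URL without
checking that the authenticated session owns it. A minimal sketch of the
broken and corrected lookup, with entirely hypothetical names and data
(the real OBSOC internals are not public), might look like this:

```python
# Hypothetical illustration of the OBSOC-style flaw: a customer ID
# arrives in the URL, and the vulnerable handler serves whatever
# record it names, with no ownership check.

RECORDS = {
    "1001": "Alice's billing data",
    "1002": "Bob's billing data",
}

def fetch_record_vulnerable(url_customer_id, session_customer_id):
    # Vulnerable: the URL-supplied ID is used directly, so editing
    # the URL exposes other customers' records.
    return RECORDS.get(url_customer_id)

def fetch_record_fixed(url_customer_id, session_customer_id):
    # Fixed: refuse to serve a record unless the authenticated
    # session actually owns the requested customer ID.
    if url_customer_id != session_customer_id:
        raise PermissionError("not authorized for this customer")
    return RECORDS.get(url_customer_id)
```

The point of the example is that the fix is a one-line authorization
check; the hard part, as the thread discusses, is that the flaw is only
observable by requesting someone else's identifier.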

Crispin
-- 
Crispin Cowan, Ph.D.                      http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
	Olympic Games: The Bi-Annual Festival of Corruption



