Date: Wed, 22 Feb 2006 12:16:24 +0100
From: Ansgar -59cobalt- Wiechers <bugtraq@...netcobalt.net>
To: bugtraq@...urityfocus.com
Subject: Re: Vulnerabilities in new laws on computer hacking


On 2006-02-21 Crispin Cowan wrote:
> Ansgar -59cobalt- Wiechers wrote:
>> while I agree with you that for learning and practicing it would
>> suffice to build your own systems to tamper with, I have to disagree
>> on the part that hacking into other people's systems *without* doing
>> any damage should be illegal.
> 
> But an intrusion that causes no other privacy or integrity violations
> DOES do damage. The sys admin has no way of knowing that you did no
> damage, and so they have to commit large resources to either auditing
> the box, or wiping it and starting over. Both are hugely expensive.

But if there really *was* a hole that allowed an actual break-in, they
would have to do that anyway, because they wouldn't know whether anyone
had broken in before and covered their tracks, would they?

> I agree with Paul; people who want to learn to hack can quite easily
> do so with their own computers,

Crispin, please, I expressly said that I do agree with Paul on that
part.

> and people who break into machines that they are not authorized to use
> should be prosecuted to the full extent of the law.

I do not (fully) agree on this part, though. I already gave some reasons
why, e.g. when is one authorized to use a machine? Moreover, I do not
believe that companies which won't secure their servers properly (thus
putting themselves and/or their customers at risk) should be protected
by the law in this way. Legislation of this kind would encourage people
to care less about security than they already do, because if someone
breaks in, they will be able to sue the intruder.

[...]
>> In addition to that, some vulnerabilities can be discovered only in
>> the wild, simply because you cannot rebuild that environment in your
>> lab. Two years ago we had a case like that over here in Germany [2]
>> (the article is in German, but maybe an online translator will help).
>> The OBSOC (Online Business Solution Operation Center) system of
>> Deutsche Telekom AG did not do proper authentication, so by
>> manipulating the URL you could access other customers' data. How
>> would you detect such a vulnerability without actually hacking the
>> system? Is one supposed to not notice these things? Will that really
>> make them go away?
>   
> This is an example of the hole. The proper thing for the defender to
> do would be to put up a test system with fake accounts and invite
> attacks against the test system. If the site operator chooses not to
> do so, then the risk is borne by their customers. But under no
> circumstances is it proper for researchers to deliberately hack
> production servers that they do not own.

The OBSOC system is, AFAIK, closed source, and Deutsche Telekom would
not go to the trouble of putting up a test system for public testing.
The person who broke in was an actual customer. I repeat my question: Do
you really believe that this person should be prosecuted? Should he have
ignored the problem instead, leaving the other customers at risk?
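
For illustration, the flaw described above is of the "change the
identifier in the URL" kind: the application trusted an ID taken
straight from the request and never checked that it belonged to the
logged-in customer. Below is a minimal hypothetical sketch in Python of
that class of bug and its fix; OBSOC is closed source, so the names,
data, and functions here are invented and not Deutsche Telekom's actual
code.

    # Hypothetical sketch of the class of bug described above, not OBSOC itself.
    # The vulnerable handler returns whatever record the URL names; the fixed
    # handler also verifies that the record belongs to the authenticated user.

    CUSTOMER_RECORDS = {
        1001: {"owner": "alice", "contract": "DSL 3000"},
        1002: {"owner": "bob", "contract": "ISDN"},
    }

    def show_record_vulnerable(session_user, url_customer_id):
        # No ownership check: any logged-in customer can read any record
        # simply by editing the number in the URL.
        return CUSTOMER_RECORDS[url_customer_id]

    def show_record_fixed(session_user, url_customer_id):
        record = CUSTOMER_RECORDS.get(url_customer_id)
        # Authorization check: the record must belong to the requesting user.
        if record is None or record["owner"] != session_user:
            raise PermissionError("not authorized for this customer record")
        return record

    if __name__ == "__main__":
        print(show_record_vulnerable("bob", 1001))  # leaks alice's record
        try:
            show_record_fixed("bob", 1001)
        except PermissionError as err:
            print("denied:", err)

The only externally visible symptom is that changing the number in the
URL returns another customer's data, which is exactly the observation
the article describes.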

Regards
Ansgar Wiechers
-- 
"Der Computer ist da, um zu rechnen, nicht um Ausreden wie 'Kann nicht
durch Null teilen' auf den Bildschirm zu schreiben."
--Marco Haschka in de.org.ccc

