Open Source and information security mailing list archives
 
Date: Fri,  6 Jul 2007 21:27:34 +0200
From: "Harry Behrens (mobile)" <harry@...rens.com>
To: "Rob McCauley" <robm.fd@...il.com>
Cc: full-disclosure@...ts.grok.org.uk
Subject: Re: Does this exist ?

Rob,
while you are essentially right, I believe the original post carried the (implicit) assumption that certain _sequences_ of patterns might occur relatively frequently, thus representing positive information or negentropy. By mutually indexing these shared and relatively rare sequences, the original idea would make sense. In fact, the choice of these patterns can be fully automatic - precisely by selecting for negentropy/information. It only makes sense if we assume payload >> packet ID/hash.
And yes, it is analogous to what traditional compression does to documents - applied instead to (networked) streams.
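[Editor's note: the shared-sequence indexing described above can be sketched as a toy shared codebook. Everything here is hypothetical and illustrative - the codebook entries, names, and framing are assumptions, not anything proposed in the thread:]

```python
# Toy sketch of the shared-codebook idea: both endpoints hold an identical
# table mapping short IDs to frequently seen byte sequences; the sender
# substitutes an ID wherever a known sequence occurs. (Entries below are
# made-up examples.)
CODEBOOK = {0: b"GET / HTTP/1.1\r\n", 1: b"Host: example.com\r\n"}
LOOKUP = {seq: i for i, seq in CODEBOOK.items()}

def encode(chunks):
    # Emit (True, id) for sequences both sides already share,
    # (False, raw_bytes) for everything else.
    return [(True, LOOKUP[c]) if c in LOOKUP else (False, c) for c in chunks]

def decode(tokens):
    # Reverse the substitution using the same shared table.
    return b"".join(CODEBOOK[t] if is_id else t for is_id, t in tokens)
```

The scheme only pays off when the indexed sequences really are long and frequent relative to the IDs - exactly the payload >> ID assumption above.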

I do hope I did understand the original post correctly.


regards,

 -h

- original message -
Subject:	Re: [Full-disclosure] Does this exist ?
From:	"Rob McCauley" <robm.fd@...il.com>
Date:		06.07.2007 18:45

Ya know, I don't think he does get that part yet.

This scheme is essentially how data compression already works.  Not in
gigantic swaths of bits, as being proposed here, but in smallish numbers:
a few bits represent a bigger set of bits.  Huffman coding is a basic
example.
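[Editor's note: a minimal sketch of the Huffman coding mentioned above, for readers unfamiliar with it - frequent symbols get short codes, rare symbols get long ones. Illustrative only:]

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Build Huffman codes from symbol frequencies. Heap entries are
    # [frequency, tiebreak, [(symbol, code_so_far), ...]].
    counts = Counter(text)
    heap = [[f, i, [(s, "")]] for i, (s, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol input
        return {heap[0][2][0][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # Prefix codes in the rarer subtree with '0', the commoner with
        # '1', then merge the two subtrees into one node.
        merged = ([(s, "0" + c) for s, c in lo[2]] +
                  [(s, "1" + c) for s, c in hi[2]])
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return dict(heap[0][2])
```

For "aaaabbc" the common symbol 'a' gets a 1-bit code while 'b' and 'c' get 2 bits each - a few bits standing in for a bigger set of bits, as described above.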

The infeasibility of this idea is all about the data size.  As someone
already pointed out, 2^4000 is not 16,000,000 (that's 4000^2).  2^4000 is
large enough to just call it infinite and be done with it.
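[Editor's note: the arithmetic above is easy to check directly:]

```python
# Quick check of the magnitudes discussed above.
assert 4000 ** 2 == 16_000_000     # the figure from the earlier confusion
digits = len(str(2 ** 4000))       # decimal digits in 2^4000
print(digits)                      # 1205 digits - a 1205-digit table size
```

A database indexed by 4000-bit keys would need about 10^1204 entries.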

For comparison, there are roughly 10^80 (about 2^266) atoms in the
observable universe.  The hardware you'd need to implement a database of
that size would require more matter than exists.  Period.

This idea is only interesting if it works at the scale proposed.  It
doesn't.  On a smaller scale, this is how data compression is already done.

Rob

>
> On Fri, Jul 06, 2007 at 01:52:55 -0500, Dan Becker wrote:
> > So we generate a packet using the idpacket field of a database to
> > describe which packets should be assembled in which order then send
> > it. 1 packet to send 500.
>
> Do you realize the id of the packet(s) would be equivalent to the contents
> of the packet(s)?
>
> See also
> http://en.wikipedia.org/wiki/Information_entropy#Entropy_as_information_content
>
>
> _______________________________________________
> Full-Disclosure - We believe in it.
> Charter: http://lists.grok.org.uk/full-disclosure-charter.html
> Hosted and sponsored by Secunia - http://secunia.com/
>
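[Editor's note: the counting objection in the quoted exchange can be made concrete. The packet sizes below are assumed figures, purely for illustration:]

```python
# Pigeonhole sketch: an ID that can name any possible payload must be able
# to take on as many distinct values as there are payloads, so it cannot
# be shorter than the payload itself.
PACKETS, PAYLOAD_BYTES = 500, 1500              # assumed: 500 packets x 1500 bytes
payload_bits = PACKETS * PAYLOAD_BYTES * 8      # bits of arbitrary content
distinct_payloads = 2 ** payload_bits           # every bit pattern is possible
id_bits = (distinct_payloads - 1).bit_length()  # bits needed to index them all
assert id_bits == payload_bits                  # the ID is as big as the data
```

"1 packet to send 500" therefore buys nothing: the one packet's ID must carry all 6,000,000 bits of the 500 payloads it names.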


