Message-ID: <vdYcLmlCSbNw.NKS130Pj@4snewcom.de>
Date: Fri, 6 Jul 2007 21:53:28 +0200
From: "Harry Behrens (4S newcom)" <harry@...ewcom.de>
To: "Harry Behrens \(mobile\)" <harry@...rens.com>
Cc: Rob McCauley <robm.fd@...il.com>, full-disclosure@...ts.grok.org.uk
Subject: correction: Does this exist ?
Bad typo:
"shared and relatively rare sequences" should read "shared and relatively frequent sequences".
By transmitting the sequence index instead of the payload it is
theoretically possible to reduce payload size, i.e. to compress, and,
in the case where not all packets are available to an interceptor,
also to obfuscate somewhat.
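To make that concrete, here is a toy Python sketch - the table
contents and the one-byte index width are invented for illustration,
and both ends are assumed to already hold the same table of frequent
sequences:

    # Toy shared-dictionary substitution: both ends hold the same
    # table of frequent byte sequences, so only an index needs to be
    # sent. (Illustrative only; table and sizes are made up.)
    SHARED = [b"GET / HTTP/1.1\r\n", b"Host: example.com\r\n", b"\r\n\r\n"]
    INDEX = {seq: i for i, seq in enumerate(SHARED)}

    def encode(payload: bytes) -> bytes:
        # Known sequence: send a 1-byte index tagged 0x01;
        # otherwise send the raw bytes tagged 0x00.
        if payload in INDEX:
            return bytes([1, INDEX[payload]])
        return bytes([0]) + payload

    def decode(wire: bytes) -> bytes:
        if wire[0] == 1:
            return SHARED[wire[1]]
        return wire[1:]

    msg = b"GET / HTTP/1.1\r\n"
    assert decode(encode(msg)) == msg
    print(len(msg), "->", len(encode(msg)))  # 16 -> 2

Real traffic would need a wider index and a negotiated table, but the
principle is the same.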
The mother of all these schemes - or at least a related one - is Bruce
Schneier's "shuffle"-based encryption scheme.
sorry for the confusion,
-h
- original message -
Subject: Re: [Full-disclosure] Does this exist ?
From: "Harry Behrens (mobile)" <harry@...rens.com>
Date: 06.07.2007 19:27
Rob,
while you are essentially right, I believe the original post had the
(implicit) assumption that certain _sequences_ of patterns might occur
relatively frequently, thus representing positive information or
negentropy. By mutually indexing these shared and relatively rare
sequences the original idea would make sense. In fact the choice of
these patterns can be fully automatic - precisely by going for
negentropy/information. It only makes sense if we assume payload >>
packet ID/hash.
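As a rough numerical illustration of the negentropy point (the sample
stream below is invented; any real traffic would do): the more often a
block repeats, the fewer bits per occurrence it actually carries, and
that slack is what a shared index could exploit.

    # Shannon entropy of 4-byte blocks in a sample stream. Repeated
    # blocks drag the per-block entropy well below the 32-bit raw
    # maximum. (Sample data is invented for illustration.)
    from collections import Counter
    from math import log2

    stream = b"ABCDABCDABCDxyzwABCDABCDqrstABCD" * 8
    blocks = [stream[i:i+4] for i in range(0, len(stream), 4)]
    counts = Counter(blocks)
    total = len(blocks)
    entropy = -sum((c/total) * log2(c/total) for c in counts.values())
    print(f"{entropy:.2f} bits/block vs 32 bits raw")  # ~1.06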
And yes, it is analogous to what traditional encryption does to
documents - applied instead to (networked) streams.
I do hope I did understand the original post correctly.
regards,
-h
- original message -
Subject: Re: [Full-disclosure] Does this exist ?
From: "Rob McCauley" <robm.fd@...il.com>
Date: 06.07.2007 18:45
Ya know, I don't think he does get that part yet.
This scheme is essentially how data compression already works. Not in
gigantic swaths of bits, as proposed here, but in smallish numbers: a
few bits represent a bigger set of bits. Huffman coding is a basic
example.
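For anyone who hasn't seen it, a minimal Huffman coder in Python (a
sketch for illustration, not production code):

    # Minimal Huffman coding: frequent symbols get short codes,
    # rare symbols get long ones.
    import heapq
    from collections import Counter

    def huffman_codes(data: bytes) -> dict:
        # Heap entries: (frequency, tiebreak, tree). A tree is either
        # a symbol (leaf) or a (left, right) pair.
        heap = [(f, i, s) for i, (s, f) in enumerate(Counter(data).items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, count, (left, right)))
            count += 1
        codes = {}
        def walk(tree, prefix=""):
            if isinstance(tree, tuple):
                walk(tree[0], prefix + "0")
                walk(tree[1], prefix + "1")
            else:
                codes[tree] = prefix or "0"
        walk(heap[0][2])
        return codes

    data = b"aaaaaaaabbbbccd"
    codes = huffman_codes(data)
    bits = sum(len(codes[b]) for b in data)
    print(codes, f"{bits} bits vs {8 * len(data)} raw")  # 25 vs 120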
The infeasibility of this idea is all about the data size. As someone
already pointed out, 2^4000 is not 16,000,000 (that's 4000^2). 2^4000
is large enough to just call it infinite and be done with it.
For comparison, there are something like 2^260 to 2^270 or so atoms in
the known universe (roughly 10^80). The hardware you'd need to
implement a database of that size would require more matter than
exists. Period.
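The arithmetic is easy to check with Python's arbitrary-precision
integers:

    # 4000^2 is the 16 million figure; 2^4000 has over a thousand
    # decimal digits and dwarfs the atom count.
    print(4000 ** 2)             # 16000000
    print(len(str(2 ** 4000)))   # 1205 decimal digits
    print(2 ** 4000 > 10 ** 80)  # True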
This idea is only interesting if it works at the scale proposed. It
doesn't. On a smaller scale, this is how data compression is already done.
Rob
>
> On Fri, Jul 06, 2007 at 01:52:55 -0500, Dan Becker wrote:
> > So we generate a packet using the idpacket field of a database to
> > describe which packets should be assembled in which order then send
> > it. 1 packet to send 500.
>
> Do you realize the id of the packet(s) would be equivalent to the contents
> of the packet(s)?
>
> See also
> http://en.wikipedia.org/wiki/Information_entropy#Entropy_as_information_content
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/