Date:   Mon, 5 Sep 2016 22:12:49 -0400
From:   Jeffrey Altman <jaltman@...istor.com>
To:     David Howells <dhowells@...hat.com>,
        David Laight <David.Laight@...LAB.COM>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-afs@...ts.infradead.org" <linux-afs@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 4/9] rxrpc: Randomise epoch and starting client
 conn ID values

Reply inline ....

On 9/5/2016 12:24 PM, David Howells wrote:
> [cc'ing Jeff Altman for comment]
> 
> David Laight <David.Laight@...LAB.COM> wrote:
> 
>>> Create a random epoch value rather than a time-based one on startup and set
>>> the top bit to indicate that this is the case.
>>
>> Why set the top bit?
>> There is nothing to stop the time (in seconds) from having the top bit set.
>> Nothing else can care - otherwise this wouldn't work.
> 
> This is what I'm told I should do by purveyors of other RxRPC solutions.

The protocol specification requires that the top bit be 1 for a random
epoch and 0 for a time-derived epoch.
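
As a minimal sketch (assumed code, not the actual patch), forcing the top
bit on a randomly generated epoch in the kernel might look like:

    /* Hypothetical sketch: draw a random epoch and force the top bit on
     * so peers can tell it is random rather than time-derived.
     */
    u32 epoch;

    get_random_bytes(&epoch, sizeof(epoch));
    epoch |= 0x80000000;    /* top bit set => randomly generated epoch */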
> 
>>> Also create a random starting client connection ID value.  This will be
>>> incremented from here as new client connections are created.
>>
>> I'm guessing this is to make duplicates less likely after a restart?

It's to reduce the possibility of duplicates on multiple machines that
might at some point exchange an endpoint address, either due to mobility
or NAT/PAT.
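
A minimal sketch of that part (names are assumptions, not the actual
patch):

    /* Hypothetical sketch: seed the client connection ID counter with a
     * random starting value at startup so two hosts that later come to
     * share an endpoint address are unlikely to collide.
     */
    static atomic_t rxrpc_next_client_conn_id;   /* assumed name */

    static void rxrpc_seed_client_conn_id(void)
    {
        u32 seed;

        get_random_bytes(&seed, sizeof(seed));
        atomic_set(&rxrpc_next_client_conn_id, seed);
    }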
> 
> Again, it's been suggested that I do this, but I would guess so.
> 
>> You may want to worry about duplicate allocations (after 2^32 connects).
> 
> It's actually a quarter of that, but connection != call, so a connection may
> be used for up to ~16 billion RPC operations before it *has* to be flushed.
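
If I'm reading the ID layout correctly (an assumption: the bottom two bits
of the connection ID select one of four call channels), the arithmetic is
roughly:

    2^32 / 4          = 2^30  ~ 1.07e9 client connection IDs
    4 channels * 2^32 ~ 1.7e10 ~ 16-17 billion calls per connection

i.e. the connection ID space wraps after about a billion connections, but
each connection can carry billions of calls before it has to be replaced.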
> 
>> There are id allocation algorithms that guarantee not to generate duplicates
>> and not to reuse values quickly while still being fixed cost.
>> Look at the code NetBSD uses to allocate process ids for an example.
> 
> I'm using idr_alloc_cyclic()[*] with a fixed size "window" on the active conn
> ID values.  Client connections with IDs outside of that window are discarded
> as soon as possible to keep the memory consumption of the tree down (and to
> force security renegotiation occasionally).  However, given that there are a
> billion IDs to cycle through, it will take quite a while for reuse to become
> an issue.
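
For anyone unfamiliar with that helper, a rough sketch of the allocation
(simplified, with assumed names; the active-window pruning is omitted):

    /* Hypothetical sketch: allocate client connection IDs cyclically in a
     * 2^30-entry ID space using idr_alloc_cyclic().  Real code would also
     * prune connections that fall outside the active window.
     */
    static DEFINE_IDR(rxrpc_client_conn_ids);

    static int rxrpc_get_client_conn_id(struct rxrpc_connection *conn,
                                        u32 *_cid)
    {
        int id;

        idr_preload(GFP_KERNEL);
        id = idr_alloc_cyclic(&rxrpc_client_conn_ids, conn,
                              1, 0x40000000, GFP_NOWAIT);
        idr_preload_end();
        if (id < 0)
            return id;

        *_cid = id << 2;    /* bottom two bits select the call channel */
        return 0;
    }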
> 
> I like the idea of incrementing the epoch every time we cycle through the ID
> space, but I'm told that a change in the epoch value is an indication that the
> client rebooted - with what consequences I cannot say.

State information might be recorded about an rx peer with the assumption
that state will be reset when the epoch changes.  The most frequent use
of this technique is for rx rpc statistics monitoring.
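
As a purely illustrative (assumed) example of that pattern:

    /* Hypothetical illustration: a monitor keeps per-peer statistics and
     * discards them whenever the peer's epoch changes, on the theory that
     * a new epoch means the peer restarted.
     */
    struct rx_peer_stats {
        u32 last_epoch;
        u64 packets_seen;
    };

    static void rx_peer_note_packet(struct rx_peer_stats *stats, u32 epoch)
    {
        if (stats->last_epoch != epoch) {
            memset(stats, 0, sizeof(*stats));   /* peer presumed rebooted */
            stats->last_epoch = epoch;
        }
        stats->packets_seen++;
    }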


> 
> [*] which is what Linux uses to allocate process IDs.
> 
> David
> 

