Message-ID: <1475141514.4676.28.camel@redhat.com>
Date:   Thu, 29 Sep 2016 11:31:54 +0200
From:   Paolo Abeni <pabeni@...hat.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
        James Morris <jmorris@...ei.org>,
        Trond Myklebust <trond.myklebust@...marydata.com>,
        Alexander Duyck <aduyck@...antis.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Tom Herbert <tom@...bertland.com>,
        Hannes Frederic Sowa <hannes@...essinduktion.org>,
        Edward Cree <ecree@...arflare.com>, linux-nfs@...r.kernel.org
Subject: Re: [PATCH net-next v3 2/3] udp: implement memory accounting helpers

On Wed, 2016-09-28 at 18:42 -0700, Eric Dumazet wrote:
> On Wed, 2016-09-28 at 12:52 +0200, Paolo Abeni wrote:
> 
> > +static void udp_rmem_release(struct sock *sk, int partial)
> > +{
> > +	struct udp_sock *up = udp_sk(sk);
> > +	int fwd, amt;
> > +
> > +	if (partial && !udp_under_memory_pressure(sk))
> > +		return;
> > +
> > +	/* we can have concurrent release; if we catch any conflict
> > +	 * we let only one of them do the work
> > +	 */
> > +	if (atomic_dec_if_positive(&up->can_reclaim) < 0)
> > +		return;
> > +
> > +	fwd = __udp_forward(up, atomic_read(&sk->sk_rmem_alloc));
> > +	if (fwd < SK_MEM_QUANTUM + partial) {
> > +		atomic_inc(&up->can_reclaim);
> > +		return;
> > +	}
> > +
> > +	amt = (fwd - partial) & ~(SK_MEM_QUANTUM - 1);
> > +	atomic_sub(amt, &up->mem_allocated);
> > +	atomic_inc(&up->can_reclaim);
> > +
> > +	__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
> > +	sk->sk_forward_alloc = fwd - amt;
> > +}
> 
> 
> This is racy... all these atomics make me nervous...

Ah, perhaps I got it: if we have concurrent memory scheduling, we
could end up with a mem_allocated value below the real need.

That mismatch will not drift: at worst we can end up with mem_allocated
being a single SK_MEM_QUANTUM below what is strictly needed.
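
To spell out my reading of the race (illustrative only, not proposed
code):

/*
 * Possible interleaving, assuming SK_MEM_QUANTUM == 4096 (PAGE_SIZE
 * on most archs) and two CPUs touching the same socket:
 *
 *   CPU0: udp_rmem_release()          CPU1: memory scheduling
 *   -------------------------         -----------------------
 *   reads mem_allocated == A
 *   computes fwd and amt from A
 *                                     atomic_add(X, &up->mem_allocated)
 *                                     (mem_allocated == A + X)
 *   atomic_sub(amt, &up->mem_allocated)
 *
 * amt was computed from the stale snapshot A, so mem_allocated can end
 * up below what CPU1 actually needs, but the error is bounded by one
 * SK_MEM_QUANTUM and does not accumulate across releases.
 */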

A possible alternative could be:

static void udp_rmem_release(struct sock *sk, int partial)
{
	struct udp_sock *up = udp_sk(sk);
	int fwd, amt, alloc_old, alloc;

	if (partial && !udp_under_memory_pressure(sk))
		return;

	alloc = atomic_read(&up->mem_allocated);
	fwd = alloc - atomic_read(&sk->sk_rmem_alloc);
	if (fwd < SK_MEM_QUANTUM + partial)
		return;

	amt = (fwd - partial) & ~(SK_MEM_QUANTUM - 1);
	alloc_old = atomic_cmpxchg(&up->mem_allocated, alloc, alloc - amt);
	/* if a concurrent update is detected, just do nothing; if said update
	 * is due to another memory release, that release takes care of
	 * reclaiming the memory for us, too.
	 * Otherwise we will be able to release on a later dequeue, since
	 * we will eventually stop colliding with the writer once it has
	 * consumed all the forward-allocated memory.
	 */
	if (alloc_old != alloc)
		return;

	__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
	sk->sk_forward_alloc = fwd - amt;
}

which is even lazier in reclaiming, but should never underestimate the
needed forward allocation, and under memory pressure should eventually
free the needed memory.
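
As a sanity check on the reclaim math (illustrative values only, again
assuming SK_MEM_QUANTUM == 4096, i.e. PAGE_SIZE):

/*
 * alloc   = atomic_read(&up->mem_allocated)  e.g. 16384
 * rmem    = atomic_read(&sk->sk_rmem_alloc)  e.g.  3000
 * fwd     = alloc - rmem                     =    13384
 * partial = 1 (release on dequeue)
 * amt     = (fwd - partial) & ~(SK_MEM_QUANTUM - 1)
 *         = 13383 & ~4095                    =    12288 (3 quanta)
 *
 * after a successful cmpxchg:
 *   mem_allocated    = 16384 - 12288 = 4096
 *   sk_forward_alloc = 13384 - 12288 = 1096
 *   amt >> SK_MEM_QUANTUM_SHIFT = 3 pages handed back via
 *   __sk_mem_reduce_allocated()
 */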


