Date:   Tue, 21 Sep 2021 09:37:40 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org
Cc:     Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
        Matthew Wilcox <willy@...radead.org>,
        John Garry <john.garry@...wei.com>,
        linux-block@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [RFC v2 PATCH] mm, sl[au]b: Introduce lockless cache

> @@ -424,6 +431,57 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
>  }
>  EXPORT_SYMBOL(kmem_cache_create);
>  
> +/**
> + * kmem_cache_alloc_cached - try to allocate from cache without lock
> + * @s: slab cache
> + * @flags: SLAB flags
> + *
> + * Try to allocate from the cache without taking a lock. If that fails,
> + * refill the lockless cache using the bulk alloc API.
> + *
> + * The caller must ensure there is no race condition. The slab cache must
> + * be created with the SLAB_LOCKLESS_CACHE flag to use this function.
> + *
> + * Return: a pointer to free object on allocation success, NULL on failure.
> + */
> +void *kmem_cache_alloc_cached(struct kmem_cache *s, gfp_t gfpflags)
> +{
> +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> +
> +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));
> +
> +	if (cache->size) /* fastpath without lock */
> +		return cache->queue[--cache->size];
> +
> +	/* slowpath */
> +	cache->size = kmem_cache_alloc_bulk(s, gfpflags,
> +			KMEM_LOCKLESS_CACHE_QUEUE_SIZE, cache->queue);
> +	if (cache->size)
> +		return cache->queue[--cache->size];
> +	else
> +		return NULL;
> +}
> +EXPORT_SYMBOL(kmem_cache_alloc_cached);

How does this work with preemption? You seem to assume that the function
is invoked with preemption disabled, but then it could only be used with
GFP_ATOMIC.

There are basically two types of use cases for this:

1) Freeing can happen from interrupts
2) Freeing cannot happen from interrupts

What I implemented for IOPOLL doesn't need to care about interrupts,
hence disabling preemption is enough. But we do at least need that here.

And if you don't care about users that free from irq/softirq context,
then that should be documented. The required locking context should be
documented, too. The above may be just fine IFF both alloc and free are
protected by a lock higher up. If not, both need preemption disabled and
GFP_ATOMIC. I'd suggest making the get/put cpu handling part of the API
internally, something like the sketch below.
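An untested sketch of what I mean, assuming the struct layout from your
patch. get_cpu_ptr()/put_cpu_ptr() disable and re-enable preemption
around the per-cpu access; note this still doesn't cover freeing from
irq/softirq context:

	void *kmem_cache_alloc_cached(struct kmem_cache *s, gfp_t gfpflags)
	{
		struct kmem_lockless_cache *cache;
		void *p = NULL;

		/* get_cpu_ptr() disables preemption, pinning us to this CPU */
		cache = get_cpu_ptr(s->cache);

		/*
		 * The refill runs with preemption disabled, so gfpflags must
		 * not allow sleeping (i.e. GFP_ATOMIC).
		 */
		if (!cache->size)
			cache->size = kmem_cache_alloc_bulk(s, gfpflags,
					KMEM_LOCKLESS_CACHE_QUEUE_SIZE,
					cache->queue);
		if (cache->size)
			p = cache->queue[--cache->size];

		put_cpu_ptr(s->cache);
		return p;
	}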

> +/**
> + * kmem_cache_free_cached - return object to cache
> + * @s: slab cache
> + * @p: pointer to free
> + */
> +void kmem_cache_free_cached(struct kmem_cache *s, void *p)
> +{
> +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> +
> +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));

Don't use BUG_ON, just do:

	if (WARN_ON_ONCE(!(s->flags & SLAB_LOCKLESS_CACHE))) {
		kmem_cache_free(s, p);
		return;
	}
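And the free side could get the same treatment. Again untested, and the
queue-full path below is just illustrative since your slowpath isn't
quoted above:

	void kmem_cache_free_cached(struct kmem_cache *s, void *p)
	{
		struct kmem_lockless_cache *cache;

		if (WARN_ON_ONCE(!(s->flags & SLAB_LOCKLESS_CACHE))) {
			kmem_cache_free(s, p);
			return;
		}

		/* disable preemption while we touch the per-cpu queue */
		cache = get_cpu_ptr(s->cache);
		if (cache->size == KMEM_LOCKLESS_CACHE_QUEUE_SIZE) {
			/* queue full, hand the whole batch back in bulk */
			kmem_cache_free_bulk(s, cache->size, cache->queue);
			cache->size = 0;
		}
		cache->queue[cache->size++] = p;
		put_cpu_ptr(s->cache);
	}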

-- 
Jens Axboe
