Message-ID: <Y9nDXBt2OR3hg5X7@google.com>
Date:   Wed, 1 Feb 2023 10:41:48 +0900
From:   Sergey Senozhatsky <senozhatsky@...omium.org>
To:     Nhat Pham <nphamcs@...il.com>
Cc:     akpm@...ux-foundation.org, hannes@...xchg.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, minchan@...nel.org,
        ngupta@...are.org, senozhatsky@...omium.org, sjenning@...hat.com,
        ddstreet@...e.org, vitaly.wool@...sulko.com, kernel-team@...a.com
Subject: Re: [PATCH] zsmalloc: fix a race with deferred_handles storing

On (23/01/10 15:17), Nhat Pham wrote:
[..]
>  #ifdef CONFIG_ZPOOL
> +static void restore_freelist(struct zs_pool *pool, struct size_class *class,
> +		struct zspage *zspage)
> +{
> +	unsigned int obj_idx = 0;
> +	unsigned long handle, off = 0; /* off is within-page offset */
> +	struct page *page = get_first_page(zspage);
> +	struct link_free *prev_free = NULL;
> +	void *prev_page_vaddr = NULL;
> +
> +	/* in case no free object found */
> +	set_freeobj(zspage, (unsigned int)(-1UL));

I'm not following this. I see how -1UL works for link_free, but
truncating -1UL down to a 4-byte unsigned int looks suspicious.
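
To spell out what bothers me (a standalone userspace sketch, nothing
zsmalloc-specific): the truncating conversion itself is well defined
for unsigned types, it just reads as if we actually wanted UINT_MAX:

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		/* On a 64-bit target, -1UL is 0xffffffffffffffff */
		unsigned long all_ones = -1UL;

		/*
		 * Conversion to a narrower unsigned type is reduction
		 * modulo 2^32 here, so the result is always UINT_MAX
		 * (0xffffffff).
		 */
		unsigned int truncated = (unsigned int)all_ones;

		printf("%#lx -> %#x\n", all_ones, truncated);
		return truncated == UINT_MAX ? 0 : 1;
	}

If the intent really is "no free object found", wouldn't
set_freeobj(zspage, UINT_MAX) (or a named sentinel) state that more
directly?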

> +	while (page) {
> +		void *vaddr = kmap_atomic(page);
> +		struct page *next_page;
> +
> +		while (off < PAGE_SIZE) {
> +			void *obj_addr = vaddr + off;
> +
> +			/* skip allocated object */
> +			if (obj_allocated(page, obj_addr, &handle)) {
> +				obj_idx++;
> +				off += class->size;
> +				continue;
> +			}
> +
> +			/* free deferred handle from reclaim attempt */
> +			if (obj_stores_deferred_handle(page, obj_addr, &handle))
> +				cache_free_handle(pool, handle);
> +
> +			if (prev_free)
> +				prev_free->next = obj_idx << OBJ_TAG_BITS;
> +			else /* first free object found */
> +				set_freeobj(zspage, obj_idx);
> +
> +			prev_free = (struct link_free *)vaddr + off / sizeof(*prev_free);
> +			/* if last free object in a previous page, need to unmap */
> +			if (prev_page_vaddr) {
> +				kunmap_atomic(prev_page_vaddr);
> +				prev_page_vaddr = NULL;
> +			}
> +
> +			obj_idx++;
> +			off += class->size;
> +		}
> +
> +		/*
> +		 * Handle the last (full or partial) object on this page.
> +		 */
> +		next_page = get_next_page(page);
> +		if (next_page) {
> +			if (!prev_free || prev_page_vaddr) {
> +				/*
> +				 * There is no free object in this page, so we can safely
> +				 * unmap it.
> +				 */
> +				kunmap_atomic(vaddr);
> +			} else {
> +				/* update prev_page_vaddr since prev_free is on this page */
> +				prev_page_vaddr = vaddr;
> +			}

A polite and gentle nit: I'd appreciate it if we honored the kernel
coding style in zsmalloc a little bit more: comments, function
declarations, and so on. I'm personally very happy with
https://github.com/vivien/vim-linux-coding-style
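
On the comments point specifically, the canonical multi-line form from
Documentation/process/coding-style.rst (for everything outside net/)
looks like:

	/*
	 * This is the preferred style for multi-line comments in the
	 * Linux kernel source: opening and closing markers on their
	 * own lines, with a column of asterisks in between.
	 */

which is also roughly what checkpatch.pl nudges block comments toward.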
