Message-ID: <Y3f6habiVuV9LMcu@google.com>
Date:   Fri, 18 Nov 2022 13:35:01 -0800
From:   Minchan Kim <minchan@...nel.org>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Nhat Pham <nphamcs@...il.com>, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        ngupta@...are.org, senozhatsky@...omium.org, sjenning@...hat.com,
        ddstreet@...e.org, vitaly.wool@...sulko.com
Subject: Re: [PATCH v5 4/6] zsmalloc: Add a LRU to zs_pool to keep track of
 zspages in LRU order

On Fri, Nov 18, 2022 at 03:05:04PM -0500, Johannes Weiner wrote:
> On Fri, Nov 18, 2022 at 11:32:01AM -0800, Minchan Kim wrote:
> > On Fri, Nov 18, 2022 at 10:24:05AM -0800, Nhat Pham wrote:
> > > @@ -1444,6 +1473,11 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
> > > 
> > >  	/* We completely set up zspage so mark them as movable */
> > >  	SetZsPageMovable(pool, zspage);
> > > +out:
> > > +#ifdef CONFIG_ZPOOL
> > > +	/* Move the zspage to front of pool's LRU */
> > > +	move_to_front(pool, zspage);
> > > +#endif
> > >  	spin_unlock(&pool->lock);
> > 
> > Please move the move_to_front into zs_map_object with ZS_MM_WO with
> > comment with "why we are doing only for WO case".
> 
> I replied to the other thread, but I disagree with this request.
> 
> The WO exception would be as zswap-specific as is the
> rotate-on-alloc. It doesn't make the resulting zsmalloc code any

That's true, but at least the zs_pool allocators share the map accessor,
so that is a fair place to do the LRU update. I guess that's why you
agreed it was the better place. No?

I understand it's zswap-specific, but this is how the bad design keeps
pushing smelly code into the allocators: "take it, since the others
already did" together with "we will remove it with a better solution in
the future". I am really struggling to accept that reasoning. Johannes,
is that really how we have worked for over a decade?

> cleaner or more generic, just weird in a slightly different way.
> 
> On the other hand, it makes zsmalloc deviate from the other backends
> and introduces new callchains that invalidate thousands of machine
> hours of production testing of this code.

Do you really believe this trivial change invalidates the testing?

        ret = zpool_malloc(entry->pool->zpool, hlen + dlen, gfp, &handle);
        if (ret == -ENOSPC) {
                zswap_reject_compress_poor++;
                goto put_dstmem;
        }
        if (ret) {
                zswap_reject_alloc_fail++;
                goto put_dstmem;
        }
        buf = zpool_map_handle(entry->pool->zpool, handle, ZPOOL_MM_WO);
        memcpy(buf, &zhdr, hlen);
        memcpy(buf + hlen, dst, dlen);
        zpool_unmap_handle(entry->pool->zpool, handle);
