Message-ID: <Y/rbk03cJc44StNr@google.com>
Date:   Sun, 26 Feb 2023 13:09:55 +0900
From:   Sergey Senozhatsky <senozhatsky@...omium.org>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Sergey Senozhatsky <senozhatsky@...omium.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Yosry Ahmed <yosryahmed@...gle.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv2 4/6] zsmalloc: rework compaction algorithm

On (23/02/23 15:46), Minchan Kim wrote:
> >  	spin_lock(&pool->lock);
> > -	while ((src_zspage = isolate_src_zspage(class))) {
> > -		/* protect someone accessing the zspage(i.e., zs_map_object) */
> > -		migrate_write_lock(src_zspage);
> > -
> > -		if (!zs_can_compact(class))
> > -			break;
> > -
> > -		cc.obj_idx = 0;
> > -		cc.s_page = get_first_page(src_zspage);
> > -
> > -		while ((dst_zspage = isolate_dst_zspage(class))) {
> > -			migrate_write_lock_nested(dst_zspage);
> > -
> > +	while (1) {
> 
> Hmm, I preferred the old loop structure. Did you see any problem
> to keep old code structure?

Unfortunately we cannot keep the current structure, as it would create a
conflicting/reversed locking pattern.

What we currently have is that the source page is isolated first and its
migration lock is the outer lock:

	migrate_write_lock src

The destination page is isolated second and its migration lock is the nested one:

	migrate_write_lock_nested dst

Since the destination page lock is nested, we always need to unlock it before
we unlock the outer lock (the source page migration lock). If we kept the
destination locked (the nested lock, which would be a bug), then on the next
iteration we would isolate a new source page and try to migrate_write_lock()
it, except that the source page migration lock would now effectively be
nested, taken under another still-held nested lock (which is another bug).
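
In other words, keeping dst locked across iterations would produce a sequence
roughly like this (a sketch of the ordering only, not actual code; src1/src2
are just illustrative names):

	migrate_write_lock(src1);		/* outer */
	migrate_write_lock_nested(dst);		/* nested */
	migrate_write_unlock(src1);		/* outer dropped, nested still held */
	migrate_write_lock(src2);		/* "outer" taken under a held nested lock */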

Hence we need to flip the structure: we isolate the destination page first,
its lock becomes the outer lock and we keep it held for as long as we need,
while the source page lock becomes the nested one. I think that's the
simplest way.
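
Roughly, the reworked loop then has this shape (a simplified sketch, not the
exact patch; the cc bookkeeping, putback and stats handling are omitted):

	while (1) {
		dst_zspage = isolate_dst_zspage(class);
		if (!dst_zspage)
			break;
		migrate_write_lock(dst_zspage);			/* outer lock */

		while ((src_zspage = isolate_src_zspage(class))) {
			migrate_write_lock_nested(src_zspage);	/* nested lock */

			if (!zs_can_compact(class)) {
				migrate_write_unlock(src_zspage);
				break;
			}

			/* ... migrate objects from src to dst ... */

			migrate_write_unlock(src_zspage);	/* nested released first */
		}

		migrate_write_unlock(dst_zspage);		/* outer released last */
	}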
