Date:	Mon, 30 Nov 2015 20:14:24 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:	"kyeongdon.kim" <kyeongdon.kim@....com>
Cc:	Minchan Kim <minchan@...nel.org>,
	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [PATCH v3 2/2] zram: try vmalloc() after kmalloc()

On (11/30/15 19:42), kyeongdon.kim wrote:
[..]
> Sorry to have kept you waiting.
> Obviously, I couldn't see the allocation failure message with this patch.
> But there is something causing some delay (not sure yet if this is normal).

what delay? how significant is it? do you see it in practice, or is it just
a guess?

> static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
> {
> <snip>
> 
>    zstrm->private = comp->backend->create();
> 	            ^ // sometimes returns NULL repeatedly (2-5 times)
> 
> As you know, if NULL is returned, this function is called again to
> get memory in the while() loop. I just checked this with printk().
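
(for reference, the kmalloc-then-vmalloc fallback under discussion is roughly
the sketch below; the exact gfp flags in the posted lzo_create() change may
differ, so treat this as an illustration only)

        static void *lzo_create(void)
        {
                void *ret;

                /* try a cheap physically contiguous allocation first;
                 * __GFP_NOWARN because we have a fallback */
                ret = kzalloc(LZO1X_MEM_COMPRESS, GFP_NOIO | __GFP_NOWARN);
                if (!ret)
                        /* the compression workmem does not need to be
                         * physically contiguous, so vmalloc'ed memory
                         * is acceptable */
                        ret = __vmalloc(LZO1X_MEM_COMPRESS,
                                        GFP_NOIO | __GFP_NOWARN |
                                        __GFP_HIGHMEM | __GFP_ZERO,
                                        PAGE_KERNEL);
                return ret;
        }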

well, not always.

a) current wait_event()s for an available stream to become idle
b) once woken up, current attempts to grab an idle stream
c) if it gets a zstrm, return it
d) if there is no idle stream and the streams limit has been reached, goto a)
e) else try to allocate a new stream; if !zstrm, goto a), else return

        while (1) {
                spin_lock(&zs->strm_lock);
                if (!list_empty(&zs->idle_strm)) {
                        zstrm = list_entry(zs->idle_strm.next,
                                        struct zcomp_strm, list);
                        list_del(&zstrm->list);
                        spin_unlock(&zs->strm_lock);
                        return zstrm;
                }
                /* zstrm streams limit reached, wait for idle stream */
                if (zs->avail_strm >= zs->max_strm) {
                        spin_unlock(&zs->strm_lock);
                        wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
                        continue;
                }
                /* allocate new zstrm stream */
                zs->avail_strm++;
                spin_unlock(&zs->strm_lock);

                zstrm = zcomp_strm_alloc(comp);
                if (!zstrm) {
                        spin_lock(&zs->strm_lock);
                        zs->avail_strm--;
                        spin_unlock(&zs->strm_lock);
                        wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
                        continue;
                }
                break;
        }

so it's possible for current to call zcomp_strm_alloc() several times...

do you see the same process doing N zcomp_strm_alloc() calls, or is it N processes
each doing one zcomp_strm_alloc()? I think the latter is more likely; once we fail
to zcomp_strm_alloc(), it's quite possible that N concurrent or subsequent IOs will
do the same. That's why I proposed decreasing ->max_strm; but we basically don't know
when we should roll it back to the original value; I'm not sure I want to do something
like: every 42nd IO, try to increment ->max_strm by one, until it gets back to the
original value.
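
(very roughly, the "shrink on failure, slowly grow back" idea above would look
something like the sketch below; ->max_strm and ->strm_lock are the existing
zcomp_strm_multi fields, while ->orig_max_strm is a made-up field used here
only for illustration)

        /* on zcomp_strm_alloc() failure: back off under memory pressure */
        static void zcomp_strm_shrink(struct zcomp_strm_multi *zs)
        {
                spin_lock(&zs->strm_lock);
                if (zs->max_strm > 1)
                        zs->max_strm--;
                spin_unlock(&zs->strm_lock);
        }

        /* every Nth IO: try to restore one stream slot, up to the
         * original limit */
        static void zcomp_strm_maybe_grow(struct zcomp_strm_multi *zs)
        {
                static atomic_t io_count = ATOMIC_INIT(0);

                if (atomic_inc_return(&io_count) % 42)
                        return;

                spin_lock(&zs->strm_lock);
                if (zs->max_strm < zs->orig_max_strm)
                        zs->max_strm++;
                spin_unlock(&zs->strm_lock);
        }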

so I'd probably prefer to keep it the way it is; but let's see the numbers from
you first.

	-ss