Date: Fri, 20 Nov 2015 09:49:04 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: "kyeongdon.kim" <kyeongdon.kim@....com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>, minchan@...nel.org,
	Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
	sergey.senozhatsky@...il.com
Subject: Re: [PATCH] zram: Prevent page allocation failure during zcomp_strm_alloc

Hello,

On (11/19/15 22:49), kyeongdon.kim wrote:
[..]
> I know what you mean (streams are not free).
> First of all, I'm sorry I would have to tell you why I try this patch.

nothing to be sorry about.

> When we're using LZ4 multi stream for zram swap, I found out this
> allocation failure message in system running test,
> That was not only once, but a few. Also, some failure cases were
> continually occurring to try allocation order 3.
> I believed that status makes time delay issue to process launch.

ahhh, I see. repetitive allocation failures during heavy I/O and low
memory conditions.

hm... does it actually make any sense to do something like this
(*just an idea*)

---

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index 5cb13ca..049ae6b 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -124,6 +124,19 @@ static struct zcomp_strm *zcomp_strm_multi_find(struct zcomp *comp)
 		zstrm = zcomp_strm_alloc(comp);
 		if (!zstrm) {
 			spin_lock(&zs->strm_lock);
+			/*
+			 * If current IO path has failed to allocate a new
+			 * stream then most likely lots of subsequent IO
+			 * requests will do the same, wasting time attempting
+			 * to reclaim pages, printing warning, etc. Reduce
+			 * the number of max_stream and print an error.
+			 */
+			if (zs->max_strm > 1) {
+				zs->max_strm--;
+				pr_err("%s: reduce max_comp_streams to %d\n",
+						"Low memory",
+						zs->max_strm);
+			}
 			zs->avail_strm--;
 			spin_unlock(&zs->strm_lock);
 			wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));

---

A 'broken English' comment can shed some light on the idea.
Hopefully, by the time we reduce ->max_strm there are N (>1) streams
already. In the worst case we go down to a single stream, but that is
something that would have happened anyway -- we don't have memory for N
streams. We don't roll back the max_strm value (at least not in this
version); we basically don't know when the low memory condition goes
away -- maybe never.

> So I tried to modify this by pre-allocating and solved it, even if there
> was small memory using in advance.
>
> But because of this patch, if there is an unneeded memory using.
> I want to try new patch from another angle.
> Firstly, we can change 'vmalloc()' instead of 'kzalloc()' for the
> 'zstrm->private'.

Yes, we can try this, I guess.

> Secondly, we can use GFP_NOWAIT flag to avoid this warning after 2nd
> stream allocation.

GFP_NOWAIT. hm... no, I don't think so.

	-ss
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/