Date:   Mon, 24 Oct 2016 13:54:13 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc:     Minchan Kim <minchan@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] zram: adjust the number of zram thread

On Fri, Oct 21, 2016 at 03:23:27PM +0900, Sergey Senozhatsky wrote:
> On (09/22/16 15:42), Minchan Kim wrote:
> [..]
> > +static int __zram_cpu_notifier(void *dummy, unsigned long action,
> > +				unsigned long cpu)
> >  {
> >  	struct zram_worker *worker;
> >  
> > -	while (!list_empty(&workers.worker_list)) {
> > +	switch (action) {
> > +	case CPU_UP_PREPARE:
> > +		worker = kmalloc(sizeof(*worker), GFP_KERNEL);
> > +		if (!worker) {
> > +			pr_err("Can't allocate a worker\n");
> > +			return NOTIFY_BAD;
> > +		}
> > +
> > +		worker->task = kthread_run(zram_thread, NULL, "zramd-%lu", cpu);
> > +		if (IS_ERR(worker->task)) {
> > +			kfree(worker);
> > +			pr_err("Can't allocate a zram thread\n");
> > +			return NOTIFY_BAD;
> > +		}
> 
> well, strictly speaking we have no strict bound-to-cpu (per-cpu)
> requirement here, we just want to have num_online_cpus() worker threads.
> if we fail to create one more worker thread nothing really bad happens,
> so I think we'd better not block that cpu from coming online.
> iow, always 'return NOTIFY_OK'.

If it doesn't make the code complicated, I will do that in the next spin.
Thanks.
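
A minimal sketch of that adjustment, returning NOTIFY_OK on both failure
paths so worker creation stays best-effort; the success-path bookkeeping
(the worker's list field and adding it to workers.worker_list) is assumed
from the original patch, not shown in the quoted hunk:

	static int __zram_cpu_notifier(void *dummy, unsigned long action,
					unsigned long cpu)
	{
		struct zram_worker *worker;

		switch (action) {
		case CPU_UP_PREPARE:
			worker = kmalloc(sizeof(*worker), GFP_KERNEL);
			if (!worker) {
				pr_err("Can't allocate a worker\n");
				/* best-effort: don't block the cpu from coming online */
				return NOTIFY_OK;
			}

			worker->task = kthread_run(zram_thread, NULL,
						   "zramd-%lu", cpu);
			if (IS_ERR(worker->task)) {
				kfree(worker);
				pr_err("Can't create a zram thread\n");
				/* likewise, failing here is not fatal */
				return NOTIFY_OK;
			}

			/* assumed from the original patch: track the new worker */
			list_add(&worker->list, &workers.worker_list);
			break;
		}

		return NOTIFY_OK;
	}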
