Message-ID: <20160307101233.GA10690@swordfish>
Date:	Mon, 7 Mar 2016 19:12:33 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:	Jan Kara <jack@...e.cz>
Cc:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
	akpm@...ux-foundation.org, jack@...e.com, pmladek@...e.com,
	tj@...nel.org, linux-kernel@...r.kernel.org,
	sergey.senozhatsky@...il.com
Subject: Re: [RFC][PATCH v2 1/2] printk: Make printk() completely async

Hello,

On (03/07/16 09:22), Jan Kara wrote:
[..]
> > hm, just to note, none of the system-wide wqs seem to have a ->rescuer thread
> > (WQ_MEM_RECLAIM).
> > 
> > [..]
> > > Even if you use printk_wq with WQ_MEM_RECLAIM for the printing_work work item,
> > > printing_work_func() will not be called until the current work item calls
> > > schedule_timeout_*(). That will be an undesirable random delay. If you use
> > > a dedicated kernel thread rather than a dedicated workqueue with WQ_MEM_RECLAIM,
> > > you can avoid this random delay.
> > 
> > hm. yes, it seems that it may take some time until the workqueue wakes up
> > a ->rescuer thread. need to look more.
> 
> Yes, it takes some time (0.1s or 2 jiffies) before the workqueue code gives
> up creating a worker process and wakes up the rescuer thread. However I
> don't see that as a problem...

yes, that's why I asked Tetsuo whether his concern was the wq's MAYDAY timer
delay. the two commits that Tetsuo pointed at earlier in the thread
(373ccbe59270 and 564e81a57f97) solved the problem by switching to a
WQ_MEM_RECLAIM wq. I've lightly tested OOM-kill on my desktop system and
haven't spotted any printk delays (well, a test on a desktop is not really
representative, of course).
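
just to spell it out, the WQ_MEM_RECLAIM variant being discussed would
look roughly like this (printk_wq/printing_work_func are the names from
Tetsuo's mail; the rest is only a sketch, not the actual patch):

	#include <linux/init.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *printk_wq;
	static void printing_work_func(struct work_struct *work);
	static DECLARE_WORK(printing_work, printing_work_func);

	static int __init printk_wq_init(void)
	{
		/*
		 * WQ_MEM_RECLAIM pre-allocates a ->rescuer thread, but
		 * the MAYDAY timer still waits a couple of jiffies before
		 * waking it up -- the delay Jan mentions above.
		 */
		printk_wq = alloc_workqueue("printk_wq", WQ_MEM_RECLAIM, 0);
		return printk_wq ? 0 : -ENOMEM;
	}

vprintk_emit() (or whoever defers the printing) would then do
queue_work(printk_wq, &printing_work) instead of waking a kthread.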


the only thing that has grabbed my attention so far is

	__this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
	irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));

a _theoretical_ corner case here is when we have only one CPU doing a bunch
of printk()s and this CPU disables IRQs in advance:
	local_irq_save(flags);
	for (...)
		printk(...);
	local_irq_restore(flags);

if no other CPU sees `printk_pending' then nothing will be printed until
local_irq_restore() (assuming that the IRQ-disabled time is within the
hardlockup detection threshold). if any other CPU concurrently executes
printk() then we are fine, but
	a) if none -- then we probably have a small change in behaviour
and
	b) on UP systems there is no other CPU at all, so the delay is
	   guaranteed.
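
(for completeness: irq_work_queue() queues the work on the *local* CPU,
and the consumer, wake_up_klogd_work_func(), runs from the irq_work
interrupt, so with IRQs disabled it simply cannot fire. roughly, from
kernel/printk/printk.c:

	static void wake_up_klogd_work_func(struct irq_work *irq_work)
	{
		int pending = __this_cpu_xchg(printk_pending, 0);

		if (pending & PRINTK_PENDING_OUTPUT) {
			/* if trylock fails, someone else does the printing */
			if (console_trylock())
				console_unlock();
		}

		if (pending & PRINTK_PENDING_WAKEUP)
			wake_up_interruptible(&log_wait);
	}
)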

[..]
> > such usage is quite possible.
> > 
> > problems that I have with console_lock()/console_unlock() are that
> > these functions serve a double purpose: exclusive printk() lock and a
> > console_drivers list lock.
> 
> Well, but changing how console locking works is a separate issue, isn't it?
> So please do it as a separate patch set if you want to try it.

absolutely agree, this is a separate thing.
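
just to illustrate the double duty (a simplified sketch; in reality the
list walk happens in call_console_drivers(), under the sem taken via
console_lock()/console_unlock()):

	console_lock();			/* role 1: exclusive printk/flush lock */
	for_each_console(con)		/* role 2: console_drivers list lock */
		con->write(con, text, len);
	console_unlock();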


> Actually I don't think changing the locking will be so easy.

again, agreed. splitting any lock is always tricky and risky,
especially when we talk about console_sem. it can easily introduce
new deadlocks, make fbcon unhappy, etc. etc.

register_console()
	write_lock_console_lock()
		if (error)
			printk()
				printk_lock()
					read_lock_console_lock() <- deadlock

and so on and so forth; I'm not very enthusiastic about changing this
at the moment.

	-ss

> console_lock/unlock is used e.g. for console blanking where you need to
> block any printing while you call ->unblank() for each console. That being
> said I don't think improvement is impossible, just given my experiences
> with console / printk code there will be surprises waiting for you :).
> 
> 								Honza
> -- 
> Jan Kara <jack@...e.com>
> SUSE Labs, CR
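
p.s. the blanking path Jan refers to is console_unblank(), roughly
(simplified; the oops path actually uses a trylock on console_sem):

	void console_unblank(void)
	{
		struct console *c;

		console_lock();		/* block all printing */
		for_each_console(c)
			if ((c->flags & CON_ENABLED) && c->unblank)
				c->unblank();
		console_unlock();	/* also flushes pending messages */
	}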
> 
