Message-ID: <20160307154227.GA7065@htj.duckdns.org>
Date: Mon, 7 Mar 2016 10:42:27 -0500
From: Tejun Heo <tj@...nel.org>
To: Jan Kara <jack@...e.cz>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
akpm@...ux-foundation.org, jack@...e.com, pmladek@...e.com,
linux-kernel@...r.kernel.org, sergey.senozhatsky@...il.com
Subject: Re: [RFC][PATCH v2 1/2] printk: Make printk() completely async

Hello,

On Mon, Mar 07, 2016 at 09:22:30AM +0100, Jan Kara wrote:
> > > I don't know what MAYDAY is. I'm talking about a situation where
> > > the printing_work work item is not processed (i.e. printing_work_func()
> > > is not called) until the current work item calls schedule_timeout_*().

That was because the work item was running on a percpu workqueue
without being marked CPU_INTENSIVE, so concurrency management kept
other work items on that pool from starting until it yielded the CPU.
Using either an unbound workqueue or a CPU_INTENSIVE one should be
enough.
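
Something along these lines (illustrative sketch only, not the actual
patch) would take the printing work out of the concurrency-managed
percpu pools:

	/* a dedicated unbound workqueue so printing_work can't be
	 * starved by a CPU-bound work item on the same percpu pool */
	static struct workqueue_struct *printing_wq;
	static DECLARE_WORK(printing_work, printing_work_func);

	static int __init printing_wq_init(void)
	{
		/* WQ_CPU_INTENSIVE on a percpu workqueue would also do;
		 * it exempts the item from concurrency management */
		printing_wq = alloc_workqueue("printing", WQ_UNBOUND, 0);
		return printing_wq ? 0 : -ENOMEM;
	}

	/* and wherever the message is emitted: */
	queue_work(printing_wq, &printing_work);
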
> > > We had a problem that since the vmstat_work work item was using
> > > system_wq, it was not processed (i.e. vmstat_update() was not called)
> > > if a kworker was looping inside the memory allocator without calling
> > > schedule_timeout_*(), due to disk_events_workfn() doing a GFP_NOIO
> > > allocation.
> >
> > hm, just a note: none of the system-wide wqs seem to have a ->rescuer
> > thread (WQ_MEM_RECLAIM).

Because WQ_MEM_RECLAIM only guarantees a concurrency of 1, it doesn't
make sense to set it on a shared workqueue.  A dedicated workqueue
should be created for each domain that needs a forward-progress
guarantee.
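
For example (hypothetical "foo" subsystem, just to illustrate):

	/* a rescuer-backed workqueue dedicated to one reclaim domain;
	 * the rescuer guarantees that at least one work item can
	 * always make forward progress even when worker creation
	 * fails under memory pressure */
	foo_wq = alloc_workqueue("foo_reclaim", WQ_MEM_RECLAIM, 1);
	if (!foo_wq)
		return -ENOMEM;
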
> > hm. yes, it seems that it may take some time until the workqueue
> > code wakes up a ->rescuer thread. need to look more.
>
> Yes, it takes some time (0.1s or 2 jiffies) before the workqueue code
> gives up on creating a worker thread and wakes up the rescuer thread.
> However, I don't see that as a problem...

I don't think it matters.  At that point, the system should already be
thrashing heavily and everything is crawling anyway.  A couple jiffies'
delay isn't going to be noticeable.
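
For reference, the mayday timings in kernel/workqueue.c are roughly:

	MAYDAY_INITIAL_TIMEOUT	= HZ / 100 >= 2 ? HZ / 100 : 2,
				/* call for help after 10ms (min two ticks) */
	MAYDAY_INTERVAL		= HZ / 10,	/* and then every 100ms */
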
Thanks.
--
tejun