Message-ID: <20120308234016.GA925@redhat.com>
Date:	Thu, 8 Mar 2012 18:40:16 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Josh Hunt <joshhunt00@...il.com>
Cc:	Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
	tj@...nel.org
Subject: Re: multi-second application stall in open()

On Thu, Mar 08, 2012 at 04:22:31PM -0600, Josh Hunt wrote:

[..]
> A crude bisection seems to show that if I revert "blkio: Set
> must_dispatch only if we decided to not dispatch the request"
> (bf7919371025412978268efca4b09dd847acb395) I no longer see the stalls
> in 2.6.35. However this does not seem to solve the problem if I revert
> it in 2.6.38.

Strange. If this was the problem, reverting it should have fixed things in
2.6.38 also. BTW, was the blktrace you sent from the 2.6.38 kernel or the
2.6.35 one?

> 
> By setting slice_idle to 0, does this basically disable plugging?

It disables idling, not plugging. With slice_idle set to 0, CFQ no longer
waits (idles) on a queue in the hope of more requests arriving from the
same process; it moves on to the next queue instead. (slice_idle is the
CFQ tunable under /sys/block/<dev>/queue/iosched/.)

> Based on the blktrace info it seems that something is going wrong with
> plugging with my testcase. I'm just wondering why setting slice_idle
> to 0 seems to resolve my issue? Also, since we see unplugs in the
> blktrace how could the requests still not be getting sent to the disk?

An unplug will just try to kick the queue; that does not mean the request
will be dispatched. And that is exactly the question: why are we not
dispatching requests?
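
To make that concrete: in the 2.6.3x block layer, kicking the queue boils
down to roughly the following (a simplified sketch of __blk_run_queue()
from block/blk-core.c, details elided -- not the verbatim source):

void __blk_run_queue(struct request_queue *q)
{
	if (unlikely(blk_queue_stopped(q)))
		return;
	/*
	 * request_fn (e.g. scsi_request_fn) pulls requests off the
	 * queue via blk_fetch_request(), which in turn asks the
	 * elevator -- CFQ here -- to dispatch. If CFQ decides not to
	 * dispatch anything, the request stays queued and never
	 * reaches the disk, no matter how often we unplug.
	 */
	q->request_fn(q);
}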

I had another look at the traces, and I think it is not just async writes:
there was a sync write request queued, and we did not dispatch that one for
a long time either.

Here is where the request gets added:

  8,0    1    36921  5028.492019664   162  A  WS 63 + 8 <- (8,1) 0
  8,0    1    36922  5028.492021620   162  Q  WS 63 + 8 [sync_supers]
  8,0    1    36923  5028.492029721   162  G  WS 63 + 8 [sync_supers]
  8,0    1    36924  5028.492040617   162  I   W 63 + 8 (   10896) [sync_supers]
  8,0    1        0  5028.492044807     0  m   N cfq162 insert_request
  8,0    1        0  5028.492046763     0  m   N cfq162 add_to_rr

And only after a long time do we dispatch the request:

  8,0    0        0  5050.116841906     0  m   N cfq162 set_active wl_prio:0 wl_type:1
  8,0    0        0  5050.116844979     0  m   N cfq162 fifo=ffff8800e8787aa0
  8,0    0        0  5050.116846655     0  m   N cfq162 dispatch_insert
  8,0    0        0  5050.116849728     0  m   N cfq162 dispatched a request
  8,0    0        0  5050.116851683     0  m   N cfq162 activate rq, drv=1
  8,0    0    36518  5050.116853360   166  D   W 63 + 8 (21624812743) [kblockd/0]
  8,0    0    36519  5050.117236650  9671  C   W 63 + 8 (  383290) [0]
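
Note the per-request latency blktrace prints on the D line: 21624812743 ns
is ~21.6 seconds, which exactly matches the gap between the insert at
5028.492 and the dispatch at 5050.117.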

So most likely this is not a case of async requests being starved by sync
requests.

Are you using any of the blk cgroup stuff?

Can you put in some more trace messages to figure out what's happening?
I think you can try adding trace messages in the following functions:

__blk_run_queue()
cfq_select_queue()

and try to narrow down why CFQ refuses to dispatch the request when
this happens.
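
Something like this (untested, just to illustrate the idea; I am using
trace_printk(), whose output shows up in /sys/kernel/debug/tracing/trace,
and the field names should be double-checked against your tree):

/* block/cfq-iosched.c */
static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
{
	struct cfq_queue *cfqq = cfqd->active_queue;

	trace_printk("busy=%d rq_queued=%d active=%p\n",
		     cfqd->busy_queues, cfqd->rq_queued, cfqq);

	/* ... existing selection logic; a trace_printk() on each
	 * return path will show which branch refuses to pick a
	 * queue when the stall happens ... */
}

/* block/blk-core.c, at the top of __blk_run_queue() */
	trace_printk("stopped=%d\n", blk_queue_stopped(q));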

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
