Date:	Sun, 21 Nov 2010 21:16:06 +0100
From:	Richard Kralovic <Richard.Kralovic@....fmph.uniba.sk>
To:	Jeff Moyer <jmoyer@...hat.com>
CC:	linux-kernel@...r.kernel.org, axboe@...nel.dk
Subject: Re: CFQ and dm-crypt

>> On 11/03/10 04:23, Jeff Moyer wrote:
>>>>> The CFQ io scheduler relies on task_struct current to determine which
>>>>> process issued the io request. On the other hand, some dm modules (such
>>>>> as dm-crypt) use separate threads for doing io. As CFQ sees only these
>>>>> threads, it provides very poor performance in such a case.
>>>>>
>>>>> IMHO the correct solution for this would be to store, for every io
>>>>> request, the process that initiated it (and preserve this information
>>>>> while the request is processed by device mapper). Would that be feasible?
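
(To make the attribution problem concrete, here is a rough sketch; it
glosses over the real CFQ code paths, and the helper name below is
made up purely for illustration:)

/* CFQ decides which per-process queue a request belongs to by
 * looking at the io_context of the task submitting it, i.e.
 * "current" at submission time. */
static struct io_context *submitter_io_context(void)
{
	return current->io_context;
}

/* With dm-crypt in the stack, the application's write is handed to a
 * kcryptd worker thread, which encrypts the data and only then
 * submits the actual io. At that point "current" is the worker, so
 * all io is charged to the worker's single io_context and CFQ can no
 * longer tell the originating processes (or their io priorities)
 * apart. */
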
>>> Sure.  Try the attached patch (still an rfc) and let us know how it
>>> goes.  In my environment, it sped up multiple concurrent buffered
>>> readers.  I wasn't able to do a full analysis via blktrace as 2.6.37-rc1
>>> seems to have broken blktrace support on my system.
>>
>> Thanks for the patch. Unfortunately, I got a kernel panic quite soon
>> after booting the patched kernel. I was not able to reproduce the
>> panic in a virtual machine, so I had to write the backtrace down by
>> hand; I apologize that it is incomplete:
> 
> Hi, Richard,
> 
> I have another patch for you to try.  This one holds up pretty well in
> my testing using a dm-crypt target.  Without the patch, I see no
> priority separation:

Hello,

I am sorry for the late reply (it took me some time to try the patch
on my main machine). Thank you for your work; unfortunately, I am
still getting a panic during early boot: the BUG_ON assertion in
cfq-iosched.c:cic_free_func fails, with the following call trace:

<IRQ>
cfq_free_io_context
put_io_context
cfq_put_request
elv_put_request
blk_finish_request
...

I applied your patch on top of Linus's git tree (commit
b86db4744230c94e480de56f1b7f31117edbf193); it applied cleanly.

I tried for a while to find the source of the problem, but without
success so far. However, I think that calling put_io_context only in
bio_endio may introduce memory leaks in certain situations (although
this is probably unrelated to the panic I am getting):

In blk-core.c:blk_rq_prep_clone, __bio_clone allocates the io_context,
but if subsequent allocations fail, bio_free in free_and_out does not
free it.

Similarly, in dm-crypt.c:crypt_alloc_buffer, bio_put does not
free the io_context.
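
Roughly, the pattern I have in mind is sketched below; it is
simplified, and the bi_io_context name and the failure condition are
just placeholders for whatever the patch actually uses:

	struct bio *clone;

	clone = bio_alloc_bioset(gfp_mask, nr_vecs, bs);
	if (!clone)
		goto free_and_out;

	/* with the patch, the clone now holds a reference to the
	 * submitter's io_context (call it clone->bi_io_context) */
	__bio_clone(clone, bio_src);

	if (a_later_allocation_fails)
		goto free_and_out;
	...

free_and_out:
	bio_free(clone, bs);	/* the bio is freed without ever going
				 * through bio_endio, so the io_context
				 * reference is never put and leaks;
				 * the bio_put in crypt_alloc_buffer's
				 * error path looks analogous to me */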

Or have I misunderstood something?

Greets
        Richard
