Date:	Mon, 20 Dec 2010 09:57:08 +0800
From:	Li Yu <raise.sail@...il.com>
To:	Jeff Moyer <jmoyer@...hat.com>
CC:	bcrl@...ck.org, viro@...iv.linux.org.uk, linux-aio@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] aio: using hash table for active requests



On 2010-12-16 06:02, Jeff Moyer wrote:
> Li Yu <raise.sail@...il.com> writes:
> 
>> This patch removes a TODO in fs/aio.c, namely to use a hash table for
>> active requests.
> 
> It's remained a TODO item for this long because it hasn't really
> mattered in the past.  Was there other motivation for this patch besides
> the TODO comment?
> 

:) Indeed, resolving the TODO was my main motivation.

>> I prefer to add an iocb at the tail of the collision chain, so I do not
>> use hlist here.
> 
> Why do you prefer to add to the tail?  It makes more sense to add
> collisions at the head, since if you're going to cancel everything, you
> have a better chance of cancelling stuff that was queued later than
> earlier (assuming you can cancel anything at all, which for most cases,
> you can't).  Also, you're halving the number of buckets, so you should
> have a good reason.
> 

I misunderstood the main purpose of this code ... it can only speed up the io_cancel() syscall a bit.
The "add to tail" design is also wrong here: I had assumed each hash bucket acts as a FIFO queue, so adding an element at the tail of the list would reduce dequeue time.
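
For illustration, head insertion with hlist would look roughly like this (untested sketch; the ki_hash field name and the aio_track_req() helper are just placeholders, and ctx->active_reqs_table becomes a struct hlist_head * here; callers are assumed to hold ctx->ctx_lock):

#include <linux/hash.h>
#include <linux/list.h>
#include <linux/slab.h>

static int ioctx_active_reqs_init(struct kioctx *ctx)
{
	int i;

	ctx->active_reqs_table = kmalloc(AIO_ACTREQ_BUCKETS *
					 sizeof(struct hlist_head),
					 GFP_KERNEL);
	if (!ctx->active_reqs_table)
		return -ENOMEM;
	for (i = 0; i < AIO_ACTREQ_BUCKETS; i++)
		INIT_HLIST_HEAD(&ctx->active_reqs_table[i]);
	return 0;
}

static void aio_track_req(struct kioctx *ctx, struct kiocb *iocb)
{
	unsigned long bucket = hash_ptr(iocb, AIO_ACTREQ_BUCKETS_SHIFT);

	/* head insertion: later submissions are found (and cancelled) first */
	hlist_add_head(&iocb->ki_hash, &ctx->active_reqs_table[bucket]);
}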

> What sort of testing did you do?
> 
> I've made some cursory comments below.  I'll more fully review this if
> you can provide a bit more justification and some proof of testing.  I'd
> be really surprised if this helped any real workloads, though.

Yes, I ran some tests on this patch; its performance impact seems negligible. I tested with fio, iodepth=65535, size=1024m.
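
The job file was roughly like this (reconstructed from the parameters above; the ioengine and filename shown are examples, not the exact values used):

# example fio job; ioengine/filename are placeholders
[aio-active-reqs]
ioengine=libaio
direct=1
rw=randread
size=1024m
iodepth=65535
filename=/tmp/aio-testfile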

Thanks for your review time!

Yu

> 
> Cheers,
> Jeff
> 
>> +static int ioctx_active_reqs_init(struct kioctx *ctx)
>> +{
>> +	int i;
>> +
>> +	ctx->active_reqs_table = kmalloc(AIO_ACTREQ_BUCKETS*sizeof(struct list_head), GFP_KERNEL);
> 
> Fit this into 80 columns, please.  You may just want to run checkpatch
> over the whole thing.
> 
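
For reference, wrapping the arguments would keep this within 80 columns (no functional change intended):

	ctx->active_reqs_table = kmalloc(AIO_ACTREQ_BUCKETS *
					 sizeof(struct list_head),
					 GFP_KERNEL);
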
>> +static inline void aio_cancel_one(struct kioctx *ctx, struct kiocb *iocb)
> 
> I see no good reason to inline this.
> 
>> @@ -465,10 +506,12 @@ static struct kiocb *__aio_get_req(struct kioctx *ctx)
>>  	/* Check if the completion queue has enough free space to
>>  	 * accept an event from this io.
>>  	 */
>> +	bucket = hash_long((unsigned long)tohash, AIO_ACTREQ_BUCKETS_SHIFT);
> 
> hash_ptr?
> 
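
For reference, hash_ptr() would avoid the explicit cast here (untested):

	bucket = hash_ptr(tohash, AIO_ACTREQ_BUCKETS_SHIFT);
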
>> -	struct list_head	active_reqs;	/* used for cancellation */
>> +	struct list_head*	active_reqs_table;	/* used for cancellation */
> 
> Coding Style
> 