Message-ID: <48C540DC.3070306@weinigel.se>
Date:	Mon, 08 Sep 2008 17:12:28 +0200
From:	Christer Weinigel <christer@...nigel.se>
To:	Ben Dooks <ben-linux@...ff.org>
CC:	linux-kernel@...r.kernel.org
Subject: Re: [patch 2/3] s3cmci - call pio_tasklet from IRQ

Ben Dooks wrote:
> On Mon, Sep 08, 2008 at 02:48:50PM +0200, Christer Weinigel wrote:
>> Scheduling a tasklet to perform the PIO transfer introduces a bit of
>> extra processing; just call pio_tasklet directly from the interrupt
>> handler instead.  Writing up to 64 bytes to a FIFO probably uses less
>> CPU than scheduling a tasklet anyway.
> 
> Hmm, I'd be interested to find out how long these are taking... I might
> try to rig something up to measure the time taken on an SMDK.
> 
> If the FIFO reads/writes are taking a significant amount of time, then
> the PIO tasklet will at least improve the interrupt latencies involved,
> as IIRC we're currently running the main IRQ handler in IRQ_DISABLED
> mode to avoid any problems with re-entrancy... I'll check this and see
> what is going on.
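
For anyone following along, the change under discussion is roughly the
following (an illustrative sketch, not the literal patch; the handler
and field names are approximate):

static irqreturn_t s3cmci_irq(int irq, void *dev_id)
{
	struct s3cmci_host *host = dev_id;

	/* ... read and decode the interrupt status ... */

	if (host->pio_active)
		/* was: tasklet_schedule(&host->pio_tasklet); */
		pio_tasklet((unsigned long)host);

	return IRQ_HANDLED;
}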

OK, I just measured this on the 200 MHz S3C24A0.  With the SDIO bus 
running at 10 MHz, the longest time I saw the driver spend in the 
pio_read function was ~10 us.  I guess that means the hardware managed 
to refill the FIFO fast enough for the loop to make yet another pass.  
So with a faster SDIO clock the time spent in pio_read ought to go up, 
and for a long transfer it could grow without bound.
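
(For what it's worth, this kind of per-call timing can be captured with 
something like the ktime-based sketch below; the function name is 
approximate and the actual measurement may have been done differently.)

#include <linux/ktime.h>

static void pio_read(struct s3cmci_host *host)
{
	ktime_t start = ktime_get();

	/* ... drain the FIFO into the request buffer ... */

	pr_debug("pio_read took %lld us\n",
		 ktime_to_us(ktime_sub(ktime_get(), start)));
}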

I also ran the code with the tasklet_schedule() call still in place; the 
longest time spent in the loop was then ~13 us, and every now and then 
there was a ~10 ms gap where the clock stopped.

If we have working DMA, I think the PIO tasklet is unnecessary: we can 
do PIO for short transfers, which won't affect latency much, and use DMA 
for long transfers that would have hurt latency if done with PIO from 
interrupt context.
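
Concretely, I'm thinking of a cutoff in the data setup path along these 
lines (a rough sketch; the threshold, function name, and dodma flag are 
illustrative, not the actual driver code):

/* Hypothetical cutoff: anything that fits in one FIFO fill stays PIO. */
#define S3CMCI_PIO_MAX_BYTES	64

static void s3cmci_setup_data(struct s3cmci_host *host,
			      struct mmc_data *data)
{
	if (data->blocks * data->blksz <= S3CMCI_PIO_MAX_BYTES)
		host->dodma = 0;	/* short transfer: PIO from the IRQ */
	else
		host->dodma = 1;	/* long transfer: hand off to DMA */
}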

   /Christer

