Message-ID: <Pine.LNX.4.61.0805071820010.25395@chaos.analogic.com>
Date:	Wed, 7 May 2008 18:34:29 -0400
From:	"linux-os (Dick Johnson)" <linux-os@...logic.com>
To:	"Morten Welinder" <mwelinder@...il.com>
Cc:	"linux-kernel" <linux-kernel@...r.kernel.org>
Subject: Re: Deleting large files


On Wed, 7 May 2008, Morten Welinder wrote:

> Hi there,
>
> deleting large files, say on the order of 4.6GB, takes approximately forever.
> Why is that?  Well, it is because a lot of things need to take place to free
> the formerly used space, but my real question is "why does the unlink caller
> have to wait for it?"
>
> I.e., could unlink do the directory changes and then hand off the rest of the
> task to a kernel thread?
>
> Morten

Suppose you had an N GB file that just filled up the disk. You now
delete it, but get control back before it is really deleted. You
then start writing a new file that will eventually just fill up
the disk. Your task will get a media-full error long before the
media is really full, because the old file's data space hasn't
been freed yet. So, to "fix" this, you modify the file system to
defer your logical writes until all the previous space has been
freed (writes to the physical media are deferred anyway as long as
there is RAM available). The result is that your new data, which
may be precious (say, from a quasi-real-time source), will fail to
be written. To "fix" that, you queue everything. This will
eventually fail because the disk and RAM are both of finite size.
The size of the disk is known, but you don't know what will be
deleted before the queued writes have completed, so you really
don't know when to tell the writer that there is no more space
available.
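[Editorial illustration, not part of the original mail: a minimal C sketch
showing the synchronous accounting described above. It allocates a large
file, unlinks it, and compares the filesystem's free-space counters before
and after; on a typical local filesystem the free count has already gone
back up by the time unlink() returns. The file name and 1 GB size are
arbitrary choices for the demo.]

/* demo_unlink_space.c - free space before vs. after unlink() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/statvfs.h>

static unsigned long long free_bytes(const char *path)
{
	struct statvfs sv;
	if (statvfs(path, &sv) != 0) {
		perror("statvfs");
		exit(1);
	}
	return (unsigned long long)sv.f_bfree * sv.f_frsize;
}

int main(void)
{
	const char *name = "bigfile.tmp";	/* arbitrary demo name */
	int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0600);
	if (fd < 0) { perror("open"); return 1; }

	/* Actually allocate ~1 GB of blocks so the effect is visible. */
	int err = posix_fallocate(fd, 0, (off_t)1 << 30);
	if (err) { fprintf(stderr, "fallocate: %s\n", strerror(err)); return 1; }
	fsync(fd);
	close(fd);

	unsigned long long before = free_bytes(".");
	if (unlink(name) != 0) { perror("unlink"); return 1; }
	unsigned long long after = free_bytes(".");

	printf("free before unlink: %llu bytes\n", before);
	printf("free after  unlink: %llu bytes\n", after);
	return 0;
}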

That's why the task that deletes data can't get control back
until the data has actually been deleted. However, for user
applications, at the user's risk, one can do `rm filename &` and
let the shell do the waiting.
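[Editorial illustration, not part of the original mail: the same
workaround done from a program instead of the shell, assuming an ordinary
POSIX environment. A child process performs the (possibly slow) unlink so
the parent gets control back immediately; the space-accounting caveat
above still applies, since the blocks are not free until the child's
unlink() has finished. The helper name is hypothetical.]

/* unlink_bg.c - the `rm filename &` trick in C */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

static int unlink_in_background(const char *path)
{
	pid_t pid = fork();
	if (pid < 0)
		return -1;		/* fork failed */
	if (pid == 0) {			/* child: do the slow work */
		if (unlink(path) != 0)
			perror("unlink");
		_exit(0);
	}
	return 0;			/* parent: returns at once */
}

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	if (unlink_in_background(argv[1]) != 0) {
		perror("fork");
		return 1;
	}
	/* Parent can keep working here; the child is reaped by init
	 * (or a SIGCHLD handler) once the delete completes. */
	return 0;
}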


Cheers,
Dick Johnson
Penguin : Linux version 2.6.22.1 on an i686 machine (5588.29 BogoMips).
My book : http://www.AbominableFirebug.com/
