Message-ID: <20131129215616.GA32242@rhlx01.hs-esslingen.de>
Date:	Fri, 29 Nov 2013 22:56:16 +0100
From:	Andreas Mohr <andi@...as.de>
To:	venkata koppula <vmrkoppula@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: Copying large files eats all of the RAM

Hi,

> My laptop has 4GB of RAM. Before I issue the command, around 1.5GB of
> memory is used; when I issue the cp command, around 3.7GB of memory
> is used, and the cp command takes a long time to copy.
> 
> I am not able to launch other applications (they take a long time),
> and even compiz freezes frequently. My laptop has Ubuntu installed on it.
> 
> Is this a problem with only my system, or is it a common problem with
> Linux?
> 
> Is there any way to stop a copy command from using all of my memory?

The purpose of a good operating system is *exactly* that: to keep *all*
RAM in use, *all* the time, to the highest degree possible.
Or would you rather have your power supply powering memory that just
sits there idle and useless?

It's probably a good idea to read up on the many sites which explain
important OS caching mechanisms.
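
For a quick illustration (my own example commands, not anything from
the original question): most of that "used" memory is reclaimable page
cache, which the kernel hands back as soon as applications need it:

# The "-/+ buffers/cache" line shows usage with the reclaimable
# cache subtracted out:
free -m

# The same numbers in more detail:
grep -E '^(MemTotal|MemFree|Buffers|Cached|Dirty|Writeback):' /proc/meminfo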

That said, there may of course be situations where too much
competition/contention for resources occurs, or where the calculation of
how much memory to keep free for immediate reuse is sub-optimal, leaving
too little memory available on short notice.
But fixing that should be a matter of optimizing the core kernel
algorithms even further than they already have been.

And it's also known that in certain situations (e.g. trying to push very
large amounts of data over a slow USB 1.1 connection to a USB stick),
Linux does (or did?) tend to have issues with that cached data piling up
in rather unhelpful ways before it gets flushed over the connection,
thereby degrading system performance (I don't know to what extent that
still applies to very recent Linux kernel versions).
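
If it's that writeback pileup you're hitting, one knob worth
experimenting with (a sketch only - the byte values below are arbitrary
examples, not tuned recommendations) is capping how much dirty data may
accumulate before the kernel starts flushing:

# Current thresholds (percentages of RAM by default; on a 4GB box
# that can mean hundreds of MB queued up for a slow device):
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Cap them in absolute bytes instead (setting the *_bytes variants
# overrides the *_ratio ones):
sudo sysctl -w vm.dirty_background_bytes=16777216   # start flushing at 16 MB
sudo sysctl -w vm.dirty_bytes=67108864              # throttle writers at 64 MB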

But in your case it might simply be a problem with your particular
hardware (IRQ issues, improperly implemented drivers, ...).
Some benchmarking might provide more details
(e.g. hdparm -tT, bonnie++ results, memory performance tests, etc.).
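
For example (the device name is a placeholder - substitute your actual
disk):

# Cached vs. raw sequential read throughput of the drive:
sudo hdparm -tT /dev/sda

# File-system level throughput; the file size should be roughly twice
# your RAM so the page cache can't mask the disk (-s takes MB):
bonnie++ -d /tmp -s 8192 -n 0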

cat /proc/slabinfo
ought to provide an initial overview of which kernel caches are holding
the largest amounts of memory.
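
To sort that output by size, something along these lines should do (an
illustrative one-liner; reading /proc/slabinfo typically needs root,
and slabtop(1) gives the same view interactively):

# Top slab caches by approximate total size (num_objs * objsize, bytes):
sudo awk 'NR > 2 { printf "%-28s %12d\n", $1, $3 * $4 }' /proc/slabinfo \
    | sort -k2 -rn | head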

HTH,

Andreas Mohr
