Joe Rosato via plug on 28 May 2020 09:52:44 -0700
Re: [PLUG] memory usage in Linux
Yep, but it’s not always so historical 😉.
I’ve always thought the way AIX manages memory was better than Linux’s in a number of ways, mostly around the amount of flexibility and tooling you get (albeit proprietary).
Here are some great (historical) articles on AIX’s page replacement process:
https://www.ibm.com/developerworks/aix/library/au-vmm/index.html
https://www.ibm.com/support/pages/aix-virtual-memory-manager-vmm
Thanks,
Andy Wojnarek | Principal Solutions Architect
andy.wojnarek@theatsgroup.com |717.856.6901
Innovative IT consulting & modern infrastructure solutions
From: plug <plug-bounces@lists.phillylinux.org> on behalf of Joe Rosato via plug <plug@lists.phillylinux.org>
Reply-To: Joe Rosato <rosatoj@gmail.com>
Date: Thursday, May 21, 2020 at 8:51 PM
To: Dustin Black <dustin@redhat.com>
Cc: Philadelphia Linux User's Group Discussion List <plug@lists.phillylinux.org>, Rich Freeman <r-plug@thefreemanclan.net>
Subject: Re: [PLUG] memory usage in Linux
To put things into perspective: I used to admin AIX nodes, and memory management there was a LOT different. You could set a value (a percentage) for what were called file and work pages. File pages were just files loaded into memory, and dropping them had no performance penalty. Work pages were just that, work, and did not exist anywhere on disk, so if they had to be flushed they went to paging space, which is very expensive. You had to balance the box so that you had enough room for work pages, to stop flushing to disk and slowing things down. The tradition was to leave a buffer in the middle: get your work pages almost perfectly covered, then fill the remainder with file pages.
Just some old historical info..
On Thu, May 21, 2020 at 5:31 PM Dustin Black via plug <plug@lists.phillylinux.org> wrote:
On Wed, May 20, 2020 at 1:34 PM Rich Freeman via plug <plug@lists.phillylinux.org> wrote:
On Tue, May 19, 2020 at 2:35 PM Eric Lucas via plug
<plug@lists.phillylinux.org> wrote:
>
> I'm running Kubuntu 18.04 [1] on a Dell Laptop with a 2-core i7 [2] and 16 GB of RAM (the max apparently.)
> I tend to leave lots of windows open on two or three of my four virtual desktops.
> Sometimes things get slow (like participating in a zoom chat with a few dozen fellow homebrewers last night.)
> I installed 'cockpit' to see what's up. All is okay except for RAM. It shows near 16GB of RAM used.
> I seem to recall (from years ago) that Linux "hoovers up" all the RAM and then parcels it out to applications as needed.
> If so, what's the point of telling me how much RAM I have used if it's always 99+%?
> How do others monitor and/or manage RAM usage in Linux?
>
Here is a typical output of the "free" command:
free
total used free shared buff/cache available
Mem: 16373372 8077212 2613968 2766012 5682192 5184788
Swap: 0 0 0
Total is just the amount of RAM in the system - 16GB in this case
(same as with you).
Used is total - free - buffers - cache.
Available is what is available for use, taking into account reclaiming
buffers and cache. It isn't the same as total - used because not all
of this memory can be reclaimed. This is what I would probably focus
on when monitoring.
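As a quick sanity check, here is a short Python sketch using the numbers from the example `free` output above (the values are from that one run; yours will differ):

```python
# Values (in KiB) taken from the example `free` output above.
total = 16373372
free = 2613968
buff_cache = 5682192
available = 5184788

# "used" is what's left after subtracting free memory and buffers/cache.
used = total - free - buff_cache
print(used)  # 8077212, matching the "used" column reported by free

# "available" is NOT simply total - used (i.e. free + buff/cache),
# because not all of the buffer/cache memory can be reclaimed.
print(total - used)  # 8296160 (free + buff/cache)
print(available)     # 5184788 (what the kernel estimates is reclaimable)
```

The gap between those last two numbers is exactly the point: about 3 GB of that cache can't actually be handed back to applications on demand.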
Free is memory that is completely unused. As you say, the kernel tries
to minimize this, but if a big process dies there will be free RAM
until it gets filled with cache, etc., which won't happen until there
is I/O.
I would just focus on available memory from a memory management standpoint.
I'm sure there are GUI applications that can monitor this as well.
MemAvailable also shows up in /proc/meminfo.
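If you want to script against that value rather than parse `free` output, a minimal Python sketch (the parser is mine, not anything standard; the sample text stands in for the real file so the example is self-contained):

```python
def meminfo_kib(text):
    """Parse /proc/meminfo-style text into a dict of {field: value in KiB}."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[key] = int(parts[0])  # values are reported in kB
    return fields

# On a Linux box you would read the real file:
#   with open("/proc/meminfo") as f:
#       info = meminfo_kib(f.read())
sample = """\
MemTotal:       16373372 kB
MemFree:         2613968 kB
MemAvailable:    5184788 kB
"""
info = meminfo_kib(sample)
print(info["MemAvailable"])  # 5184788
```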
Monitoring swap use is also important, but you really want to see how
much is swapping in and out - a slow memory leak might lead to a bunch
of swap getting used, which isn't ideal but probably won't impact
actual performance.
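The easiest way to watch that rate is `vmstat 1` and its si/so columns. You can also sketch it yourself from the kernel's cumulative pswpin/pswpout counters in /proc/vmstat - this is a rough, Linux-only Python sketch, not a polished tool:

```python
import time

def swap_counters(text):
    """Extract cumulative pages-swapped-in/out from /proc/vmstat-style text."""
    counters = dict(line.split() for line in text.splitlines() if line.strip())
    return int(counters["pswpin"]), int(counters["pswpout"])

def swap_rate(interval=1.0):
    """Pages swapped in/out per second over `interval` seconds (Linux only)."""
    def read():
        with open("/proc/vmstat") as f:
            return swap_counters(f.read())
    in0, out0 = read()
    time.sleep(interval)
    in1, out1 = read()
    return (in1 - in0) / interval, (out1 - out0) / interval
```

A sustained nonzero rate here is what actually hurts; a big "used" number in the swap line of `free` by itself may just be that stale leaked memory sitting parked on disk.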
--
Rich
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug
Seconding Rich, here.
Memory is fast and disk I/O is slow, so Linux by default tries to cache as many disk pages as possible into memory. The memory consumed by buffers/cache is generally available to be reclaimed for use by applications as it is demanded, so in the output of `free` pay attention primarily to the "available" value.
All that said, you could very much have enough open that you're getting some competition for RAM that's slowing things down. I have similar habits for leaving a lot of $stuff open all the time. One thing that has helped me out a ton is to use some tools to manage my browser's (Chrome, in my case) resource usage. There is a very useful extension available for Chrome called The Great Discarder that automatically kills Chrome tab processes that have been unused for a period of time. As soon as you click that tab again, it is reloaded. There may be something similar for Firefox, but last I remember FF used threads for tabs, not processes, so that could make things more complicated.
HTH!
Dustin Black
Red Hat Performance & Scale Engineering