Please read this before reporting a bug:
https://wiki.archlinux.org/title/Bug_reporting_guidelines
Do NOT report bugs when a package is just outdated, or it is in the AUR. Use the 'flag out of date' link on the package page, or the Mailing List.
REPEAT: Do NOT report bugs for outdated packages!
FS#56681 - [procps-ng] top shows impossibly high RAM usage
Attached to Project:
Arch Linux
Opened by Richard Neumann (rne) - Monday, 11 December 2017, 15:01 GMT
Last edited by Gaetan Bisson (vesath) - Friday, 15 December 2017, 20:42 GMT
Details

Description:
Today I encountered that running a plain top as a standard user displayed an impossibly high RAM usage (see screenshot), while free showed the usual RAM usage:

free -m
              total        used        free      shared  buff/cache   available
Mem:           7812        2233        3351         867        2226        4413
Swap:             0           0           0

I restarted top several times and the issue remained. After a reboot of the machine, the problem is gone and I am unable to reproduce it. Before the issue occurred, I had high memory usage on the machine due to (another bug in?) gnome-builder, which I subsequently killed (SIGTERM).

Additional info:
* procps-ng 3.3.12-1
* No vimrc, aliases, or the like.

Steps to reproduce:
I was not able to reproduce this issue.
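For context on why top and free can disagree: both read /proc/meminfo and compute "used" memory from its fields, so a parsing or accounting bug in procps-ng can make one tool report impossible numbers while the kernel's figures are fine. The sketch below illustrates one common formula (used = MemTotal - MemFree - Buffers - Cached - SReclaimable); the sample values and helper names are illustrative, not taken from this report or from procps-ng's actual source.

```python
# Illustrative sketch of how procps-style tools derive "used" memory
# from /proc/meminfo. SAMPLE_MEMINFO holds made-up example values.
SAMPLE_MEMINFO = """\
MemTotal:        8000000 kB
MemFree:         3432000 kB
MemAvailable:    4519000 kB
Buffers:          200000 kB
Cached:          2080000 kB
Shmem:            888000 kB
SReclaimable:      80000 kB
"""

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of field -> value in kB."""
    fields = {}
    for line in text.splitlines():
        key, rest = line.split(":", 1)
        fields[key] = int(rest.strip().split()[0])
    return fields

def used_mib(fields):
    """'used' = total - free - buffers - cache (cache includes SReclaimable),
    converted from kB to MiB. One plausible formula, shown for illustration."""
    cache = fields["Cached"] + fields["SReclaimable"]
    used_kb = (fields["MemTotal"] - fields["MemFree"]
               - fields["Buffers"] - cache)
    return used_kb // 1024

fields = parse_meminfo(SAMPLE_MEMINFO)
print(used_mib(fields))  # a few thousand MiB for the sample values above
```

If a field is misread (for example, a wrong offset when the kernel adds a new /proc/meminfo line, as happened around the MemAvailable addition), the subtraction can silently go wrong and produce values far above physical RAM, which matches the symptom described here.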
This task depends upon
Closed by Gaetan Bisson (vesath)
Friday, 15 December 2017, 20:42 GMT
Reason for closing: Fixed
Additional comments about closing: procps-ng-3.3.12-2 in [testing]
Probably you want to report this upstream: https://gitlab.com/procps-ng/procps
(@rne, thanks for messaging me about this; next time you can simply file a reopen request with the link and ask for a backport.)