Why are programs allowed to overtake processor and memory to the point of stalling the system?
Forum rules
Do not post support questions here. Before you post read the forum rules. Topics in this forum are automatically closed 6 months after creation.
-
- Level 2
- Posts: 61
- Joined: Mon Nov 22, 2010 12:46 pm
Why are programs allowed to overtake processor and memory to the point of stalling the system?
I find it bewildering that stuff like that can actually happen. One would think that there would be some kind of a mechanism that would prevent programs from making the system unresponsive.
Last edited by LockBot on Wed Dec 28, 2022 7:16 am, edited 1 time in total.
Reason: Topic automatically closed 6 months after creation. New replies are no longer allowed.
- AZgl1800
- Level 20
- Posts: 11145
- Joined: Thu Dec 31, 2015 3:20 am
- Location: Oklahoma where the wind comes Sweeping down the Plains
- Contact:
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
System Monitor sure shows that in a hurry.
I have more trouble with the browser trying to load a bad page, than anything else.
cpu#3 goes to 100% and game over.
I hit ^Q and wait, and eventually it exits.
-
- Level 2
- Posts: 61
- Joined: Mon Nov 22, 2010 12:46 pm
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Yeah, it's mainly the browser for me too. Especially with YouTube lately.
Though I generally wonder why it's allowed to happen at all.
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
This has been the situation with operating systems since ... well, since the beginning. What usually happens is that programs (any program, not just browsers) request resources and the O/S delivers - that's its job. If swappiness is set too high and unallocated memory runs low, the O/S will start using the swap file/partition - if there is one - rather quickly.
Setting swappiness to a lower value (see this page, left column, item 1.6 for details) lets the computer hold off using swap a bit longer and, maybe, avoid it completely - it just depends on what the user is doing and how much memory is being requested relative to how much is available on his/her computer.
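For anyone wanting to try this, here is a minimal sketch of checking and tuning swappiness through the stock kernel interface (the value 10 is just an illustration, not a recommendation for every machine):

```shell
# Check the current swappiness (0-100; the Mint/Ubuntu default is 60)
cat /proc/sys/vm/swappiness

# Lower it for the running session (needs root, lost on reboot):
#   sudo sysctl vm.swappiness=10
# To make it permanent, add this line to /etc/sysctl.conf:
#   vm.swappiness=10
```

A lower value makes the kernel less eager to swap out application pages in favour of keeping file cache; it does not disable swap.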
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
I'm guessing it's to pressure you into upgrading so that hardware manufacturers make more money,
or to prevent you from running anything else so you have to stay on YouTube,
but it's most likely because occasional intense labor helps your system build muscles and get healthier. Sometimes your system tries to punch above its weight and gets KO'd by drug-abusing athletes like YouTube.
Jokes aside, I think the only way to prevent that is by not allowing the CPU (I rarely worry about memory) to ever run at 100%, and that would probably stop a lot of programs from even launching. Apart from known CPU-hungry software like games and web browsers, a program's CPU use peaks when it launches; anything it does afterwards won't be as bad.
And I don't really worry about memory: running out of memory is usually predictable (unless there's a bug), so it's easy to manage, and of course not as bad as when the CPU gets pissed.
-
- Level 2
- Posts: 61
- Joined: Mon Nov 22, 2010 12:46 pm
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
srq2625 wrote: ⤴Thu Sep 20, 2018 6:36 am
This has been the situation with operating systems since ... well, since the beginning. What usually happens is that programs (any program, not just browsers) request resources and the O/S delivers - that's its job. If swappiness is set too high and unallocated memory runs low, the O/S will start using the swap file/partition - if there is one - rather quickly.
I don't recall ever encountering this before 2016. I think it could have something to do with website size bloat, but also not really: back in 2009-2013 I had a computer with 758 MB of RAM instead of 12 GB, so even if websites became 15 times larger, that would be compensated by the RAM size - having GBs of free RAM to work with wasn't a thing for me back then.
Something else had to change.
srq2625 wrote: ⤴Thu Sep 20, 2018 6:36 am
Setting swappiness to a lower value (see this page, left column, item 1.6 for details) lets the computer hold off using swap a bit longer and, maybe, avoid it completely - it just depends on what the user is doing and how much memory is being requested relative to how much is available.
From my experience it happens when memory and the swap file run out and the computer just stalls.
Can't help the impression that nowadays one perhaps needs to have, like, a 30 GB swap or something similarly ridiculous.
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Morgan Krieg wrote: ⤴Wed Sep 19, 2018 8:47 pm
I find it bewildering that stuff like that can actually happen. One would think that there would be some kind of a mechanism that would prevent programs from making the system unresponsive.
You can prevent certain programs from using too many resources with cgroups, but it's not done automatically.
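As a sketch of what that looks like in practice: on a systemd-based distro like Mint you can start a program inside a transient cgroup scope with resource caps (the limits shown are arbitrary examples, not recommendations):

```shell
# Show which cgroup the current shell lives in
cat /proc/self/cgroup

# Start a program with hard caps via a transient scope
# (MemoryMax and CPUQuota are systemd resource-control properties):
#   systemd-run --user --scope -p MemoryMax=2G -p CPUQuota=50% firefox
```

With caps like those, exceeding the memory limit gets the scope OOM-killed instead of the whole system grinding into swap.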
- Portreve
- Level 13
- Posts: 4882
- Joined: Mon Apr 18, 2011 12:03 am
- Location: Within 20,004 km of YOU!
- Contact:
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
I think the point Morgan Krieg is trying to get at is: why would the system grant resources which would lead to its own detriment? Or, perhaps, why would the system be capable of granting such resources at all?
I guess I'm kind of curious, too, what the legitimate answer to this will be.
Flying this flag in support of freedom 🇺🇦
Recommended keyboard layout: English (intl., with AltGR dead keys)
Podcasts: Linux Unplugged, Destination Linux
Also check out Thor Hartmannsson's Linux Tips YouTube Channel
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
It's definitely an interesting question. I don't know of any OS that prevents this; individual programs may have controls that limit their own resource usage (like the Folding@Home client), but nothing system-wide. At least Linux/UNIX systems tend to recover better when this happens.
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Portreve wrote: ⤴Fri Oct 18, 2019 7:53 pm
I think the point Morgan Krieg is trying to get at is: why would the system grant resources which would lead to its own detriment? Or, perhaps, why would the system be capable of granting such resources at all?
I guess I'm kind of curious, too, what the legitimate answer to this will be.
Easy: efficiency, in view of badly programmed applications that request more resources than they need. If the kernel's memory controller didn't allow overcommitting of memory, you'd be able to run far fewer applications than you usually do, and some of them not at all. As with everything, you can disable that feature. There are also various ways to limit the CPU allocation of a process, but again the default is to let the user decide what they run and how, thank you very much.
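To illustrate the overcommit knob being described, here is the stock kernel interface (mode 2 is shown only as an example; disabling overcommit is not generally advisable on desktops):

```shell
# 0 = heuristic overcommit (the default), 1 = always allow, 2 = never overcommit
cat /proc/sys/vm/overcommit_memory

# Switching overcommit off entirely would look like:
#   sudo sysctl vm.overcommit_memory=2
#   sudo sysctl vm.overcommit_ratio=80   # % of RAM the kernel will promise
```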
- Spearmint2
- Level 16
- Posts: 6900
- Joined: Sat May 04, 2013 1:41 pm
- Location: Maryland, USA
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
YouTube means browser use. Firefox has a cache size you can limit. I would limit it to somewhere between 500 and 1000 MB, depending on how much RAM you have. If you have only 2 GB of RAM, don't set the Firefox cache size over 500 MB.
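For reference, the same limit can be set outside the UI via the about:config preferences. This sketch appends them to a profile's user.js — the profile path glob is an assumption (adjust it to your setup), and 512000 KiB is roughly the 500 MB suggested above:

```shell
# Pin Firefox's disk cache to ~500 MB instead of letting it auto-size
for profile in ~/.mozilla/firefox/*.default*; do
  cat >> "$profile/user.js" <<'EOF'
user_pref("browser.cache.disk.smart_size.enabled", false);
user_pref("browser.cache.disk.capacity", 512000); // value is in KiB
EOF
done
```

Firefox reads user.js at startup and applies the prefs over its defaults.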
All things go better with Mint. Mint julep, mint jelly, mint gum, candy mints, pillow mints, peppermint, chocolate mints, spearmint,....
-
- Level 6
- Posts: 1282
- Joined: Mon Nov 24, 2014 9:17 am
- Location: Chrząszczyżewoszyce, powiat Łękołody
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Morgan Krieg wrote: ⤴Wed Sep 19, 2018 8:47 pm
I find it bewildering that stuff like that can actually happen. One would think that there would be some kind of a mechanism that would prevent programs from making the system unresponsive.
Don't want an unresponsive system? The mechanism already exists: stop using 10-year-old junk and upgrade your system.
Windows assumes I'm stupid but Linux demands proof of it
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
I blame global warming. Or at least, related phenomena.
My 9-year-old Laptop (Win 7 Home Edition) was getting slower and slower, which is not entirely unexpected with Windows. I played with PuppyLinux for some years, and went for dual-boot LM 18.1 about 18 months back. Four incentives: Unix-type tools (which has been my work environment for 40 years), performance, reliability, and a deep loathing of M$.
Performance seems to have been dropping off in Linux though. I fiddled with inxi, and took a look at the CPU and GPU temperatures, and wondered what they ought to be. So I Googled, and then monitored temps every minute for the last week, and also enabled the sensors applet.
That's about when I discovered that the CPU down-rates itself when it gets beyond the safe zone. I never knew. Just opening Opera and reading my email bumped the temps from 48 to 70+, at which point the CPU shifts into neutral for all the good it does. I also read that thermal paste needs to be renewed every 5 years or so.
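A quick way to watch for that down-rating from a shell — a sketch, since lm-sensors and the cpufreq sysfs node aren't present on every machine (hence the commented lines):

```shell
# Current per-core clocks; a throttled CPU sits well below its rated speed
grep -E "processor|MHz" /proc/cpuinfo | head -n 8

# Temperatures, if the lm-sensors package is installed:
#   sensors
# The scaling driver's current frequency for cpu0, in kHz:
#   cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
```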
I also ran memtest86+ just to see whether I had errors in there, but it told me it got to 84 deg C, and I hit the power button. And then I prayed.
I spent this morning opening up the casing (for the first time) and clearing a pile of dust from the fan and the heat sink/exchanger. It is now fairly stable, between 48 and 56. The peaks go away in a much shorter time, too. Off now to find some thermal paste in this rural backwater, and finish the job.
However, performance is a whole lot brisker than it was yesterday. I only have 4 GB, but (AFAIK) I never used any swap or clagged up any Unix/Linux box. I'm interested in what the OP's workload mix is like, to be stalling the system.
Analogy: I drive a big Volvo estate, which I use for people, moving tools/furniture, towing a boat (not all at the same time). But I know when to hire a bus, a truck, or a towing rig. Capacity planning is part of requirement specifications.
I do remember that some OS (like RedHat) can reserve cores for specific processes to fast-track them. Also that schedulers typically penalise processes that use their entire CPU time slice by incrementing a temporary "nice" value, to put them to the back of the scheduler queue. (Most processes yield their slice by making a system call, usually for I/O). But then, if they are holding onto a lot of memory, extending their run-time just prolongs the agony.
I believe RedHat also has a task that detects "harmful" processes, and kills them off automatically. Linux has quotas per user that can limit resources, but that's no help on a single-user machine. You can always schedule tasks (with cron or at) for when you don't need hands-on.
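The "nice" mechanism and per-user limits mentioned above are directly available from the shell; a minimal sketch (the priority value 10 and the memory figure are arbitrary examples):

```shell
# Run a command at lower scheduling priority (nice 0 is normal, 19 is lowest)
nice -n 10 sh -c 'echo "running at lower priority"'

# Lower the priority of an already-running process by PID:
#   renice -n 10 -p <pid>
# Cap resources for this shell and its children, e.g. limit
# virtual memory to ~4 GB (value is in KiB):
#   ulimit -v 4194304
```

A niced process still gets the whole CPU when nothing else wants it; it only yields under contention, which is exactly what you want for background work.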
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Paul_Pedant wrote: ⤴Sat Oct 19, 2019 7:58 am
I fiddled with inxi, and took a look at the CPU and GPU temperatures, and wondered what they ought to be.
This, I think, is the most likely cause. As electrical components heat up they become less efficient and leak more current, which heats them up even more; eventually they have to throttle or shut themselves down to avoid a meltdown.
RAM usage seems to have little to do with it on my machine. The system monitor never shows RAM using more than 2 or 3 of the 8 gigs while Pale Moon maxes out my CPU.
There is definitely something going on with the software being inefficient, though, because Pale Moon locks up and becomes unresponsive, but if I render a high-res image with Blender it will max out the CPU for hours without locking up.
- Spearmint2
- Level 16
- Posts: 6900
- Joined: Sat May 04, 2013 1:41 pm
- Location: Maryland, USA
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Paul_Pedant wrote:
Off now to find some thermal paste in this rural backwater, and finish the job.
Desitin baby diaper creme can be used as thermal paste, if it's the 40% zinc oxide version. It's on my Sempron right now, and still OK 2 years after application. Basically the same as zinc-based thermal paste, only outdone by the silver pastes.
All things go better with Mint. Mint julep, mint jelly, mint gum, candy mints, pillow mints, peppermint, chocolate mints, spearmint,....
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
I was unaware of Pale Moon. First thing I see in Wikipedia is:
Always runs in single-process mode
I don't know if that means it has only one process but it multi-threads, and if it does I'm not clear whether threads of one process can be scheduled on multiple cores. I suspect they must, because threads would be pointless otherwise.
I do notice that my CPU-bound non-threading processes flip between my 2 cores, always totalling 100% CPU (out of the 200% available). Does your System Monitor show any deviation from a flat-line 100% (or does SysMon just not get enough cycles to update)?
Have to admit I'm a bit spoilt when it comes to machine power. In the 80's I was system architect for a Distributed Array Processor -- 4096 processors cross-linked, sharing an instruction stream. That opened up a lot of possibilities for image processing: our main sponsor's app was an airborne radar real-time analysis. We had an English Electric Canberra that flew the test rig.
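On the scheduling question: threads of one process can indeed run on different cores at the same time. Affinity is per-task and can be inspected or pinned with taskset from util-linux (the core numbers here are just examples):

```shell
# Pin a command (and all its threads) to core 0 only:
taskset -c 0 sh -c 'echo "pinned to core 0"'

# Show the affinity mask of an existing process:
#   taskset -p <pid>
```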
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Paul_Pedant wrote: ⤴Sat Oct 19, 2019 1:16 pm
I was unaware of Pale Moon. First thing I see in Wikipedia is:
Always runs in single-process mode
I don't know if that means it has only one process but it multi-threads, and if it does I'm not clear whether threads of one process can be scheduled on multiple cores. I suspect they must, because threads would be pointless otherwise.
It's not single-threaded. What it refers to is this, taking Chromium (the browser which pioneered the approach) as an example: every tab, and logically separate parts of the browser, run in separate processes for performance, security and stability reasons, but at the cost of higher memory usage. Mozilla eventually copied this for Firefox, but Pale Moon apparently did not follow, assuming that wiki page is (still) correct (I'm no Pale Moon user).
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
Spearmint2 wrote:
Desitin baby diaper creme can be used as thermal paste, if it's the 40% zinc oxide version.
I can't really wait for nine months ...
- Spearmint2
- Level 16
- Posts: 6900
- Joined: Sat May 04, 2013 1:41 pm
- Location: Maryland, USA
Re: Why are programs allowed to overtake processor and memory to the point of stalling the system?
It's good for chafing too.
All things go better with Mint. Mint julep, mint jelly, mint gum, candy mints, pillow mints, peppermint, chocolate mints, spearmint,....