how do I change sudo timeout, and what to? and why?

how do I change sudo timeout, and what to? and why?

Post by linx255 »

Hi,

I'm using Mint 16 Mate 64-bit and having trouble changing the sudo timeout. Two issues; I would like to:

1) be able to change the timeout to either HH:MM:SS or NEVER, and I would like to be able to apply these changes to either or both the current terminal & the current Mint session.

2) determine how long of a timeout is really appropriate for me, or at least what factors should be considered when determining appropriate timeouts.

A while back I tried changing /etc/sudoers as follows:

Code: Select all

Defaults	env_reset,timestamp_timeout=60
which does nothing.

For example, my terminal session doesn't remember the sudo password I entered even 5 seconds earlier. If I do:

Code: Select all

sudo cat /etc/sudoers
Then I immediately do:

Code: Select all

cat /etc/sudoers
I get:
cat: /etc/sudoers: Permission denied
Another example is how I have to enter the password to open the Software Manager EVERY time, even if I had just accessed it 5 seconds earlier.

So first, I know I don't want to have to enter the sudo password this often, so I need to know what configuration file is responsible for these settings and how to change them.

Second, it's difficult for me to ascertain how long a timeout is really appropriate, because I don't understand how long is too long in terms of security. If I'm going to leave my workstation I'll just lock the screen, so I don't need to worry about unauthorized passersby. Is there a reason why I shouldn't just configure it to never expire for the entire session?

I think if the concern is malware exploiting root, it wouldn't matter whether the timeout is instant or 24 hours, because it's easy to exploit even a nanosecond of time where root access is granted, right? What is the idea behind this protection and why is it relevant? I mean, if the average Linux user's OS is so insecure that we have to worry about malware doing stuff because we are granted root access for, say, over an hour, it would seem the focus of system security should be on preventing that malware from getting on the system to begin with, rather than bothering with a workaround like timing out sudo, which reduces productivity, especially for longer passwords, and even more so for certain developers. If malware is already on the system, that means root has likely already been compromised, right? And if that's the case, it's too late to time out sudo. Or in some situation someone might even be able to run:

Code: Select all

gksu --description 'Update Manager' --print-pass > file
without my knowledge, fooling me into thinking it's a legitimate password request, when really it just saves my password in plain text to a file. Or maybe I'm not understanding what the timeout is really for.

Please advise.

Thanks :)

Re: how do I change sudo timeout, and what to? and why?

Post by xenopeek »

Did you relogin / reboot after changing the sudoers file? Configuration changes aren't reread just by saving.
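A quick way to re-test without a full relogin, assuming a standard sudo: clear the cached timestamp for the current terminal and try again.

Code: Select all

sudo -k      # invalidate the cached credentials for this terminal
sudo true    # prompts for the password again, starting a fresh timeout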

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

Yes, I rebooted. It has no effect on either the terminal window or the session. I get the feeling /etc/sudoers doesn't really control these two settings, or that this variable is inactive for some reason. Or maybe another configuration file somewhere else is interfering?
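A couple of checks I could try (hedged; these flags should exist on a standard sudo): visudo -c validates the syntax of /etc/sudoers, and running sudo -V as root is supposed to print the effective timeout.

Code: Select all

sudo visudo -c                     # syntax-check /etc/sudoers
sudo sudo -V | grep -i timestamp   # as root, -V prints the configured timeout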

Re: how do I change sudo timeout, and what to? and why?

Post by bobafetthotmail »

linx255 wrote:For example, my terminal session doesn't remember the sudo password I entered even 5 seconds earlier. If I do:

Code: Select all

sudo cat /etc/sudoers
Then I immediately do:

Code: Select all

cat /etc/sudoers
I get:
cat: /etc/sudoers: Permission denied
You still need to prefix commands with sudo to run them as root. The password is remembered and not asked again for a long while (I think it's 10 minutes or so), but if you don't keep writing sudo before anything that needs to run as root, the system has no idea that you want to run it as root.
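A minimal illustration of the difference:

Code: Select all

sudo cat /etc/sudoers   # first run: asks for your password
sudo cat /etc/sudoers   # run again within the timeout: no prompt
cat /etc/sudoers        # no sudo prefix: Permission denied, as expected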
linx255 wrote:Another example is how I have to enter the password to open the Software Manager EVERY time, even if I had just accessed it 5 seconds earlier.
Different thing. Stuff that uses the popup window is using gksudo or gksu, which remembers the password only as long as the program launched by it is active.
Is there a reason why I shouldn't just configure it to never expire for the entire session?
Anything asking for root access via sudo after you entered the password for something else (in the same terminal) gets it without the slightest prompt. It's not very safe.
I think if the concern is malware exploiting root, it wouldn't matter whether the timeout is instant or 24 hours, because it's easy to exploit even a nanosecond of time where root access is granted, right?
Sudo and gksudo grant root access to a specific program/script only; it's not system-wide access.

When run inside a terminal it's relatively unsafe, because anything run in the terminal with sudo after the first sudo and password request gets root access.
Or in some situation someone might even be able to run:
Which is why you should NOT type passwords when prompted for stuff you know you never asked for. :lol: Tricking the user has always been the easiest route, regardless of OS. Any decent hacker can whip up a convincing application that mimics whatever password-asking system is around on any OS; not new, not gonna change.
Android manages to do better than most so far because by default it never gives the user root access, period. But this only keeps the system itself safe, not the user's data.
I mean, if the average Linux user's OS is so insecure
Safety comes from layers of defenses. More layers, more safety.
Please advise.
If you need to run specific programs very frequently and have a decent grasp of what you are doing, it may be better to add them to the list of stuff that gets root access by default instead of mucking around with timeouts.
This tutorial (read and use with caution) http://askubuntu.com/questions/159007/h ... a-password

Or simply ask your favorite terminal program to run as a root terminal as explained here (again read and use with caution) http://ubuntuforums.org/showthread.php?t=2100406
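As a rough sketch of what that first tutorial boils down to (the username and command here are placeholders; always add such lines with visudo, never by editing /etc/sudoers in a plain editor):

Code: Select all

# run 'sudo visudo' and add a line like:
youruser ALL=(ALL) NOPASSWD: /usr/bin/apt-get update

This lets 'youruser' run exactly that command with no password prompt; everything else still asks.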

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

Please bear with my long reply. This topic is really driving me nuts because it's eluding my understanding! :? I will digress a bit but keep it relevant and bring it back. And I wasn't sure if I should keep this going here or post in general discussions, but...

I can't say I have a grasp of what I'm doing but I definitely would like to. I would like to know why exactly I should have to think twice about entering my password or granting system-wide root access. Forgive my ignorance if I just don't understand Linux; I've been using and researching Linux for years and I'm still struggling to grasp its most basic and essential features. Maybe you can point out some holes in my knowledge / logic.
Anything asking for root access via sudo after you entered the password for something else (in the same terminal) gets it without the slightest prompt. It's not very safe.
I suppose I could add programs to the sudo list, but sometimes I need to actually enable root. Can you give me a few examples of the biggest threats of extending root access? If I alone have physical access to the machine, I'm firewalled, I'm not installing untrusted software, and I'm not messing with system files in ways I'm not qualified to, then why should I worry? I'm trying to understand this with real clarity and confidence.
Sudo and gksudo grant root access to a specific program/script only; it's not system-wide access... Safety comes from layers of defenses. More layers, more safety.
What are the reasons for having a sudo timeout specifically? Attacks and accidental damage? Whatever the case, I don't see the point. Having to type the password all the time isn't productive, and I don't see how a timeout of 10 minutes is any safer than a timeout of a picosecond.

One link I found states:
If you disable the sudo password for your account, you will seriously compromise the security of your computer. Anyone sitting at your unattended, logged in account will have complete Root access, and remote exploits become much easier for malicious crackers.
If it is purely a measure to guard against attack, then it seems there should be a way to use root without compromising security or inconveniencing the administrator / user. If the attacks we're worried about are physical, then why not just password-protect the screen when leaving the system unattended, or otherwise only leave it unlocked around trustworthy individuals? If it's about network attacks, why should any networked computer allow a signal from the outside world to make changes to the system just because root is enabled? If the answer to the latter is to maintain the functionality of remote login, then disabling remote login and keeping my firewall up should protect me from attack over the network while root is enabled, right? If not, then it seems there isn't a very strong boundary between my machine and the outside world, and we need to address and remedy these attack vectors point-by-point so we can enable root without worry! If enabling root risks attack, then I can't say I feel very comfortable using it even for a picosecond, let alone 10 minutes or 60 minutes.

It's just my opinion, but if it is purely a measure to guard against accidental destruction of the system, then I feel a bit overprotected, because I'm actually willing to take the risk of damage and learn from any mistakes, even catastrophic ones. ( Granted, I understand a complex Linux system can be brutal and unforgiving to an inexperienced user, but what better way to learn? ) I found a very simple and effective solution against accidental destruction: if you just disk-dump the OS partition in working state to an image file on a backup drive, then within 20-60 minutes you can restore a system corrupted for any reason under the sun back to the exact working state it was in before the mishap, with just one 'dd' command run from the LiveCD. No big deal at all! Or, second best, you can make backups of your system files in working state and restore them via LiveCD if you don't want to image the entire disk / partition.
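For reference, the kind of command I mean-- the device name and paths are placeholders; double-check yours with something like lsblk before running anything:

Code: Select all

# back up the OS partition to an image file ( run from a LiveCD )
sudo dd if=/dev/sda1 of=/media/backup/root-partition.img bs=4M
# restore later by swapping if= and of=
sudo dd if=/media/backup/root-partition.img of=/dev/sda1 bs=4M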
Which is why you should NOT type passwords when prompted for stuff you know you never asked for.
Well, I might have asked for it-- hypothetically, if an attacker man-in-the-middles my system update then I suppose the first thing they would do would be to gain root access. A compromised system update might, I don't know, replace an application or modify a launcher to open a fake password / password-skimming window. I may then open an application that I always use, expecting a password prompt, but end up compromising root unknowingly. I'm just not clear on where the real security boundaries lie and would really like to know; I mean, I'm sure there are dozens of books and thousands of articles out there, and I have read several, but nothing so far that answers my questions. As going through everything out there would take a lifetime, I'm hoping someone can give a quick yet technical explanation, if you will.
Android manages to do better than most so far because by default it never gives the user root access, period. But this only keeps the system itself safe, not the user's data.
Yeah, but a Linux junkie needs the functionality of root access. :) I suppose there would be an advantage for most users in not giving root access by default, and perhaps warning them before enabling it. As a semi-intermediate Linux user and IT systems developer this would be very annoying for me, however.

I realize every user has different needs / preferences; some or even most may not need root access. So apparently functionality competes with security and there are trade-offs. ( I.e. more functionality, with its conveniences & headaches, at the expense of security -or- less functionality and greater security, with different conveniences & headaches. )

Regarding an appropriate timeout setting...

If I, the superuser, can't use root access for longer than the default timeout while feeling safe beyond a reasonable doubt, then I feel it is no safer to use it for less than 10 minutes either, and this should be considered a security bug. As a superuser, software developer, and IT worker, I feel a great responsibility to myself and others to know ( within my means ) exactly which systems and functionalities can be safely used to the fullest extent, what the limitations of systems are, and where the real ownership of a system lies in terms of boundaries ( i.e. is it truly "my" machine, or can others easily gain access to or control over it under certain conditions, and how? ).

Of course, I don't want to take responsibility for using functionalities I don't fully understand or that are considered unsafe even by experts, unless the primary risk is my own unintentional destruction of the system files-- in which case I'm willing to take the risk and learn from the failure, because I learn so much about the system this way. However, the risk of attack seems an unacceptable consequence of merely enabling root, and as such makes me question whether enabling root even for a picosecond is safe.

If I lock out root access 10 minutes after having used it, then why was I safe during those 10 minutes but not after? If it's really just a matter of reducing the amount of time the access is granted, then that doesn't seem like much protection to me; 10 minutes is a good amount of time for someone to do something malicious, right? Yes, sure, a lot more can be done with unlimited time, but it seems like an attacker, especially one using a script or application, could still do crazy damage in 10 minutes or even a picosecond. How exactly did we come up with 10 minutes as a default? It seems arbitrary / meaningless. Is it based on the average user's statistical need for root access?

If I'm worried about someone brute-forcing root, then I should use a stronger password, or disable remote connections, right? If we need the functionality of remote control, then end-to-end encryption should be applied to the connection, right? Now, I'm sure it's even possible that an accidental change to the system could create an exploit, leading to damage such as data theft or vandalism-- but even so it begs the question: why doesn't the system automatically detect a catastrophic change before it occurs, freeze the screen, and ask the user what to do? That way I could use root access all I wanted and not worry about corrupting my system or being exploited. Why don't we have any such protection if enabling root is such a hazard?

A metaphor:

If I have to lock the door to my house just to leave for a few minutes to check my mailbox at the street, only 20 feet away, then there must be something extremely unsafe about my neighborhood, and I should probably never go outside and keep the door locked at all times, right? Otherwise I might be really OCD / paranoid. :) ...

If I use a built-in and supported feature of the OS that's believed to be susceptible to attack by unauthorized users, then the administrator is not the true owner of the totality of the system; at least not with satisfactory confidence. If I must relinquish use of that function to feel safe, then I am not maximizing the OS's full potential. If I must knowingly risk ownership of my OS in order to use a functionality, then that should be considered a bug, in my opinion. Use of a function should not weaken security unless I am intentionally unguarding the system boundary, which should not happen just by enabling root. I understand that in Linux there must exist a balance between shielding noobs from hazards and maximizing the availability of powerful computing features to advanced users ( while also shielding advanced users from hazards ), and the Linux skill and experience levels of users are diverse, so each distro must decide its philosophy. I opine that distros like Mint should not be so over-protective; instead, when the system detects a breakage about to occur, it could warn and prompt, and we could enable root without worry.

Regarding enabling root, one link I found states:
If that user's account is compromised by an attacker, the attacker can also gain root privileges the next time the user does so. The user account is the weak link in this chain, and so must be protected with the same care as Root.
But if something already got on the system to take advantage of root, then there must have been a security failure elsewhere that needs to be addressed and remedied. This should be a separate issue from enabling root, right?

If any substantial safety concern intimidates me out of using a given functionality to its fullest, then my machine becomes a sort of bully that owns me, and enables attackers and self-sabotage should I rebel. Also, if I refrain from using this stress-inducing functionality then I have relinquished ownership of the totality of the machine; otherwise I am forced to accept that my machine cannot be used to its fullest without anxiety. In either case, I find these options, well, unempowering. Lol-- no offense, Mint is great, wonderful, plenty fun, and the best OS I've used yet!

However, I want to be able to say,
This is my machine; I fully own, access, and control its totality without exception; I rest assured all its functions are safe to use beyond the reasonable doubts of the developers and core users, and I enjoy system protection features that warn me before any action performed as root may cause system breakage.
Granted, the reality is that everyone who made my hardware and software has at least part ownership of "my" machine, and the real ownership stake I hold corresponds to the degree that the trust I place in its developers is respected / appropriately placed, and to my acceptance of its known weaknesses. The majority of consumers think of their computer or phone as "theirs", and [ assume they ] / [ expect to ] have the same level of confidence that they will maintain ownership and control over it as they would with non-electronic household items such as furniture. You can keep your furniture in your house, move it where you want to, and only you control it. It's unlikely an attacker will burst out from the seat cushion and run off with your recliner. ( I'm sure some day recliners will have motorized wheels and built-in computers like Raspberry Pis, and remote attackers will be able to hack them, causing them to roll out the door, down the street, and catch fire. I'd hate to think what smart motorized wheelchairs, microwave ovens, range stovetops, heaters, and cars of the future will be capable of! In fact, it's already been demonstrated that a hacker can remotely disable the brakes and unlock the doors of cars. We think we own our cars, but the truth is attackers may own them. )

My point here is: I observe a substantial discrepancy between the way most users perceive or quantify the ownership stake they hold in a system and what actually is. A computer system should be designed to meet the same expectations of ownership confidence that we have of any consumer good. If at any time you don't control it, you don't really own it to begin with. If I have to worry my recliner will spontaneously combust just for extending the leg rest, then I don't own my recliner; it owns me. Even if it is advertised as subject to auto-ignition when using the leg rest, as a consumer I am stuck choosing between having no recliner and having a not-so-relaxing recliner. Lol. But a serious example is how DRM ( digital rights management ) can be built into an isolated region of an ARM processor so that you don't really own or control your computer, the data in it, or what it does with it.

Despite all the security hazards coming to light these days, it seems like much attention is devoted to advertising advanced features of Linux distros and not as much to maximizing the safety and potential power of the basic, core Linux design / functions. Don't get me wrong, I think Mint does an amazing job of striking a balance for technical and non-technical users, keeping things flexible, and so forth, but I must question why there isn't at least a system integrity protocol that checks key files upon boot and restores any detected as corrupt to their original state ( with the user's permission, of course )? I have on occasion broken my system to the point of requiring re-installation, due to improperly using root access, and no such system integrity checker ever kicked in to save the day.

At any rate, all this to inquire: 1) I don't see the point in sudo timeouts; can't we do better? And 2) I would still like to be able to resolve the problem of sudoers ignoring 'timestamp_timeout=60'. Are there any explanations for why this variable is ignored even after rebooting?

Thanks for reading, and btw, I don't mean to be arrogant with my digressions, just seriously critical. :wink:

Re: how do I change sudo timeout, and what to? and why?

Post by bobafetthotmail »

Please note that I'm not a certified security expert, although I do have a lot of experience cleaning people's PCs of trashware and malware (mostly on Windows, but the same reasoning applies to Linux).

Also, to a long post I answer with an even longer one, lol.
linx255 wrote:I suppose I could add programs to the sudo list, but sometimes I need to actually enable root. Can you give me a few examples of the biggest threats of extending root access?
The malware category called "rootkits". Linux is not immune to malware; it is just immune to (most) Windows malware, which is NOT the same thing.

By the way Linux is made, unless something has root access it's not going to be able to do much (it can still nuke the PC if done cleverly, but that's the least sophisticated and least dangerous category of malware around). On Windows it's amazing what malware can do even without admin/root privileges (or how easy it is for them to get admin privileges if there isn't a decent antivirus).
For what each and every critter of the rootkit category does, you can google "linux rootkits" and you'll find security blogs and reports made by actual experts that talk more in depth about each. Google "botnet" to see some practical uses a pro has for this "taking control" of your PC. Just trashing other people's PCs is the sign of a kiddy.

Then, prompts also make sure that you know a specific program/command is operating on system files or doing something serious, so if something breaks shortly after, you can track down the responsible party more easily.

If you look at sudo's manual by writing

Code: Select all

man sudo
you also see that it is supposed to log all its users and/or the commands given, which is useful for forensics to find out when stuff happened, and it can be set up to instantly notify someone (usually the system admin) of any failed attempt to log in.
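On Mint/Ubuntu those entries typically land in /var/log/auth.log (the path may vary by distro), so something like this shows recent sudo activity:

Code: Select all

grep sudo /var/log/auth.log | tail -n 20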
What are the reasons for having a sudo timeout specifically?
Convenience. You can leave the terminal window open for days (showing other stuff and output) and it won't be a security risk. I personally use Guake a lot (a drop-down terminal I call with a hotkey, totally recommended), and not having to close the terminal or give another command to end the sudo session is convenient.

Please note that the timeout is "timeout between commands". If you send another

Code: Select all

sudo whateverterminalcommand
before the timer of 10 minutes or whatever expires, it is reset and starts counting down again from 10 minutes or whatever.

How often are you looking at a terminal panel for more than 10 minutes while not doing anything?

Btw, whatever was granted access by sudo or gksudo retains root access until the program has finished its current job, so if you were using Synaptic to pull down 5 GB of packages from the repos, it will not ask for the password again even if the job takes well over 10 minutes.
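You can also drive that timer by hand, assuming a standard sudo:

Code: Select all

sudo -v   # refresh the cached credentials, resetting the countdown
sudo -k   # expire them immediately; the next sudo asks for the password again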
why not just password-protect the screen when leaving the system unattended
I'm throwing around some security jargon I barely understand to show off how cool I am... 8) a much lower attack surface (= code that can contain bugs that can be exploited).
Lock screens in modern distros use rather large libraries and lots of code just to look cooler and more modern and to integrate with the desktop's look. This makes them much more prone to having exploitable bugs, while a simple sudo is tiny and therefore MUCH easier to keep bug-free.

There have been bugs in some screen lockers in the past that allowed an attacker to crash them with a combination of keys and mouse input. I know first-hand of screen lockers that crash on their own or when interacting with something else.
While this is admittedly a bit weird and not really commonplace, note that an attacker can use a device like, say, an Android phone with an OTG USB port disguised as a keyboard/mouse that can send thousands of keypresses/inputs per second without breaking a sweat (like say this http://www.tomshardware.com/news/phone- ... 12056.html ), or do whatever else like spamming Bluetooth requests, and if this causes the screen locker to crash, the attacker has access to the system.

Crashing sudo and granting full root access? Much, much harder, if not impossible.

Which is why there is no way in hell that anyone doing something serious trusts a lock screen to keep the system safe from a real attack.
why should any networked computer allow a signal from the outside world to make changes to the system just because root is enabled?
Because it does use programs that talk with the outside world. Bugs in these programs are the issue. All vulnerabilities work like that regardless of OS: an attacker (actually a bot, an automated program made by the actual bad guy so he can go and relax) fakes a communication directed at a program or system component and, by exploiting the bug, can then make the PC run a script or malware that has full root access and does whatever this guy needs (= full system access).

If root access is restricted to a terminal or a specific program, the bug in other non-root programs can still be used to inject the malware, but the critter will then fail to get root access as sudo slams the door in its face, and it will sit there harmlessly until the next disk format (or until a system admin comes checking who that unauthorized user that failed to get root access was).

This is especially true for Windows, because it has a ridiculously huge attack surface thanks to the .NET framework and their proprietary crap. Lately there was a funny vulnerability in Internet Explorer allowing a bad guy to do just that: inject stuff and take FULL control of the system on any version of Internet Explorer. Scary stuff man, scary stuff. They even pushed the patch to XP, even though it is technically not supported anymore, just to show how worried they were by it lol. :lol:
an attacker man-in-the-middles my system update
All packages in a repository of any modern Linux distro are signed with a cryptographic key, which ensures the package hasn't been tampered with (it can be extracted, but not modified without breaking the signature). If the signature on a downloaded package does not match a trusted key, it will be auto-dumped as corrupted by the package-handling program (with or without telling you of the issue).
Even PPAs have their own key. Non-official repositories or PPAs can of course contain whatever malicious stuff they want, but the key ensures that the package isn't modified by third parties while en route.
Check the Software Sources program to see the repos and the (public part of the) key.
If you want to see more of this stuff in action, add a random PPA with the command line and see what the output is. http://www.webupd8.org/2012/02/how-to-u ... emove.html
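For instance, you should be able to list the public keys your APT trusts with:

Code: Select all

apt-key list   # signing keys used to verify repository packages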

Android does this too, and being a younger system it still had some vulnerabilities in its signature-checking system, fixed as of version 4.1 or higher (and in most custom ROMs at 2.3 and 4.0 where the dev isn't a moron).

Then again, it's not 100% safe; nothing is. But it's such a huge pain in the backside to sidestep (if possible at all... they need to analyze stuff to find bugs they can exploit; hacking isn't magic and it costs a lot of resources) that no one will really try to pull it off on you. Unless you pissed off a large group of wealthy people, that is.
replace an application or modify a launcher to open a fake password / password-skimming window.
This remains an issue for packages not coming from official repos (which are compiled from source by the distro maintainers, so if there is bull*** like this they will catch it), or not coming from a place/dev(s) with a good reputation. (Technically speaking, all open-source stuff has the source... well... open, so anyone with some skill can go and read it, and if they find stuff like this the dev's reputation takes a pretty large blow as all the blogs and sites that care start echoing the news.)
Yeah, but a Linux junkie needs the functionality of root access.
That's why most geeks run a rooted phone or install custom (again, rooted) ROMs.
Unless your device is a knock-off (and quite a few times even when it is), rooting is usually doable. Then you use an app that does permission management: it prompts you to ask whether app X can get root access, and it remembers your choice for apps that should get it without asking (like, say, the apps that control governors and frequencies, or set kernel/system parameters for a better user experience).
Which is what sudo does here on a PC.
If I lock out root access 10 minutes after having used it, then why was I safe during those 10 minutes but not after?
You were not safe (from threats exploiting the specific program you granted root access to). It's a good idea to minimize the time you are unsafe, isn't it?

Injecting malware and taking control of a PC is basically instantaneous. If it happens during the time the bugged program used as a trojan horse has root access, your system is compromised; if it happens at a time when the program has no root access, it fails miserably.

So yeah, expanding on your example, it's the difference between not locking the door while you talk to a neighbor on the other side of the street, and leaving the door ALWAYS unlocked 24/7. If there was a skillful thief around you would probably be screwed anyway, but you are restricting the chances of that happening, because the skillful thief must show up AT A VERY SPECIFIC MOMENT and not whenever the heck he wants.
How exactly did we come up with 10 minutes as a default?
Dunno, it never really was an issue for me anyway, although I rarely need to run as root for extended periods of time.
If I'm worried about someone brute-forcing root
No offense, but brute force is the crappiest method evar. If you can get into a system by brute-force password cracking, then it's probably not even worth the time. Even with a crappy password encryption system like Windows' you can't get past 6-letter passwords by brute force.

All password-cracking systems I know use clever exploits of crappy implementations in the password system, like the stuff based on rainbow tables that can crack any Windows password with letters and numbers up to 15-16 characters in length within 10 minutes or so, using the very same PC booted from a USB drive with the tool (with non-encrypted hard drives, of course, but I have not seen a whole lot of people outside of large companies encrypt their hard drives...).
Why don't we have any such protection if enabling root is such a hazard?
As said above, bugs. Hacking is basically finding ways to exploit bugs to make the program/device do things it was not supposed to do.

Bug testing is a very lengthy and thus VERY expensive process if someone has to pay the programmers doing it, because you are trying to find human mistakes that are not dumb typos any decent spell-checker can detect, but code written with wrong reasoning (not apparent at the time it was written) for various reasons (missing a check on some operations, writing temporary files in a location that becomes unsafe, whatever).
Then fixing the mess requires much more than adding punctuation.

Debian Stable is VERY STABLE, but also quite outdated (security and stability patches for newer versions are backported to older ones, which thus don't gain new bugs). The same goes for devices made for critical applications (usually military): they can spend 5 or more years debugging the same firmware running an appliance (say, an onboard flight computer) and still not catch something that has to be patched on live devices in the field.
But if something already got on the system to take advantage of root, then there must have been a security failure elsewhere that needs to be addressed and remedied. This should be a separate issue from enabling root, right?
Well, it's your system that gets compromised. Shaking a stick at the dev that made buggy software is not useful; all Linux stuff waives any and all responsibility over anything. (Commercial licenses of Windows are remarkably close to that, even if they fudge more with words to make you think it covers more than it really does.)
This is my machine; I fully own, access, and control its totality without exception; I rest assured all its functions are safe to use beyond the reasonable doubts of the developers and core users, and I enjoy system protection features that warn me before any action performed as root may cause system breakage.
You want to have your cake and eat it too. You can only have one or the other, or a compromise in between.
When even the code to make a decent menu like Cinnamon's or Whisker Menu in XFCE is so damn huge, never mind any program doing something serious like a media player or the Linux kernel itself, there is no real way of predicting whether some action will cause system breakage, nor of finding all bugs on all possible hardware/software combinations before a program goes live. You would need a badass AI program that can outsmart the developers of the programs it is overseeing. We are not anywhere near there yet. :lol:

So yeah, the prompts and timeouts on root exist because entrusting the machine with decisions on anything not really basic (which is what allowing all programs to run as root amounts to) is generally not a good idea unless it is running a read-only firmware (and even then...).
They exist to actually empower the user and move the responsibility for most stuff the machine does onto their own shoulders, even if it's annoying. Because if stuff goes wrong, who's the guy that cleans up the mess? The PC? Nope.
Not even Windows can remotely claim to be able to do that. Mac is technically worse in this respect, but given the average userbase it has far fewer chances of going fubar.
It's you, the user (or a tech support guy you pay).
A computer system should be designed to meet the same expectations of ownership confidence that we have of any consumer good.
Confidence that is always badly misplaced, because 99% of appliances are cheap crap full of bugs that can be exploited.
But a serious example is how DRM ( digital rights management ) can be built into an isolated region of an ARM processor so that you don't really own or control your computer, the data in it, or what it does with it.
That's what open source is for. Hardware does not run on its own, and with an open-source OS you can go and remove the drivers running the DRM device, or at the very least make the OS ignore its screams and isolate it from the network, effectively neutralizing the threat.

An obvious example is the region-locking on DVDs, where devices outside of a zone cannot play a disc just because of DRM. Any self-respecting PC can run programs that sidestep or outright ignore this limitation, even though it is technically in the DVD drive's own firmware.

And please note, this is not because I'm pro-piracy, but because making ransomware is not the best ethical choice for getting paid for your hard work.
why there isn't at least a system integrity protocol that checks key files upon boot and restores any detected as corrupt to their original state ( with the user's permission, of course )?
Even a simple MD5 check would take 5 minutes or so, and given the complexity of the systems involved, compounded by the fact that each and every distro (and true Linux user) does its own customizations, it would become more of a pain to maintain than anything else.
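That said, a rough approximation does exist on Debian-based systems: the debsums package checks installed files against the MD5 sums shipped in each package. It only covers packaged files, not your own customizations, which is exactly the limit I mean:

Code: Select all

sudo apt-get install debsums
sudo debsums -s   # report only files whose checksums don't match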

Windows does have a similar feature (kinda), invoked with the terminal command "sfc /scannow", that can be asked to check system files and restore them if needed (takes half an hour on a decent machine), but if I have to say how useful doing that was most times... meh.

Unless there is some reason not to, restoring a backup or a system image you made with whatever tool is the safest and fastest way.
I have on occasion broken my system to the point of requiring re-installation, due to improperly using root access, and no such system integrity checker ever kicked in to save the day.
No offence intended, but this approach reeks a lot of Windows. Linux is not a closed box. You can usually fix most of these "oh crap, it does not boot anymore" situations on your own (by reading some documentation from the internet and tweaking some files) if you have another bootable drive to boot from, so you can go rescue the dead one from a full environment; even a USB flash drive or hard drive is fine, or use a live CD (preferably from a USB drive because it is a lot faster).

The same applies to Android. Most devices from reputable manufacturers have a recovery partition containing a tiny OS you can use to do some troubleshooting on its own (backup and restore), or just connect to a PC and move/modify stuff with terminal commands to fix the issues you caused in the last half hour of tinkering.

I do backups of the system folder with programs like rsnapshot (after the first backup it saves only files that changed; everything else is a hard link to the same file in the first backup, which saves boatloads of space for incremental backups; the same mechanic is implemented in the incremental backups of my Android phone's ROM, for that matter), and I keep a USB drive with a fully functional Mint install, or even a clone of the system partition, that you can boot from in case of problems to then go and fix stuff manually by restoring the most recent files from the backups. I've yet to find a good reason not to have a system partition of 25-30 GB.
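A minimal sketch of an rsnapshot setup (paths and retention counts are placeholders, and the config file wants TABs between fields):

Code: Select all

# /etc/rsnapshot.conf (excerpt) -- fields must be TAB-separated
snapshot_root	/mnt/backup/snapshots/
retain	daily	7
retain	weekly	4
backup	/etc/	localhost/
backup	/home/	localhost/

Then you run 'rsnapshot daily' (usually from cron) and it rotates the snapshots for you.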
Heck, you can ask it to do a full filesystem check on every boot (it is usually set to run every 30 or so boots), and with so small a system partition it will take 5 seconds; then if it finds an issue it prompts you for directions, and you can ask it to go and fix the issues (pretty fast again) or not.
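On ext filesystems that every-boot check can be forced by setting the maximum mount count to 1; the device name below is an example, so check yours first:

Code: Select all

sudo tune2fs -c 1 /dev/sda1   # fsck this filesystem on every mount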

I mean, if you swap the system files you changed recently before the disaster with ones from a backup, the system will be back like it was that day. And maybe you can look at them, look up some documentation that isn't total sorcery like the guides for Windows, and learn how the system works, even if at a somewhat rudimentary level like me, because you are not a badass coder.
This gives you a better understanding of the device and of the system, and enhances the sense of "I OWN YOU", at least for me.

Now, someone might complain and say, "why can't the PC do this on its own? Windows has the System Restore functionality that does more or less the same".

It's part of the standard Linux approach, admittedly more apparent in distros like Arch or Gentoo where even forum users tend to push away newbies, but it is still there for most others too.
If you want to REALLY own your device, you need to know what the heck it is doing. To know what the heck it is doing at any given time, you need to learn and configure your set of mission-critical programs and the relevant system files for them yourself, not click on pretty buttons. As simple as that. It's not horribly hard, as in most cases it's just installing a couple of packages and writing a few lines in a couple of rather talkative text files to add settings. Then you can easily automate everything with scripts and auto-starting stuff at conditions you set, so you never ever have to touch it again.

One-click solutions don't give the user power over the machine, only the impression of it, because the user is just clicking a pretty button. Power is not clicking pretty buttons; it is knowing what the heck is going on. Then you have better chances of fixing things on your own if an update to something else breaks them, without waiting for a dev somewhere else to try to understand what went wrong on a handful of PCs and find the time to fix his code for a tiny userbase, and you can tweak things to your exact requirements (it's amazing how much can be tweaked and automated).


For customers that want appliance-like stuff (the average Mac user and people that want stuff to "just work", rather common users if I might say), Android is the way to go. System stuff is walled off and all applications installable by the user are COMPLETELY self-contained or rely on system stuff that can't be changed.
Android has its weaknesses, but it was designed from the ground up to cater to that specific userbase.
But it's still Linux, so if you are an advanced user you can do the same stuff you do with a Linux PC, relatively speaking of course (terminal and text commands to work with the system for the more hardcore; scripts to automate things and tweak system settings dynamically on boot for the less hardcore).
At any rate, all this to inquire: 1) I don't see the point in sudo timeouts; can't we do better?
Well, Linux (or Unix systems like BSD, but they have their own sudo too lol) is the OS running the overwhelming majority of web servers around, plus the totality of real network infrastructure (inside routers, modems, and appliances, albeit as a rather cut-down system since it has to fit in less than 50 MB of system drive), where security is an obvious high priority for everyone (excluding cheap home router manufacturers). It's safe to assume that if no one came up with a better way in all these years, it's not easy to come up with something better.
I would still like to be able to resolve the problem of sudoers ignoring 'timestamp_timeout=60'.
I said above that you still need to write sudo or the terminal line is not executed as root. In your first post you don't write sudo the second time, so the command fails. This is normal and intended.
The timeout is for the password check. You must still write

Code: Select all

sudo someterminalcommand
for everything you need to be executed as root.
If you want to run a terminal as root and forget about sudo until you close it, I already linked the forum thread where they talk about it, plus the answers for adding programs to the sudoers list so they always run as root regardless of anything.
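For completeness, a root shell is one command away, with the usual caution that everything you run in it is root with no further prompts:

Code: Select all

sudo -i   # login shell as root; type 'exit' to drop the privileges again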

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

There have been bugs in some screen lockers in the past that allowed an attacker to crash them with a combination of keys and mouse input. I know first-hand of screen lockers that crash on their own or when interacting with something else.
Yikes. I hope today and forever this will never be an issue for any Linux application!

---
Lately there was a funny vulnerability in Internet Explorer allowing a bad guy to do just that: inject stuff and take FULL control of the system on any version of Internet Explorer.
At the Pwn2Own contest in March 2014, hackers presented vulnerabilities in all major browsers, though I'm not sure if they gained full control of a Linux system. I opine browsers should not be allowed to execute code that accesses such security-sensitive parts of the system. I, for one, do not need any functionality so badly that I would be willing to risk attack. Simple, no-nonsense web pages with just information are all I need. At the very least, browsers should offer the ability to sandbox the environment so you don't have to set up a VM just to browse safely.

---
all open-source stuff has the source... well... open, so anyone with some skill can go and read it, and if they find stuff like this the dev's reputation takes a pretty large blow as all the blogs and sites that care start echoing the news
True that, and I stick with the Mint repository only, so far. But to play devil's advocate: how many folks have abundant code-reading skills, and how many packages actually get thoroughly checked? And how thorough is enough? Mint describes level 3 packages as "assumed safe", and I don't know if any are installed by default, but "assumed safe" is far from reassuring! A lot of things "assumed safe" without examination fail us in really awful ways. My personal philosophy is safety before functionality; if it's not checked, I opine it shouldn't be trusted.

There is also the problem of code obfuscation. Sometimes things can get slipped in that would go unnoticed without thorough reverse engineering of the code. Recently I saw a video on http://www.trustycon.org demonstrating how code obfuscation can hide an exploit behind a single extra asterisk in some code. I hope we at least run everything through some sort of automated checker.

---
No offense, but brute force is the crappiest method evar
None taken; I might be ignorant on this topic but quite the expert on others; there are too many fields of computer expertise for anyone to learn them all. I'm not a hacker, so I wouldn't know the preferred methods; you're right, I don't see why anyone would waste their time with brute force either; someone on another topic suggested it to me as a concern. However, apparently it won't be long before quantum computers can crack 256-character passwords in a very reasonable time, purely by trying all possible combinations.

---
Bug testing is a very lengthy and thus VERY expensive process...
I know what you mean. I once worked as an engineer debugging circuits with a quarter billion devices and 7 layers of metallization interconnect. It could take several days to find a short in millions of buses running all over the place through massive hierarchical structures, like a billion tons of spaghetti flowing through every street, building, and room of a large city, every which way; and so far there is no tool smart enough to automate this process.

However, if we spend half a decade debugging firmware and still don't catch bugs that lead to major security compromises, then I submit we are over-extending functionality. If computers had developed free of defense and economic constraints, perhaps we would have ideally perfected lower-level systems before attempting to build something as complex as the internet. Throughout the course of its development, so many millions of decisions were made in a hurry ( and for profit ), often without sufficient criticism or feedback, if any, from diverse groups of engineers and citizens. We had better start developing more consciously, because like you say, large groups of folks with funding can and will attack, even unprovoked ( having a different political ideology could be provocation enough ).

Shouldn't the open-source community at large settle for less functionality and just stick to what's tried and true until newer stuff is abundantly debugged, so we have a more complete and true-to-reality understanding of all possible attack vectors for the software that everyone's using? I guess it doesn't work that way. Users and companies are addicted to functionality because it's useful, fun, and makes money, but it seems like we should make development decisions in good conscience regarding which kinds of functionality are sustainable, in terms of hardening and maintaining them to bear attacks from even the most advanced threats. Debugging may take time and money, but we can't cut corners and give up. It's either done and done right, or it could be a hazard to the world. It's really a matter of the quality of work we're willing to put up with, right? Unless there are political constraints unfairly influencing the organization that are out of the control of the lay developers and consumers. Just my point of view, but what do I know? I'm not a Linux dev.

---
Shaking a stick at the dev that made buggy software is not useful; all Linux stuff waives any and all responsibility over anything.
:D Well, I appreciate Linux devs more than anyone, and I am a software dev myself, so I'm not attacking their personhood, just expressing the problems I seem to observe in the hope they can be resolved to everyone's benefit; community accountability, not so much legal liability, should drive efforts anyway. But no work or idea is immune to criticism. In fact, I expect and encourage the harshest, even for myself. Not to be arrogant, but it's a beautiful thing to have your own work torn apart by critics, forcing you into a stronger, more powerful you, and making the community better as a result. We should all mercilessly critique each other's work and ideas like wolves ravaging another's prized prey, all while remaining cooperative, productive, cordial colleagues and associates. :lol:

---
They exist to actually empower the user and move the responsibility for most stuff the machine does onto their own shoulders, even if it's annoying.
Ok, I am definitely willing to bear annoyances if that is THE only / best way. I was just thinking there could be even more security layers to make enabling root a non-issue, regardless of how buggy a package may be. ( Maybe integrating sandboxing into application management, then giving the user the option to allow or deny the I/O connections of each program, with the default setting being whatever the devs say? )

It just feels like we're kind of in over our heads if enabling root makes us vulnerable to every bug in every package. Maybe there should be two roots: one for local machine access, and one for applications that connect to the outside world, and they would be mutually exclusive: enabling local root would not grant root to functions connecting to the outside world, and network root would never grant root to local functions. I wonder if this would be a practical solution, because it would at least define a clear boundary between "self and other". You could use the same 'sudo' command, but just specify options for which parts of the system to access. If you still wanted true system-wide root, you could enable both roots, but in most cases you may want to reserve the flexibility to deny root to network-related functions. Or maybe there should be an entire 'root manager' application in the Control Center that lets you create a custom scheme to manage root access for local / network functions, applications, users, and so forth. Does something like this exist already, or can it be built without editing the kernel? If it can only be accomplished with configuration files, I'd like to see it implemented in a GUI.

I don't know if anything like this exists, but it would be nice if Linux had built-in sandboxing, allowing users to create root-access schemes that govern boundaries and rules based on any criteria we can think of, without having to modify the kernel, of course. For example, it could be set up so that even if a network-based application gained your root password, it could still not gain system-wide access, because any file or program in memory would have an attribute preventing it from doing so, based on the source of the malicious instruction. Then you could never be attacked by a networked application without having explicitly signed over I/O access; if the application needed to do something as root on the local machine, then it would have to prompt the user to grant access. This way the user could make a decision based on trust, and no application would automatically cross the boundary. ( Or something along these lines. :?: )

---
Confidence that is always badly misplaced, because 99% of appliances are cheap crap full of bugs that can be exploited.
True that. I hope developers and consumers tolerate even less cheap crap with time. It really counts when it comes to healthcare, for example, whether it's the secure storage of patient info or the secure operation of a pacemaker or other life-sustaining machine! I'm amazed by the way we use these systems!

---
That's what open source is for. Hardware does not run on its own, and with an open-source OS you can go and remove the drivers running the DRM device, or at the very least make the OS ignore its screams and isolate it from the network, effectively neutralizing the threat.
DRM may require the OS to opt in as of today, but hardware can and does run on its own, and has for decades. There are a number of systems on motherboards ( even dating back to the 386 ) that have used hard-coded / NVM-based instructions on numerous components to gain full access to the RAM and network, independent of, and invisible to, the OS. And these isolated systems are very powerful and robust. The processor is just one area of concern; have you seen the latest TrustyCon presentation? The motherboard has entirely separate systems of isolated processors and volatile and non-volatile memory-- sometimes with vast power and capacity-- going off doing all kinds of things no OS can see. These features may have originally been intended to be, as advertised, 'enhanced security features', but when I really think about it, I can think of no worse security hazard than isolating the user from control over what is supposed to be his / her own hardware. How can any PC be self-respecting with this kind of design? After all, it's made by the corporation for corporate profit, not consumer well-being, except of course to the extent the consumer holds them liable, which isn't enough. As long as hardware is proprietary, lacking sufficient influence from the open-source community and having too much influence from the profit motive, self-betraying hardware seems likely to remain a real potential hazard for years to come.

---
An obvious example is the region-locking on DVDs, where devices outside of a zone cannot play a disc just because of DRM. Any self-respecting PC can run programs that sidestep or outright ignore this limitation, even though it is technically in the DVD drive's own firmware.
Exactly-- the concern is not really copy protection; you are right, they will never be able to stop circumvention. The real concern I'm addressing is malicious or clumsy use of DRM, or otherwise isolated hardware components running invisible instructions hardwired / installed by manufacturers / distributors, which could give not only them control but other attackers as well, whether or not the OS sidesteps it. These kinds of vulnerabilities are impervious to OS operation and even re-installs.

---
please note, this is not because I'm pro-piracy, but because making ransomware is not the best ethical choice for getting paid for your hard work.
I'm not politically correct whatsoever... I opine information should be free and legal, and the boundary between what is mine and yours should not be blurred, obscured, and violated just because you may own a billion-dollar monopoly. OK, off my soapbox now. :)

---
You can usually fix most of these "oh crap, it does not boot anymore" situations on your own (by reading some documentation from the internet and tweaking some files) if you have another bootable drive to boot from, so you can go rescue the dead one from a full environment; even a USB flash drive or hard drive is fine, or use a live CD (preferably from a USB drive because it is a lot faster).
I am really good at accidentally screwing up systems. :D Sometimes, though rarely, a system can be broken to the point you can't even troubleshoot it. It happens because I go exploring and try to see what different files do. You know...to try and learn something? :lol: Then I break it, "oh crap" :roll: , and sometimes there is no documentation, forum, or clue to help find what broke and undo it. Not worth the time it would take anyway. But then at least I learn what never to do, and usually come away with a better understanding of the system. Thanks to disk dump, I am set up now so that such breaks are no biggie and my data is just as safe.
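For my notes, the rescue dance from a live session looks something like this, if I've got it right-- assuming the broken root filesystem is on /dev/sda1 ( adjust to your own layout ):

Code: Select all

# from the live USB / CD session, mount the broken root filesystem
sudo mount /dev/sda1 /mnt
# bind the virtual filesystems so tools inside the chroot behave
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
# step inside the broken system and repair away
sudo chroot /mnt
# e.g. reinstall the bootloader: grub-install /dev/sda && update-grub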

---
backups of system folder with programs like rsnapshot...full filesystem check every boot...will take 5 seconds...prompts you to ask directions and you can ask it to go and fix the issues...you are not a badass coder
Excellent! Never heard of rsnapshot; will try; thanks! I'm not a badass Linux coder, however I am kind of a badass scripter. I've written my own synchronization tool that I like better than any program I've used in the Mint repository, no offense to devs. It runs as a 'master script' and lets you select groups of tasks to perform ( individual bash scripts with instructions ), rather than merely sync files, so it's way more powerful: you can really tailor your sync operations to your organizational needs by creating lists of tasks ( rsync, encrypt, compress, rename, duplicate, log, notify, or anything else you want ) to execute. Plus it's semi-graphical ( uses drag & drop to select tasks ). I'm currently using scripts to back up / manage file versions, and if I encounter a problem with a file changing when it's not supposed to, I overwrite it with the proper archive copy and make it immutable, so long as that doesn't interfere with anything or need to be upgraded later. I was going to make scripts to check files on boot, but maybe rsnapshot is all I need.
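Looking at the rsnapshot docs, the config seems to boil down to a few lines like these-- assuming /backup as the snapshot destination ( my guess at a layout ); note the fields must be separated by tabs, not spaces:

Code: Select all

# /etc/rsnapshot.conf ( excerpt; fields are TAB-separated )
snapshot_root	/backup/
interval	hourly	6
interval	daily	7
backup	/home/	localhost/
backup	/etc/	localhost/

Then a cron job runs 'rsnapshot hourly' and so on, and snapshots land under /backup/ using hard links to save space.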

---
sense of "I OWN YOU"
I like this term. Is there a single word for it? "Ownership" is too broad.

---

"You must still write"

Yes, my bad. :oops: That works.

Thank you, very helpful!
- I'm running Mint 18 Mate 64-bit
- 4.15.0-34-generic x86_64
- All my bash scripts begin with #!/bin/bash
bobafetthotmail

Re: how do I change sudo timeout, and what to? and why?

Post by bobafetthotmail »

linx255 wrote:I opine browsers should not be allowed to execute code that accesses such security-sensitive parts of the system.
That's what everyone wants. Point is, with the complexity of anything modern, bugs can and do happen.
But to play devil's advocate, how many folks have abundant reading skills, how many packages actually get thoroughly checked?
Anything in official repos is checked (and fixed to some extent) before compilation by the maintainers, who are programmers. Most stuff in Mint technically comes directly from Ubuntu repos (Mint has its own repos with the Mint-specific packages), and Ubuntu takes large amounts of packages from the Debian Unstable branch (source code). There are a lot of people looking at the code, especially if the program is well known and widely used.

Does not mean it is 100% safe, just that what hits you are only real bugs and not shenanigans from a dishonest developer.

For larger projects like the linux kernel, the maintainers are mostly gatekeepers, and any new patch or addition is checked by them before integration.
Linus Torvalds (the creator of Linux) does bash developers who send useless or bad code changes, and lately even took more extreme measures, like suspending integration of patches from a dev who refused to conform to kernel standards with his own project (and whose patch crashed the systems used to test it before integration). http://www.networkworld.com/news/2014/0 ... 80404.html
Mint describes level 3 packages as "assumed safe", and I don't know if any are installed by default, but "assumed safe" is far from reassuring!
Devs think it is safe for release but have no way to be sure it will be safe on any of the gazillion combinations of hardware and software.
My personal philosophy is safety before functionality; if it's not checked I opine
Then go with Debian Stable. Or with LMDE (Mint Debian), which is based on the Debian Testing branch and is already rock-solid enough for a desktop user. (Servers need top security/stability and care less about features, so Debian Stable is for them.)
There is also a problem of code obfuscation. Sometimes things can get slipped in that would go unnoticed without thorough reverse engineering of the code.
Open source. Reverse-engineering is unnecessary when you can look at the source code. Code only becomes obfuscated when it is compiled. Source code is human-readable... because human programmers need to understand it.
I hope we at least run everything through some sort of automated checker.
Yes, but it's not going to catch the real mistakes; only humans can do that, because it requires a brain and an understanding of what the code will do on a real system. Testing is also needed in most cases.
However, apparently it won't be long before quantum computers will be used to crack 256-character passwords in a very reasonable time, purely by trying all possible combinations.
Incorrect. Quantum computers aren't particularly faster than a conventional computer at normal operations; the difference is that they can run algorithms that a normal PC simply cannot, so in tasks where such shortcuts apply they are MUCH faster-- but only because they can take shortcuts.
They can crack things using algorithms that analyze some kind of data, not by brute force.
Cracking something live is another matter altogether. Any decent admin sets limits on the number of password attempts per second, so regardless of how fast the attacker is, the victim is capping the cracking speed, and the attack still requires billions of years.
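As a concrete sketch of that capping, an admin might throttle the SSH daemon like this (these are standard OpenSSH options; the values here are just examples):

Code: Select all

# /etc/ssh/sshd_config (excerpt)
MaxAuthTries 3          # auth attempts allowed per connection
LoginGraceTime 30       # drop unauthenticated connections after 30 seconds
MaxStartups 10:30:60    # throttle concurrent unauthenticated connections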
Shouldn't the open-source community at large settle for less functionality and just stick to what's tried and true until newer stuff is abundantly debugged, so we have a more complete and true-to-reality understanding of all the possible attack vectors for the software that everyone's using?
The code is too large and the possible interactions too huge.

Not even most developmental releases (alpha, beta and release candidates) would be properly debugged without the help of brave souls who try a possibly dangerous program in their own specific environment and report bugs. These are generally volunteers, and while they aren't usually devs or maintainers, they make a very important contribution to the community.

Real software debugging is not something you can automate or run on a realistic number of platforms in your lab. Yes, the first runs are done like that and ensure the software isn't a complete pile of trash, but the real fun happens when the software is released into the wild and encounters every possible configuration. Users report bugs that eventually get fixed.

You cannot have a stable program if you did not release it into the wild and let the bug reports from users come in. It's simply not possible to test it thoroughly enough unless you have a few million workstations with wildly different hardware and software in them, and millions of trained monkeys committing every possible action and user error.

But this does not mean you have to be on the bleeding edge or else use outdated crap. Open source allows useful tricks, as you can move newer source-code patches wherever you want.

Highly stable OSs like the Debian Stable branch use rather old code as a base, but the maintainers apply all the patches that fix bugs in the newest versions of the code (where applicable), so it is safer and more stable than the version numbers on the packages might make you think. It's NOT "old" nor outdated; it just has fewer new features (and fewer new bugs) while always featuring the best safety and stability possible at any given moment.
It's server-grade, and is running on large numbers of servers.

The same goes for the linux kernel: if someone discovers a real issue or vulnerability, the fix gets added to the distro's current kernel by that distro's kernel maintainers and then pushed to the repos, so you get it with automatic updates without having to manually update the kernel to another version (potentially introducing more bugs than it fixes).

But someone has to use the newer code so issues can be found and reported. If everyone started using Debian Stable, development would stop, as no one would be finding new bugs in newer code.
Users usually prefer the higher risk of issues over the certainty that they won't have the latest features; heck, even Ubuntu isn't so bad.
I was just thinking there could be even more security layers to make enabling root a non-issue, regardless of how buggy a package may be.
Android is the only OS that does something in this direction afaik. It lets users set up their things and install apps without asking for root access all the time.
Read-only system folder/partition, fully sandboxed applications without any dependency that can screw other applications, sandboxing apps in a VM you can lock down from the outside, application code automatically optimized for the processor architecture (the optimizer is called Dalvik, and optimized code lives in the Dalvik cache).
Newer versions support x86 and there are projects that port it to work on a PC.
I don't know if anything like this exists, but it would be nice if Linux had built-in sandboxing
There are SELinux and AppArmor. I never really went that way, so I can't say which is better or what they actually do.
https://en.wikipedia.org/wiki/Apparmor
https://en.wikipedia.org/wiki/SELinux
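If you want to poke at AppArmor on an Ubuntu base, a quick sketch (the package names here are from the Ubuntu/Mint repos; aa-status just reports what is loaded):

Code: Select all

sudo apt-get install apparmor apparmor-profiles apparmor-utils
sudo aa-status    # lists loaded profiles and whether each is in enforce or complain mode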

There are very interesting ways to virtualize stuff in linux without the usual approach of making a full VM with its own RAM and all, like KVM https://en.wikipedia.org/wiki/Kernel-ba ... al_Machine
or Vserver https://en.wikipedia.org/wiki/Linux-VServer

But again I never really went far into them because I never had to make ultra-safe systems, nor true server farms.
I hope developers and consumers tolerate even less cheap crap with time.
Very few if any customers ever notice how unsafe stuff is, or know that it's the manufacturer's fault.
Anyway, no manufacturer has anywhere near the resources needed to avoid shipping crappy software, nor to keep updating it for any decent timescale.
Google saw this and in its infinite benevolence made Android. Most companies now only need to tweak a couple of things for proper hardware support (drivers are made and packaged by the actual chip manufacturers, so it's not particularly hard) and flash a pre-made image.
It's not bulletproof, but much better and safer than anything they could realistically come up with. Simply because development of that OS is mostly funded by a far larger company that sells content through that platform (instead of trying to make money by slapping license fees on the products), and because large parts of the code are developed by volunteers in other open-source linux projects.
The motherboard has entirely separate systems of isolated processors, volatile and non-volatile memory-- sometimes with vast power and capacity
This stuff is confined to very specific applications. Even if mass-produced it would be ridiculously more expensive than a non-locked product. Heck, they have yet to find a way to make ECC RAM drop in price (and that would be a useful feature).

Then again, don't underestimate their skill in writing unbelievably crappy firmware for these things. If the board is physically accessible, anyone with a hot-air soldering bench (and some experience in its use) can safely remove components or solder on RAM- or NAND-sniffing devices they can then use to see what is going on, with the usual reverse-engineering tools.
As long as hardware is proprietary, lacking sufficient influence from the open-source community, and having too much influence from the profit motive, self-betraying hardware seems likely to remain a real potential hazard for years to come.
Intel and AMD participate a lot in linux development (for things concerning the hardware they sell, so it's mostly kernel and low-level system stuff), and Nvidia is carefully making steps in the same direction. Qualcomm/Atheros and quite a few other companies also care, and release sources for all drivers for their products.
ARM and all the other manufacturers are basically Linux-only (varying levels of open-source, but usually good enough).

And it's not just because of ideals. It's because once something is open-source they get fixes from other guys for free, and if the device is known enough, volunteers will take over the mind-boggling annoyance of legacy hardware support. For example, AMD is directing users of their older graphics chips to the open-source driver, which has full hardware support for 3D and media decode, and has stopped making official proprietary drivers for relatively old graphics, saving a ton of cash.
I was going to make scripts to check files on boot, but maybe rsnapshot is all I need.
To do a file system check on boot you only need to set a variable with the tune2fs tool; here's the manpage:
http://linux.die.net/man/8/tune2fs
If it finds fs issues at boot it will raise an error and stop the boot to ask for directions; since you have a command line, you can then run a file system fix with fsck /path/of/partition or whatever.
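To see the current counters before changing anything, a quick sketch (/dev/sda1 is just an example partition):

Code: Select all

sudo tune2fs -l /dev/sda1 | grep -i 'mount count'
# "Maximum mount count: -1" means the periodic check is disabled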
Yes, my bad. :oops: That works.
Good. That's because you are not becoming root; you are passing sudo your terminal commands, which are then executed as root by sudo.
Sudo and other system-critical binaries always run as root themselves to do the things needed for the system to operate at all. Most other non-critical software usually does not need root, or requires your authorization to get it.
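You can see this from any terminal:

Code: Select all

whoami        # prints your own user name
sudo whoami   # prints "root": sudo executed the command as root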
User avatar
linx255
Level 5
Posts: 668
Joined: Mon Mar 17, 2014 12:43 am

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

That's what everyone wants. Point is, with the complexity of anything modern, bugs can and do happen.
I digress from the original topic. I think complexity is not the cause of bugs so long as development is sustainable; of course the market doesn't usually drive technology sustainably, at least not initially, and sometimes decreasingly with time. I think we're pretty addicted to non-essential functionalities that serve the interests of attackers. :lol: It's just my opinion, but provisions for attacker-friendly functionalities are themselves bugs, regardless of how fancy a web page or OS we get out of them. How hard can it be to create a clean boundary so that when I browse the web my machine remains my machine? Pure and simple, right? Just eliminate those functionalities; they don't belong! Code should only be interpreted by a browser to do a very limited number of things. We added too much functionality all at once. A great many web sites won't work without Java and Flash now. You have to block the crud out of scripts and hope the page still works... How ridiculous and unnecessary; so what are we waiting for? :lol: There's no question that everyone in the open-source community wants this, but if *everyone* really wanted security we would have done away with all these non-sense over-reaching functionalities a while ago, right?
Devs think it is safe for release but have no way to be sure it will be safe on any of the gazillion combinations of hardware and software.
Clarity appreciated. So worst case they won't be compatible, and a mere incompatibility is not expected to create a security hazard?
Then go with Debian Stable. Or with LMDE
I'm definitely at a pivotal point. I read a recent DistroWatch review that said LMDE was too buggy for use as a desktop OS, and the author of that review personally convinced me that LMDE isn't worth it. I researched about 20 other distros based on Debian Stable and didn't find them suitable for one reason or another. I don't know which Mint packages present vulnerabilities that Debian Stable doesn't have, or have any idea what the difference in stability amounts to. I've already gone through my Mint packages and removed most of the ones I don't want, but I have no way of knowing if this is sufficient for my needs. I'm thinking about Arch, but I read that it's comparable to Debian Unstable, so I'm not sure whether Arch would be more stable than Mint or not.

---

Regarding "code obfuscation", are there are multiple contexts for this term? One being compiling, and the other, very hard to find bugs / exploits in human-readable code? Maybe I don't know what I'm talking about. There are actually contests for creating the best hidden, yet in-plain-sight exploits where this is referred to as obfuscation. I perhaps too loosely applied the term "reverse-engineering"; assuming you don't already have an idea of what to look for then reverse-engineering would be necessary to understand what's going on in the code of any sophisticated application. Do all the applications in the repository have some sort of "overview" documentation with flow charts, schedules and such, or do code checkers look at code exclusively?
Yes but it's not going to catch the real mistakes
Indeed!
Google saw this and in its infinite benevolence
Again, I'm not politically correct, and maybe I don't know all the facts, so I hope not to offend anyone, but my opinion of Google is pretty low; not to make a huge fuss. This may be an attractive distro, except I must play devil's advocate: Surely Android employs Google-serving features / functionalities that get in the way of the interests of hard-core Linux enthusiasts, right? I don't want Google doing things with my OS just for their business, but it may be structured to suit Google and friends above the general Linux community; I don't know. I usually get irritated when my idea of what an OS should be differs from that of big corporate agendas. :lol: It's hard to imagine that Android would be set up not to advantage Google through some kind of impediment to Linux users at large ( such as privacy-invading "features" ), unless Google really and truly found it advantageous to let it serve the good of all without the corporate agenda interfering to the users' detriment. Is that the case? Some Android headlines: "Android tracks users' movements thousands of times a day without user knowledge...Google does not screen, review, or police apps in the Android market before released to public...By default Android apps do not need permission to get a user's photos" Then there are some unethical management issues: "Google settles with DOJ and paid $500M [ wrist-slap fee ] for aiding and abetting illegal importation of prescription drugs into US [ via ] geo-targeted ads...Street view exposes the interiors of people's homes, making them vulnerable to burglars and stalkers." There are a lot of people upset about a lot of things they're doing. All this and more, and they still think that I, a serious Linux buff, should consider switching to Android, built and maintained in large part by a company directed under that kind of management and vision? With all due respect, I don't see how anyone working for Google can stand straight-faced behind that kind of non-sense. Google lost my trust and business permanently a couple of years ago. I mean, I don't have a problem if you use their stuff, but gosh, it's all too much for me. :lol: They think that just because they have a cute logo, a creative homepage, infinite developing power, and superior maps, I'm just going to keep using their stuff and forget about everything. Anyway, it certainly doesn't sound like Android is nearly as secure as Mint, even with sandboxing built in. But again, I don't know all the facts.

"companies do care"

Wellll...just to play devil's advocate: some people care; developers may care. Corporations do NOT care in the least unless it leads to more money; I can guarantee that in more than one way as a Silicon Valley IC design professional and insider. :) I mean, open source code may be reviewed by all, but proprietary hardware typically requires expensive equipment and specialized manpower to reverse engineer, which only large companies have access to. There has lately been a lot of publicity / criticism of commonplace hardware manufacturers doing things not in the interests of consumers.

"only need to set a variable with tune2fs tool"

Gnarly! That was there all this time. Which variable?
- I'm running Mint 18 Mate 64-bit
- 4.15.0-34-generic x86_64
- All my bash scripts begin with #!/bin/bash
bobafetthotmail

Re: how do I change sudo timeout, and what to? and why?

Post by bobafetthotmail »

linx255 wrote:if *everyone* really wanted security we would have done away with all these non-sense over-reaching functionalities a while ago, right?
It's a matter of risk/benefit assessment.
Top security is a text-only interface and completely manual setup of each and every tiny bit of damn thing in the device (top does not mean 100%), then locking down everything else. For servers it is more or less the norm, but most people prefer a compromise.

To pull up your example from a post ago, it's like securing your home with trenches, anti-air defenses and paramilitary armed guards on constant patrol.
You only need to be able to defend your system from the parties that actually care about attacking you. Unless you are storing gold bars in your basement, a normal door with a good lock (and decent windows) is enough.

The same goes for the average user. In 99% of cases, assuming you manage the system with some healthy distrust for unknown programs, the average linux distro provides plenty of safety for you.

If you are setting up a server that is supposed to manage hundreds of transactions per second, and moving tons of private data and/or cash, then you might need more security than plain Ubuntu.
Clarity appreciated. So worst case they won't be compatible, and a mere incompatibility is not expected to create a security hazard?
Interactions with other software (and its bugs) can lead to security issues.
It shouldn't happen, and it usually does not, but no one in their right mind can claim it is 100% safe.

I think you are going a bit too overboard with this though. You can't make a 100% safe system no matter what you do or what operating system you use. Vulnerabilities are always unknown; once they become known they get patched fast and stop being vulnerabilities.

But it does not mean you should be afraid of your own shadow. Even standard Ubuntu is pretty damn safe.
LMDE was too buggy for use as a desktop OS
The Cinnamon desktop is buggy; MATE or Xfce (the latter is installable from the repositories after you have installed either the Cinnamon or MATE edition of LMDE, and is my personal choice as clearly superior) are good.
I run LMDE on both my netbook and my old rig, and there are no issues afaik.
It is a bit harder to install new stuff from outside the repositories, but most debs that run on Ubuntu run on it too.
I don't know which Mint packages present vulnerabilities that Debian Stable doesn't have
Patches are applied retroactively to everything. The only things that are not patched are issues still unknown to everyone. It is widely assumed that newer packages with newer features also have more unknown bugs.
I have no way of knowing if this is sufficient for my needs.
What are your needs?
I'm thinking about Arch, but I read that it's comparable to Debian Unstable
Arch is for hardcore users, and is usually based on the very latest sources; it's much more bleeding-edge than Ubuntu or Debian Unstable. Also, the forum is not friendly to newcomers without some experience.
Regarding "code obfuscation", are there are multiple contexts for this term?
Probably my mistake. Most packages have multiple developers contributing from anywhere at any time, and as such the code can't really be obfuscated in the source.
More often than not, the code is extensively commented inside the source to explain what each part is doing.
But in a few select cases it isn't (like with the GNOME desktop environment) and the documentation sucks. Most contributors need to "read the code to learn". Hence why its popularity is sinking like a brick.
In any case, any programmer can understand what is going on much better by just looking at the code than by reverse-engineering compiled binaries.
Do all the applications in the repository have some sort of "overview" documentation with flow charts, schedules and such, or do code checkers look at code exclusively?
Only code afaik; some have an extensive readme or whatever, but most review is done by looking at the source code directly.
opinion of Google is pretty low
I was kidding, I'm not a fanboy either, but I do recognize that they are the best company dealing with IT stuff and innovation at the moment.
Surely Android employs Google-serving features / functionalities that get in the way of the interests of hard-core Linux enthusiasts, right?
Wrong. :P
Google's stuff is not a core service nor an inextricable part of the main OS (they are just apps in the system partition, and Android works perfectly fine without them; of course it won't connect to Google's app store, but it can use other app-store apps or whatever you manually install that isn't a Google app, like say the official Youtube app). Once you are root you can lock them down or delete them pretty easily, or keep locked only the specific services you don't want.
There are also apps (root required) that can do firewall duty, isolating from the network (wifi or 3/4G) any app you don't like.

For that matter it's the same with Chrome: the main part is an open-source project just sponsored by Google, and the part with the spyware and stuff is a module they add at the end. It is not critical for the main thing to work. Chrome without the Google-only additions is Chromium, and from a user's viewpoint it is exactly the same as Chrome (apart from Google's flash plugin, but there are ways to get that plugin in Chromium too).

The main issues in Android come from the hardware manufacturers and the OEMs assembling the device. As long as your device uses hardware that has open-source drivers, there is a very thriving community of ROM modders that keeps porting newer Android versions to your device well beyond the end of official support from the OEM. If the hardware manufacturer or the OEM is an ass (no open-source drivers, no way to unlock the device's bootloader), then yeah, you're marooned with whatever they think is good for you.
I'm running the latest Android (4.4) on my Xperia phone from 2012 (the last official and kinda buggy version is Android 4.0). That's because Sony did its part (and sources for some things leaked from the manufacturer). And it runs well (thanks to the fact that 4.4 is designed to run well on low-RAM devices too).

Seems like Google understood that geeks are a tiny minority of the market, and that in most cases geeks still buy apps and media from their store, so they don't seem to care about trying hard to control rooted devices. Pissing off geeks is bad, as it's geeks that contribute and keep the platform alive.

Normal non-root users can tweak settings to disable most stuff too. Besides, most users don't even want to.

Android tracks users' movements thousands of times a day without user knowledge
That's because the user is a moron who does not disable tracking and syncing in the options. Or because the user actually likes to know where the phone has been, or to improve GPS location with Internet feeds, or to use some of the anti-theft features. Or whatever.
Google does not screen, review, or police apps in the Android market before released to public
They do review apps, and stuff has been blocked in the past (Google can now also scan apps you install from other sources). This mechanism isn't perfect though. Apple does the same, with mixed results again.
MS never ever checked what you install on your PC, and no one is complaining to them.
By default Android apps do not need permission to get a user's photos
All apps state their "needed permissions" manifest before installation and at update time (if permissions change). You can find a cool app, try to install it, see that it asks permission to do stuff it should not (like making direct calls when that is NOT related to its function), abort, and go find another. Easy and safe. If you don't look and just click click click, it's your own fault, the same way it is on a PC.
Also, as of Jelly Bean (Android 4.2 I think) you can revoke any such permission per app even if you don't have root access, from the Android settings panels. (The app might misbehave afterwards, and you get a warning telling you this.)
aiding and abetting illegal importation of prescription drugs into US [ via ] geo-targeted ads...
Wow, blown out of proportion. This is just Google not filtering ads, so US customers saw ads from Canadian pharmacies selling stuff to US customers "illegally". Canada being a civilized nation, I doubt they sold anything particularly dangerous or illegal. Some pharmaceutical companies in the US are probably very butt-hurt about this (and are probably the force behind these exaggerations), but most other people won't care.
Also, shame on US customs for not catching that stuff.
Street view exposes the interiors of people's homes, making them vulnerable to burglars and stalkers
Afaik this is vastly overblown: showing what can be seen from a window without curtains at a specific point in time (now vastly outdated) is not really dangerous (burglars can already use binoculars or their own eyes, lol). Street View is just still images taken from a camera mounted on top of a car passing by.
With all due respect, I don't see how anyone working for Google can stand straight-faced behind that kind of non-sense.
I'm not saying they are good, just that most others are much worse.
Google is also actively developing interesting stuff like Project Loon (wifi balloons to extend internet coverage), funding rocketry research with the Lunar X Prize, and developing a REVOLUTIONARY way of assembling embedded devices with Project Ara. It would allow true recycling and scavenging of components from older embedded devices to make new ones, like turning old phone/tablet components into routers, NAS boxes or media centers, among other things.
The goal is to do with embedded devices, which are now single-use, no-maintenance things, what happens now with PCs.

Mind you, in either case they are expecting revenue from it (more wifi / more devices = more internet users = more Google-services users), but this is in line with the needs of many, so it gets my approval too. And my general support for them.

In the meantime Facebook plunders WhatsApp's cellphone databanks and tracks each and every user by embedding its "like" buttons every friggin' where without giving a terribly interesting return (Facebook itself is a glorified forum system at best), Microsoft acts like a thug as always, trying to push W8.1 down your throat regardless of your feelings about it, and Apple cares only about big-wallet customers and actively discourages intelligent use of their devices.

Google tracks your stuff but can be set not to, delivers tons of ads (although mostly non-intrusive and not so horribly annoying) when everyone does that already (usually in much more annoying forms than Google ads, like popups), and does some shenanigans in the corners while no one is watching, but everyone else does worse.

But hey, they revolutionized the embedded-device market. There are some sweet Android media centers that are basically mini-PCs and cost around $80 or less, depending on how much connectivity you need.

I mean, open source code may be reviewed by all, but proprietary hardware typically requires expensive equipment and specialized manpower to reverse engineer, which only large companies have access to.
It's called "releasing decent hardware documentation and driver sources", which they all have since they have designed and built the hardware device. And is the same documentation they hand to their software developers tasked to make drivers for the hardware without knowing a whole lot about the hardware itself.
Releasing this data is not hard per-se. Most companies are just afraid of losing the edge or releasing sensitive info that can be copied by competition. Which can or cannot be true depending on each specific case.

Intel is doing this all the time, AMD/ATI is doing this all the time, NVIDIA is starting to do this where they think they can.
ARM does too (also because they are just licensing the design to third parties that actually manufacture the chip).
"only need to set a variable with tune2fs tool"

Gnarly! That was there all this time. Which variable?
That was for the file system check at boot, the linux equivalent of checkdisk, but since ext4 is superior, the check is pretty fast on reasonably sized (30 GB or less) system partitions. See the linked page's paragraph about the -c parameter in the command-line options of tune2fs.

The full command is

Code: Select all

sudo tune2fs -c 1 <partition>
<partition> is /dev/sda1 if you only have one disk and one partition; if you have more, you need to look up its /dev/sd.. name in GParted first.
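If you prefer the terminal to GParted, lsblk (part of util-linux on most distros) lists the partitions too:

Code: Select all

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
# the row whose MOUNTPOINT is "/" is your system partition, e.g. sda1 -> /dev/sda1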
bobafetthotmail

Re: how do I change sudo timeout, and what to? and why?

Post by bobafetthotmail »

It's called "releasing decent hardware documentation and driver sources"
An example of such documentation is the document here http://www.raspberrypi.org/wp-content/u ... herals.pdf
It is a short form of the documentation for the BCM2835 system chip used in the Raspberry Pi, which Broadcom seems to have released into the open after pressure from the foundation, for the sake of helping third parties make peripherals for the newer Raspberry Pi (and selling moar of their BCM2835s now that they are way too crappy to be used in a smartphone).
This post goes in more detail http://www.raspberrypi.org/two-things-y ... ng-to-get/
User avatar
linx255
Level 5
Posts: 668
Joined: Mon Mar 17, 2014 12:43 am

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

It's a matter of risk/benefit assessment. Top security is a text-only interface and completely manual setup of each and every tiny bit of damn thing in the device (top does not mean 100%), then locking down everything else. For servers it is more or less the norm, but most people prefer a compromise...If you are setting up a server that is supposed to manage hundreds of transactions per second, and moving tons of private data and/or cash, then you might need more security than plain Ubuntu.
I am interested in learning how to set up a server that can manage tons of private data & financial transactions. I'm actually using a powerful server as my main PC because I want to eventually turn it into a money-making web server of some kind. The kinds of things I want locked down are basically access to files on my system and for example, I don't want my browser to have the authority to prevent me from right-clicking on an image to save or selecting text to copy. Today I encountered such a web site, and while I could disable scripts, I then could not access the page. The web browser should never restrict the operation of the OS or prevent the user from accessing or reproducing information it has downloaded already. If that information is on my computer and even I can't access it, then I am not ( fully ) in control of, and do not own, my computer.

I think you are going a bit too overboard with this though.
Maybe. My concerns are more about where things are going; they may not seem bad now, but they could get that way. But also I'm trying to learn exactly where the limitations of systems lie, because I want to know what I'm doing and what's possible. These discussions are good for me because I gain an understanding of where Linux & Mint stand in all this.

I run LMDE on both my netbook and my old rig, and there are no issues afaik. It is a bit harder to install new stuff from outside the repositories, but most debs that run on Ubuntu run on it too.
Hm, I'm tempted to try Mate LMDE. I've got a spare partition to test it out.

What are your needs?
Yet to be determined. My needs depend on the limitations of my system. I bought my server with the intent of building some kind of web-based business--as a side project to be tackled at my leisure. Right now I'm just trying to learn what it's capable of and create my own conventions for setting it up. I've written dozens of pages of documentation on how to set it up the way I like and have created a system of rapid OS deployment & configuration and efficient & secure maintenance using boot disks, disk dumping, SSD drives, scripts, customized system files, conventions for directory structures, file names, programming, and even documentation style. My intention is to learn how to make a stable, secure system that never or rarely goes down, can be rapidly restored when it does, and is easily maintained. I don't know where all this is going, but I know I don't want to jump into a business venture that depends on a system I can't depend on and totally control and understand. So my general aim is web-based business; what kind, I don't know yet. Also I may be "going overboard" because, even though I might not need super-duper security for myself, I am in a position where people come to me for technical direction, and I want to make sure I do not give them bad advice. If I know how "super duper" works, then I have a-- not 100%, but robust-- solution for a problem in the event standard out-of-the-box Linux / Mint doesn't cut it.

Chrome without the Google-only additions is Chromium, and from a user's viewpoint it is exactly the same as Chrome
Would you recommend Chromium over Firefox? I am not a fan of Firefox now that, for some strange reason, one day my settings were reset: it set my homepage to Yahoo and enabled the crash reporters, among other things. I couldn't figure out what happened or how. I just deleted the entire .mozilla directory and replaced it with an archived copy, and now it's fine. But I don't know how to prevent that from happening again. I can't make the .mozilla folder or files immutable because Firefox has to make changes to them. I kind of feel breached: somehow they were able to modify my settings files; this is over-reaching and it's wrong. I use few plug-ins: NoScript, Flashblock, Adblock Plus, Ghostery, and HTTPS Everywhere. Maybe one of them was responsible for this breach, but I feel Mozilla / Linux should be responsible, because who in their right mind wants their settings files mucked with?

Main issues in Android come from the hardware manufacturers and the OEMs assembling the device...
Ah, I see.

Pissing off geeks is bad, as it's geeks that contribute and keep the platform alive.
"We are Anonymous" instantly springs to mind. :lol:

That's because the user is a moron
In a way, but a lot of folks, I suppose older generations, who aren't technically inclined, buy this stuff and aren't informed about features. They're not necessarily morons; maybe they just want a smart phone but not as a hobby where they customize it and spend hecka time learning to use everything it does-- granted, it is easy to turn off those features. But I know a lot of folks expect the kind of privacy they had in the 20th century, and a lot of companies aren't in the business of honoring privacy. The reason I stopped using Gmail ( a long time ago ) was that, even though I was willing to pay for ad-free email with really valuable features like aliases, Gmail scans the email contents and generates ads based on that data-- and it never informed me it was going to do this. I'm very technically inclined and spend a lot of time with computers and technology, but there are so many areas of knowledge, and Gmail's "features" just didn't make it to my head because I was too busy to thoroughly research it or read any policies, if that's what they expect us all to do. My point is it snuck up on me, and it wasn't always practical for me to have been aware from the get-go; at the time I just wanted to use Gmail as free e-mail, and I don't think it occurred to very many people that they were sifting through e-mails and storing that sensitive info somewhere, and then not even bothering to enable encryption by default until people complained. Do you remember when Google Street View came out? It wasn't until people got pissed off at them that they blurred out people's faces and license plates. I never forgot that; I remember Street View before it was censored. The fact that they would create such a feature and have no regard for privacy until threatened says something important about their character. And: http://techland.time.com/2011/04/11/ala ... n-germany/ Taking pictures of people picking their noses, or taking a dump, plus they got sued for trespassing. The folks responsible for managing Street View are morons, I submit. They don't care about us. Oh yeah, I'm not a fan of Google Glass's ability to turn ordinary humans into invisibly recording surveillance cyborgs either, for one.

wifi balloons
Funny you mention it; I'm actually near one, and call me ridiculous and paranoid, maybe I am, but I don't like it because I don't know for a fact that's all it's being used for. And even if that's all it's used for now, I don't know what they'll turn it into later. Maybe free-floating drone balloons that navigate over your house and track your movements or EM signals from your devices as part of an agreement with a spy agency. :lol: The sky's the limit, literally. Fool that I may be, I am grounded in the fact that profit motive and government strong-arming are the breeding ground for concealed acts of tyranny. Companies are obligated to do right even if "right" limits profits. So...I say eliminate corporate personhood! :lol:

Wow, blown out of proportion...
Ok, point taken. But again, it doesn't seem like much thought was put into such a powerful project as geo-targeted ads without considering how it might be abused. Google's technology is so powerful. They can deploy a project rapidly with little or no public debate about the impacts. They know well that some of these things can have adverse effects.

Canada being a civilized nation
:lol: Whatever!! There are atrocities and corruption everywhere on earth, and Canada is far from an exception.
shame on US customs for not catching that stuff
The DOJ and DEA seize all kinds of illegal imports and shut down the operations responsible but they don't have the resources to catch it all, yet.

I do think it's the kind of situation where technology exploded faster than the government was ready for, or will ever be able to handle without funding for better technology and changing laws to create a more advanced Orwellian-style surveillance state that nobody wants except the plutocrats...like Goog & MS execs, who attend Bilderberg meetings, where certain folks have gone on record openly talking about manifesting population control tactics ( mass genocide ) among other awful things. That's the biggest reason I don't like them. Anyone who attends those meetings won't get my business if I can help it.

It's called "releasing decent hardware documentation and driver sources
Devil's advocate: sure, the documentation will help us make drivers, but how do we know it doesn't omit things we all need to know, like what the hardware is doing behind our backs on undocumented circuit blocks, or even using documented circuit blocks secretly? If they had a secret agenda they wouldn't tell us and we wouldn't find out without ballsy whistleblowers.

Maybe it's not even a problem right now. But I think it could become one in the near future, if it isn't already. TrustyCon argues it's been a problem for decades.
Last edited by linx255 on Thu Jun 05, 2014 4:30 pm, edited 2 times in total.
- I'm running Mint 18 Mate 64-bit
- 4.15.0-34-generic x86_64
- All my bash scripts begin with #!/bin/bash
DrHu

Re: how do I change sudo timeout, and what to? and why?

Post by DrHu »

Second, it's difficult for me to ascertain how long of a timeout is really appropriate because I don't understand how long is too long, in terms of security.
For your own local user, any timeout you like: you are in complete control
--most exploits that can get your OS are local; remote exploits are few and far between, and therefore of little security concern

I think the point about timed-out access is simply a security mantra, and we don't have to agree with them all, as they tend to be generic in nature so as to apply to most situations
--for example I don't agree with the mantra of changing your password often: I don't, unless I suspect a covert entry into my system
  • I can always use some security tools
  • Bastille - system-hardening scripts
  • AIDE - intrusion detection
  • chkrootkit - rootkit detection
  • SELinux kernel -- mandatory access control (MAC)
  • AppArmor - application sandboxing
  • logview - keep an eye on logs and who has accessed the system
For your terminal commands: after resetting the timeout value, you should relog your session
http://askubuntu.com/questions/14948/ho ... o-time-out
--either with logoff or ctrl + alt + bksp (crashes the X-server desktop), assuming you included that function in your startup services; I think Ubuntu turned it off, after it being a standard feature for umpteen years, because of new-Linux-user confusion or fat-finger problems with the keyboard.
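--you can also refresh or expire the cached credentials by hand while testing, with standard sudo flags

Code: Select all

sudo -v   # refresh the cached timestamp, extending the timeout window
sudo -k   # expire the cached timestamp now; the next sudo asks for a password again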

A quick look gets me this
https://help.ubuntu.com/community/RootSudoTimeout

http://itsfoss.com/change-sudo-password-timeout-ubuntu/
https://scottlinux.com/2012/08/04/chang ... d-timeout/
http://ubuntuhandbook.org/index.php/201 ... inux-mint/
--looks like the same info
  • Does that work?
User avatar
linx255
Level 5
Posts: 668
Joined: Mon Mar 17, 2014 12:43 am

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

Oh yes, I got the timeout to change; I just forgot to sudo. Thanks

---
most exploits that can get your OS are local; remote exploits are few and far between, and therefore of little security concern
Really? It seems like quite a bit of the fuss nowadays is about remote exploits. I don't download any programs from untrusted sources; I'm not sure how I'd get attacked locally unless someone physically accessed my system. As I mentioned, something funny happened the other day: my Firefox configuration files got hosed...altered without my consent or notification-- it changed my homepage to Yahoo, enabled the crash reporter, and some other things. I'm stunned and outraged; I don't know what happened or how to prevent it. I've since replaced my .mozilla folder with a copy from my archive to change everything back to what I had. Perhaps one of my few and supposedly respected plug-ins is malicious; no way to know. Was it a local or remote attack? I've checked for rootkits with multiple checkers and none were found.

However, chkrootkit shows the "suckit" rootkit present in /sbin/init, but I read this is a bug...They should probably fix it, because how would I really know whether or not I'm infected if it's always flagged? Suckit is not found by rkhunter, but rkhunter gives me warnings for "passwd file changes, group file changes, /dev for suspicious file types, hidden files and directories". ( 0 rootkits found out of 292 checked for. )

Which package do you use to install AppArmor? There are several, and the ones named apparmor and apparmor-notify are poorly rated. I'm not sure either AppArmor or SELinux is practical for me, but I'm willing to try them out. One review says AppArmor isn't compatible with LMDE, which I'm considering switching to for security reasons. Now I have to weigh Mint with AppArmor &/ SELinux against LMDE w/ SELinux. SELinux seems like it might be a bit of overkill for me, but I'm not expert enough to know. As stated, I may want to use my server as a server instead of just a home PC, so I'd probably switch over to Debian Stable or Arch, assuming it supports all the packages I need. There are so many factors; the decisions to be made are overwhelming. What's the ideal combination of distro and hardening tools for a "semi-intermediate" Linux user wanting to boldly host a web server from home or office as a novice web-server admin?

Couldn't get "Aide" to work. No configuration file found. I tried using one provided in a tar.gz file that came with the package, but it wouldn't take.

According to bastille-linux.sourceforge.net, "Bastille currently supports the Red Hat (Fedora Core, Enterprise, and Numbered/Classic), SUSE, Debian, Gentoo, and Mandrake distributions, along with HP-UX. It also supports Mac OS X." If I switch to Debian stable then I will try it out.

Couldn't find a "logview" package anywhere. Command not found.
- I'm running Mint 18 Mate 64-bit
- 4.15.0-34-generic x86_64
- All my bash scripts begin with #!/bin/bash
WinterTroubles

Re: how do I change sudo timeout, and what to? and why?

Post by WinterTroubles »

linx255

If you are considering switching to LMDE you might want to read what is said about security updates in this topic I read recently http://forums.linuxmint.com/viewtopic.p ... 9&p=869743
User avatar
linx255
Level 5
Posts: 668
Joined: Mon Mar 17, 2014 12:43 am

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

OK, I've received far more discouragement than encouragement about switching to LMDE. I'm not impressed with it from what I've read. It sounds like SolydXK might be ideal for my needs. It's designed for business, comes with business software, it's based on Debian Stable, and it's a full rolling release... What more could I ask for? I'm sure I could even virtualize Mint if I just needed to use an app not compatible with SolydXK. 8)

Only a few concerns: I'm not a big fan of KDE or XFCE ( though I could probs live with XFCE ), and SolydXK would have to provide the Intel Haswell 4400 graphics driver, just as Mint does. And since rolling releases, while keeping me up to date, can apparently break my system, I'd need some way to ensure my system stays up 24 / 7 / 365 if I'm going to be hosting a business web server. I definitely can't have my OS break on update, as it did for a tester trying out Manjaro, a rolling release based on Arch. I wonder how often that happens and if there's a way to avoid it. That could devastate business for everybody in all kinds of awful ways. :shock: ( For that matter I also need a battery-backup power supply and to pay extra for some kind of super-reliable ISP. :roll: ) Perhaps delaying updates until I've skimmed the web for signs of success would prevent a crash, but then I wouldn't benefit from those potentially critical updates during that time, which may be worse than a crash. With it being Debian Stable I wonder if delaying updates long enough to wait for problems to be fixed would even be a problem. :?:

:idea: Maybe take a disk dump image of the OS just before installing the update. Then if I had to revert, it would only be down 30 minutes tops. :?: That's my solution to everything lately...disk dump! :lol: But even 30 minutes down could cost sales, customers, and reputation. :(
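Something like this, I imagine-- assuming the OS lives on /dev/sda and a backup drive is mounted at /mnt/backup ( both hypothetical ), run from a live session with the OS disk unmounted:

Code: Select all

# image the whole OS disk before updating
sudo dd if=/dev/sda of=/mnt/backup/os-before-update.img bs=4M
# to revert, swap if= and of= ( this overwrites the disk! )
sudo dd if=/mnt/backup/os-before-update.img of=/dev/sda bs=4M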

Thoughts / suggestions appreciated. Thanks everyone! By the way, is "Mr. Green" smiley evil? I can't tell if it's an evil monster with green glowing eyes or just a jolly green face with eyes closed. :mrgreen: :?: :lol:
- I'm running Mint 18 Mate 64-bit
- 4.15.0-34-generic x86_64
- All my bash scripts begin with #!/bin/bash
User avatar
linx255
Level 5
Posts: 668
Joined: Mon Mar 17, 2014 12:43 am

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

By the way, regarding screen-locker attacks: I use i3lock; might it be susceptible to the attacks you mentioned? If I wanted to break through my own screen locker, what would I do? I don't use bluetooth devices due to security concerns. I've set up Mint so any USB drives inserted will not execute anything. If someone could plug in a special keyboard that generates a bazillion keystrokes a second, wouldn't programmers have found a way to ignore all input over a certain strokes-per-second frequency? I depend on i3lock; should I not feel safe with it? I couldn't find any reports of it crashing for any reason.
- I'm running Mint 18 Mate 64-bit
- 4.15.0-34-generic x86_64
- All my bash scripts begin with #!/bin/bash
bobafetthotmail

Re: how do I change sudo timeout, and what to? and why?

Post by bobafetthotmail »

linx255 wrote:I am interested in learning how to set up a server that can manage tons of private data & financial transactions. I'm actually using a powerful server as my main PC because I want to eventually turn it into a money-making web server of some kind.
Then head over to the Debian or Red Hat forums. There you can find people with experience in that field (those are server distros). Mint is a desktop OS, and people here are mostly users, not server admins.

There are also manuals you can read to learn the ropes of linux server setup and administration, which you should read asap if that's what you want to do. People don't like to repeat the basics to every newcomer in most forums.
The kinds of things I want locked down are basically access to files on my system and for example
File access can be controlled with different users. You are User1, and you can set file permissions to deny all access (read/write/execute) to everyone who isn't root or User1. For example, you can make another user called User2, log in as it to run other programs (say a webserver), and leave them running while you switch back to your main user.
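A minimal sketch of that setup (the user names and paths here are hypothetical):

Code: Select all

sudo adduser user2                              # second account, e.g. for running a webserver
sudo chown -R user1:user1 /home/user1/private   # make sure user1 owns its files
chmod -R go-rwx /home/user1/private             # nobody but user1 (and root) can read/write/execute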

Pros set up a hypervisor. It's a linux OS whose main role is just to run more virtual machines on the same hardware.
The virtual machines are completely unaware of their status and of the presence of anything else the hypervisor isn't showing them, and can be terminated if needed (for upgrades, or if something fishy is going on) without crashing the server or disrupting the service (as the other virtual machines doing the same job remain online).
I linked a couple of systems like that above, KVM and another. Xen is a pain in the backside to set up, and unless you need to run Windows systems it's not really a good idea imho.

Hypervisors allow you to run different virtual machines at the same time, or dozens of clones of the same machine, while also running your own personal virtual machine (this last thing is not a good idea once the server starts doing real work).
Consider that a Debian server virtual machine (lacking any kind of user interface) needs very little RAM for itself (depending on what you ask it to do, anyway), so you can have dozens of them running at the same time on a server with 32 GB or more of RAM.
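If you want to experiment, a quick sketch for checking whether your CPU can do KVM on an Ubuntu/Mint base (cpu-checker is the Ubuntu package name):

Code: Select all

egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no hardware virtualization support
sudo apt-get install cpu-checker
sudo kvm-ok                          # reports whether KVM acceleration can be used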
I don't want my browser to have the authority to prevent me from right-clicking on an image to save or selecting text to copy.
Use the right addons. https://addons.mozilla.org/en-US/firefo ... httoclick/

Or save the page and extract text and links manually with a text editor.
The web browser should never restrict the operation of the OS or prevent the user from accessing or reproducing information it has downloaded already.
The content is usually copyrighted by someone. If they decide you should not copy it, it's their right to do so. You can rationalize it how you want, but you are technically going against the will of the author/site/whatever, and the browser is technically right to obey their will on this matter.
one day my settings were reset, it set my homepage to Yahoo, and enabled the crash reporters, among other things.
These are the standard settings in Ubuntu's Firefox afaik. The culprit is likely the xul-ext-ubufox package, which somehow got installed/updated/reinstalled and failed to realize you had your own settings there. http://packages.ubuntu.com/trusty/xul-ext-ubufox

Being a package installed by Synaptic or whatever (which run as root), it has full rights to erase and replace whatever it likes.

I've always nuked that package on new Mint/Ubuntu installs (just as I nuked the virtualbox guest packages on a real PC) to get a Firefox that looks normal, and because I dislike stuff from Ubuntu in general.
"We are Anonymous" instantly springs to mind. :lol:
Anonymous is the cybernetic equivalent of a street gang. They can deface some websites and do some denial-of-service attacks, but it's all more or less equivalent to kids spray-painting obscenities on the wall of a company building. Google has too much infrastructure in place for such dumb attacks to seriously affect it. They can cause local disruption at best, and hope that Google does not find a way to track them down and spank them HARD.
See this http://xkcd.com/932/

I was talking about developers. Developing quality stuff is expensive; they can't do without volunteers.
In a way, but a lot of folks, I suppose older generations, who aren't technically inclined, buy this stuff and aren't informed about features.
Intelligence is the ability to ask the right question of the right person. Most people who don't know technology very well rely on someone-- be it a friend, offspring, or the guy at the PC shop-- to help them reach their abstract goals ("safe navigation", "good performance".. and so on).

Users who just assume it's fine, that they know enough, and who use stuff like it's magic are morons, as simple as that.
The fact that they would create such a feature and have no regard for privacy until threatened says something important about their character.
While I admit I'm probably in the minority, I've never really seen that as a massive privacy violation.
That's a single image aimed like crap, where my face is maybe not even easy to see, with no indication of who I am or what I'm doing there, and it is going to become horribly outdated within months.
In Italy it's fine law-wise, as it's treated the same as a photo/video of a very large subject and/or a multitude of people, which is allowed without written consent.

It's not like Facebook that (can) auto-tag your face in all photos loaded into its database EVAR, and you have to waste 15 mins figuring out how to disable that and lock down your profile.
Maybe free-floating drone balloons that navigate over your house and track your movements or EM signals from your devices as part of an agreement with a spy agency.
In their scheme for world domination, wifi balloons are just wifi balloons.
All the tracking and spying is done by wearable devices like Google Glass and Google watches and whatever, connected to something else through the above-mentioned wifi balloons.
...like Goog & MS execs, who attend Bilderberg meetings, where certain folks have gone on record openly talking about manifesting population-control tactics (mass genocide), among other awful things.
If I had a big company I would want to know what the elite is thinking too, to be able to maneuver without pissing them off, or to prepare for what they will do. I really hope you heard that wrong, as population control can be done in much quicker and more efficient ways without mass genocides. Besides, anyone talking about those things is either venting or seriously lacking lucidity.
If they had a secret agenda they wouldn't tell us and we wouldn't find out without ballsy whistleblowers.
The cost to make hardware with such systems would easily give away any such measure. We are talking about much more R&D, and higher chances of unforeseen expenses, which on average multiply the price of the device by an order of magnitude or two.

Besides, hardware manufacturers want to make cheap stuff for obvious reasons, and won't adopt such measures unless mandated by a government of a nation they cannot simply stop selling stuff to.
More often than not, hardware manufacturers pay lip service to these kinds of regulations, which is why the hardware systems that aren't sold to governments (say, most of the hardware encryption systems used to protect media and copyright) are so unbelievably crappy.
I use i3lock; might it be susceptible to the attacks you mentioned?
From what I see googling, it's tiny and barebones. I'd say it's OK. The less code there is, the fewer bugs it can realistically have.
With it being Debian Stable I wonder if delaying updates long enough to wait for problems to be fixed would even be a problem.
Debian Stable is just that. Stable. Even security updates are pushed only when stable.
and SolydXK would have to provide Intel Haswell 4400 graphics driver, just as Mint does
Drivers are inside the Linux kernel. Debian Stable has a backports repository, where they offer newer stuff backported to Stable (including the kernel). Being newer, it can have more bugs, but if you need the functionality you don't have a lot of choice.
Just add that repo to your SolydX sources and pull down the packages you need (then disable the repo again).
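Something like this, assuming a Wheezy-based system (substitute whatever release SolydX actually tracks):

Code: Select all

# add the backports repo as a new sources file
echo 'deb http://http.debian.net/debian wheezy-backports main' | \
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update
# backports are never pulled in automatically; you must ask for them with -t
sudo apt-get -t wheezy-backports install linux-image-amd64

Then comment out or delete that sources file again, as said, to keep things tidy.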
User avatar
linx255
Level 5
Posts: 668
Joined: Mon Mar 17, 2014 12:43 am

Re: how do I change sudo timeout, and what to? and why?

Post by linx255 »

Thank you much for the detailed responses. You are very helpful!
Then head over to Debian or RedHat forums. There you can find people with experience in that field (those are server distros). Mint is a desktop OS, and people here are mostly users, not server admins...Debian Stable is just that. Stable. Even security updates are pushed only when stable.
I've considered eventually getting RedHat certified, but I won't do RedHat yet because I want to see if I can teach myself to do everything I need on another platform for free. As for Debian Stable, the reason I bring up 'updates' is that I read somewhere Debian Stable has been known to crash on some updates, which of course I never want. I mean, if that's something every server admin has to live with, then fine, but ideally I want a reliability edge without having to go RedHat unless necessary. I used to use RedHat at work and it crashed plenty, but that might have been just standard (or substandard) IT maintenance. I will definitely check out the Debian forums next chance I get.

In the meantime, I feel I need to learn how to utilize the full power of a desktop environment until business necessitates otherwise. I'm at an early stage of research, so I'm not fixed on any specific outcome yet; I'm just trying to learn as much as possible about everything, make decisions day by day, and gradually, loosely go in a direction, succeed, fail, and adapt, before succeeding big.

People don't like to repeat the basics to every newcomer in most forums.
There is no expectation or attachment to outcome when I post. It's whatever, so I'm not worried if I don't get what I need out of it. I'm here to contribute as well. I sometimes ask questions that might be basic, but the information is buried in vast search results and I don't know what criteria to enter, or where to find what criteria to enter. Even if I find articles pertaining to the right criteria, they don't necessarily answer all the essential questions. And if it's stated elsewhere, the observational lens of context often differs from the author's to mine... In ages past I was into hardware engineering and never was concerned with Linux until recently, as I've been developing my own ideas instead of working for some jerk company :lol: ; I really don't know where to start because there are so many facets involved and my background is foreign to this. Having aggregated, recent information with fresh or additional context from a source like this can make the difference in finding the answers I seek. But it's understood; we all have priorities. ;)

file access can be controlled with different users
The technical ability is there, yes; I'm very aware. I use the immutable attribute to prevent changes to my files all the time. But other methods of file protection are sometimes necessary...
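(For the record, that immutable attribute is the ext-filesystem 'i' flag; the file name here is just an example:)

Code: Select all

sudo chattr +i notes.txt   # now even root can't modify or delete it
lsattr notes.txt           # the 'i' in the output shows the flag is set
sudo chattr -i notes.txt   # clear the flag when changes are needed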
The kinds of things I want locked down are basically access to files on my system
I need to know how to allow a program to access its own files while no other program can. I was thinking I could achieve this with SELinux in a way, but I don't know if it's technically feasible or worth the time and trouble for a small-to-mid or scaling business. I have yet to find an article on SELinux, or any other solution, specifically or generally answering this question, because there are too many unknown variables to state the question appropriately...
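The closest I've come so far is a crude approximation with plain Unix users and permissions: run the program as its own dedicated user and make its file readable by that user alone. A sketch (every name and path here is made up):

Code: Select all

sudo useradd --system --shell /usr/sbin/nologin appuser
sudo chown appuser:appuser /srv/app/private.db
sudo chmod 600 /srv/app/private.db     # only appuser can read/write it
sudo -u appuser /srv/app/myapp         # myapp sees the file...
cat /srv/app/private.db                # ...anything run as another user is denied

It doesn't stop root, and it's per-user rather than per-program, which I gather is exactly the gap SELinux-style mandatory access control is supposed to fill.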
Pros set up a hypervisor.
I haven't gotten around to researching all that yet. I'm engineering not only a business system but a system for systemizing business systems :lol: to create the best possible technological foundation without giving up control to organizations with conflicting interests. I've long envisioned mastering virtualization tasking on servers, though I don't know if I need a hypervisor just yet. At the moment I'm taking my time discovering Linux's abilities and limits without virtualization, because with the way I'm setting up my machines I might not need to muck with it; I don't expect my projects to reach that kind of complexity just yet. I do use VMs already to run OSes and keep them from self-corrupting by managing images automatically, and I manage images of hard drives with disk dumps, backup drives, and data-management scripts. Maybe a hypervisor is what I need to research now. Perhaps it's time to jail Firefox and whip it into compliance with my administrative prowess. :twisted:
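(The disk dumps I mentioned are nothing fancy, roughly like this; /dev/sdX and the paths are placeholders, so triple-check the device name before running anything of the sort:)

Code: Select all

# image the whole disk to a file on the backup drive
sudo dd if=/dev/sdX of=/backup/disk-$(date +%F).img bs=4M
# restore it later by swapping if= and of=
sudo dd if=/backup/disk-2014-06-01.img of=/dev/sdX bs=4M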

your own personal virtual machine (though this last part is not a good idea once the server starts doing real work)
If it's a performance issue, I'm sure I can run at least three lightweight VMs plus my own without a problem, but only my actual business will determine my ultimate solutions and performance, and that's a ways off. Being this early in my R&D, I don't know what type of business I will create or at what point it will scale to exceed the computing capacity of my machine, so the main thing for now is that I want to learn to use what I have to its fullest, because I probs won't be able to afford another, more powerful machine for a while.

[right-to-click] add-on...Those are the standard settings on Ubuntu Firefox afaik. The likely culprit is the xul-ext-ubufox package...
...Yes, but an add-on does not change the overly tolerant Firefox source code, it only sidesteps it, and then I'm left wondering if my add-on is trustworthy. I've installed only 5 add-ons, all from well-known, "trusted" devs, and I believe one of these add-ons modified my prefs.js file to change my homepage to Yahoo and enable the crash reporter. One day I just opened my browser, it opened up to the Electronic Frontier Foundation (EFF) homepage, as if an update had occurred with my HTTPS Everywhere add-on, and that is when all my Firefox settings changed. That was a very visible boundary violation, and though EFF is highly regarded by not a few, I'm hecka suspicious here.

I haven't gotten around to complaining to them about this yet, but they can be assured I will if no one has already, because I've pretty much isolated their add-on as the cause; their page automatically loaded at the exact time my settings changed. I have no rootkits and no reason to believe anyone else was responsible... xul-ext-ubufox is not installed and I don't believe it ever was. Also, it set my homepage to Yahoo, which is not the default homepage of Firefox (Google is), unless Firefox derives the homepage value from some OS policy that changed; but anyway, the xul-ext-ubufox properties in Synaptic Package Manager claim to "Set homepage to Ubuntu Start Page". I don't know how it would have gotten installed, updated, or re-installed. I haven't removed, added, or updated anything, manually or automatically, except for running Update Manager when prompted. I'm not doing anything fancy with my machine; I have been spending a great deal of time just trying to get it to work normally with as little interference as possible.
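One mitigation I've since run across: Firefox re-reads a user.js file in the profile folder at every startup and re-applies the prefs listed there on top of prefs.js, so even if something resets my settings, they snap back at the next launch. A minimal sketch (the profile folder name is a placeholder and the pref values are just examples):

Code: Select all

# append pinned prefs to the user.js in the Firefox profile
cat >> ~/.mozilla/firefox/xxxxxxxx.default/user.js <<'EOF'
user_pref("browser.startup.homepage", "about:blank");
user_pref("browser.startup.page", 1);
EOF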

I hate having to add on a feature that ought to be built in, especially if the add-on can muck with my own personal settings files. In fact, I'm such a serious opponent of boundary-violating functionality in browsers that I am building my own "browser" with wget and analytical scripts, so I can get bull-crap-free, raw information from pages only I pre-approve, and only upload information or change browser settings on deliberate command; and to heck with anyone trying to tell me what I can and can't do with what's supposed to be MY machine! :lol: but for real!
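The core of it is nothing exotic so far, roughly along these lines (the URL and file names are placeholders):

Code: Select all

# fetch a pre-approved page: no scripts executed, nothing sent but the request
wget -q -O page.html "http://example.com/article"
# pull out every src/href target for review
grep -oE '(src|href)="[^"]+"' page.html | cut -d'"' -f2 > links.txt
# crude tag stripping to get at the raw text
sed 's/<[^>]*>//g' page.html > page.txt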

If they decide you should not copy it, it's their right to do so.
Mm, "right" is pushing it. They have the power to do so for certain, should the browser devs bestow it upon them. I am not trying to illegally reproduce any copyrighted work, just trying to access what's already on my system in a different form, which is not infringement. If I view a copyrighted image in "my" browser then the publisher of that content has already provided me with a copy in the form of an image on my screen and data in my RAM, and the software is owning my system in a way that does not respect my need for system ownership, and this practice of restricting access to a particular form of any single work that is made available is not specifically supported by any law I know of, and is unfairly at odds with the interests of system owners.

If I can't control data that ends up on my system, then I don't own my system, at least not to its fullest capacity; and I always thought open-source software was supposed to favor, emphasize, and maximize ownership ability, and complement ownership rights. It should not be the job of a browser maker to enforce copyright law, or to favor the interests of copyright holders and media businesses over the interests of machine owners. (And it's not that copyright holders' needs aren't important either.)

If a copyright holder posts a flyer with a copyrighted image on a public street-light pole, it is not copyright infringement for a passerby to take the flyer, nor would it be considered "theft", since it's posted in public. You can access the image by looking at it; you can fold it up, put it in your pocket, and take it out later without having to go back to that pole to look at it. This is no different; people should have the right to access the poster whether it's hanging on the street light or whether they pull it out of their pocket. If it's on my screen and in my RAM, I should be able to access it both ways, and neither violates copyright law.

Retaining the published information in an accessible form is not reproducing the material. All they are doing is controlling which form the copyrighted image is accessible in at any given time, and they have no authority to prosecute anyone for this AFAIK. For example, it is not illegal for me to bypass the right-click restriction and save the image so that it is not duplicated, but accessible outside of the "browser-web-page-exclusive" form.

There is no law I know of that gives any copyright holder the legal right to enforce such control, or to prevent others from accessing the published information in whatever forms result from its publishing, nor any law that gives a software maker reason to respect such a ridiculous, over-reaching function; but somehow it turned out that copyright holders and browser makers got together and did this out of privilege, to my detriment, which begs the question: why should any open-source developer build a browser that gives copyright owners control of someone else's system just because they're worried about potential damages from IP theft? Why favor a particular market or industry and not the owners of systems? Where is the voice of the owner in the browser-development world? Should we not all have equal favor?

If the maker of a browser respects the wills of copyright owners more than system owners', they've got some screws loose, because that's a major conflict of interest! Of course, I'm the one to blame for voluntarily using free, open-source software, which in this case isn't serving my interests whatsoever. :lol:

The issue is really no different from the one addressed by the Audio Home Recording Act of 1992, which gives the viewer the right to retain published (broadcast) data on his or her own medium, using technology marketed specifically for the purpose of duplication. Take TV, for example. The publisher allows us to access content in two ways: light rays coming off a CRT monitor, and analog/digital data running through its circuits, which we can tap into via RCA or other ports. Prior to this legislation, the argument was made (via lawsuits) that copyright law should prevent duplication of the broadcast information to prevent, as Wikipedia puts it, "widespread copyright infringement...[to] curb lost sales". One of the things this law achieves is preventing copyright holders from suing the manufacturers of recording equipment in the name of copyright-infringement prevention (which gets into an entirely different subject: to what degree the law is really obligated to protect the royalty/media markets; i.e., should our tax dollars pay ICE agents to arrest copyright infringers while human/sex traffickers are obviously a much worse problem? Who said profit was more important than people? The logic here is astoundingly absent!), but it effectively prevents corporations from controlling what you do with your own equipment, such as feeding the data stream from the CRT to a recording device sold for that very purpose. Now, there are many uses for computers, and it should come as no surprise that duplication of data has been one of their prime built-in functions since they were first commercially available. People buy computers for this very use; in some cases exclusively.

Now, it's my opinion that information should not be held hostage by law, and neither should a person's receiving and recording equipment. The idea that laws should be made to create and sustain markets and industries at the expense of the right to manufacture and use a device as intended is overwhelmingly unfair (biased in favor of the interests of some but not others; particularly media corporations over ordinary people). Would it be nice to get rich off royalties? Sure. But we shouldn't trump the rights of millions to use equipment intrinsically designed for reproducing data just to make every user personally responsible for curbing the lost sales of someone else's business. (In fact, if they are to use every consumer to achieve their business objectives, those consumers should be paid!) By such a standard, every other market and industry should enjoy the same level of support and protection. Just because it turns out to be economically feasible to enforce copyright law doesn't mean it should automatically be done; who said the profits of copyright holders are more important than the profits of other types of businesses? We're essentially subsidizing the media industry. Why should tax dollars go to arbitrarily propping up one industry and not another? The real answer: media giants have extreme lobbying power. It's all about special interests, to be sure. Am I against media companies making money? No, but their ability to make money is not my problem or concern. Am I for maintaining control over my own device, and using it for what it was marketed for? Yeah, and so should the browser devs be. I should not have my functionality restricted to help prevent someone else's potential sales losses; I will not be a complicit cog in the media industry just because they think they're more entitled to protection than everyone else. This kind of control over a person's machine is just fanatical when you really think about it! :lol:

Anonymous are the cybernetic equivalent of a street gang.
The majority of multi-national corporations are the equivalent of something way worse, and not all of Anonymous participate in cyberattacks. They are not all the same, and whether their actions are legal or illegal, wrong or right, I'm not trying to say here; but there is real debate over who the real victims are in the issues they raise. The groups and individuals they accuse of human-rights violations, I observe, often lack understanding of technology and law (and there are so many areas, who can know them all?). The Guy Fawkes mask presentation they employ comes out of a geek/hacker/activist culture not necessarily well understood or received by mainstream society (from my observation), which may tend to assume Anonymous is an organization with a single doctrine that all its members follow, which isn't at all the case. Really it's just a communication breakdown among all the parties, and a socio-cultural integration failure on a global scale. :lol: We're all just interdimensional beings striving and failing to become one healthy, happy planet, but hopefully learning before self-destructing. :lol: Like a train with 1 conductor trying to speed the train toward a collapsed bridge while the other 99 passengers scramble to figure out how to convince a stubborn, arrogant conductor not only that the problem exists but that there are good solutions. I've worked with brilliant engineers, programmers, philosophers, freethinkers, and journalists from all over the world, from all walks of life. There are infinite ways to miscommunicate and misunderstand things across all the cultures, sub-cultures, and individual mindsets, especially when human-rights violations occur with little to no other way for victims to raise their voice, let alone communicate effectively with anyone who has the power to relieve them and improve the balance of the collective paradigm for the benefit of all. :lol:

True that; I'm not saying they have superior computing power. It seems to me their power is in the grassroots movement itself and the way they use technology to communicate with the masses. It is what it is, nothing more or less. I don't want to throw a label on anything that filters out that thing's infinitely faceted defining characteristics, and I'm by no means an expert on this subject.

I was talking of developers.
Oh yeah, I know what you said and meant; I just found humor in the rough parallel of Anon to your comment about pissing off geeks, because Anon is just so INTENSE. :lol:

Intelligence is the ability to ask the right question of the right person.
Intelligence is the wisdom to know when we don't know the right question or the right person, and the ability to learn from the failures resulting from that ignorance. Reminds me of an interesting TED talk I recently saw about "quality ignorance"... "Thoroughly conscious ignorance is the prelude to every real advance in science." --James Clerk Maxwell

Users who just assume it's fine, that they know enough, and who use stuff like it's magic are morons, as simple as that.
:lol: OK.

All I'm saying is it's not so black and white. Technically inclined folks could be considered morons for not being as familiar with other cultures, objects, or aspects of life as they are with their PC. And by the way, I didn't intend to make a blanket statement about "older generations" or lump them all into one category. I've just observed that a lot of folks who didn't grow up with this stuff may have had a very different kind of life, like agricultural work or caring for an aging parent, and then found themselves in need of a smartphone; not having followed the explosion of technology, they can't always be expected to assume that a smartphone would be susceptible to the countless vulnerabilities that no carrier, manufacturer, retailer, or product manual would care to mention. That's all I'm saying. If they got attacked and kept using the device like magic, then I would say they'd be fools. :lol:

I've never really seen that as a massive privacy violation.
It is a struggle between free speech and privacy. Interesting what's been going on in Europe / with the EU lately on that. Early in the implementation of Street View it wasn't updated as regularly, and the shots were sometimes clear enough to identify someone. Granted, I don't believe in prohibiting the taking or publishing of public photography; I just think what they were doing was a bit extreme and unethical, because nearly everyone, and their car, anywhere near mainstream civilization could potentially be identified with extreme ease... I think there should be a basic level of privacy that people enjoy, and it should not go away just so a corporation can profit from a service, regardless of any perceived benefit. I'm not saying I have a balanced solution to this either, however.

It's not like Facebook that (can) auto-tag your face in all photos loaded into its database EVAR...
I'm def not a fan of that service / corporation either, and I respect them least of all, probs.

wifi balloons are just wifi balloons
I don't know; balloons are large and could contain special equipment not available in smaller devices. You're probably right, though; it's just a wifi balloon at this point. I'm just skeptical of everything, because I know how an organization can betray trust in the most unsuspecting ways and keep it all out of the public mind, which makes choosing trustworthy technology a very difficult task.

If I had a big company...
Execs are recruited and initiated into that stuff, and they both share interests amongst members and have their own interests. I can't say I know the exact context, but there is a vid of an MS exec talking about the problem of overpopulation, then apparently mentioning 'vaccines' as one of the solutions. I'd challenge anyone to refute the obvious implications. For a long time, not a few folks have been calling vaccines complete fraud; I for one, because it's no secret they're all about money for big pharma, not health, and they always made me markedly sick within a day of injection, and after I stopped getting them I never got sick again. My immune system strengthened all on its own just by breathing, drinking water, eating right, sleeping right, exercising, and moving around. Of course, all the chemicals they're dumping into the air and water worldwide is another thing I think they're behind, because I am a career scientist and I know that these chemicals (which are found in water and soil samples analyzed in labs) are harmful to humans in even small amounts. Recent studies of vaccine contents reveal chemicals known to cause brain damage and cancer. And that's a fact, and also just a scratch on the surface.

They want mandatory vaccinations, which they will implement by convincing the public of a false health crisis, which of course they already have a well-documented history of doing, and are continuing to succeed in doing. If I understand right, the idea is that vaccines can be delivered to trusting masses while minimizing suspicion, collateral damage, and ultimately effective opposition. The institution of modern medicine, whose reputation is laughably, rapidly declining :lol: , is ignoring the undeniable controversy about the harmful ingredients found in vaccines and pushing them anyway, and hard. Some vaccines have not even been demonstrated to prevent any disease, and a rapidly growing number of patients and practitioners are discouraging them, for truly thought-provoking reasons, like that they are making some people really sick. A lot of it depends on one's existing physical and mental health, of course.

Besides, anyone talking about those things is either venting or seriously lacking lucidity.
Well, I don't know if you're referring to me or them, but I am not angry, insane, or an idiot, and the world is definitely run by insane people who very boldly want to reduce the world population to 500 million one way or another. This intention is inscribed in granite on the 'Georgia Guidestones' in Elbert County, Georgia, USA: "Maintain humanity under 500,000,000 in perpetual balance with nature." Admittedly, I have not visited the site to verify, but it's hard to refute that others have.

The cost to make hardware with such systems would easily give away any such measure...won't adopt such measures unless mandated
An organization itself has to manage finances efficiently to ensure customer and investor satisfaction, survival, competitive edge, and so forth, but money for hidden agendas is a non-issue, since the owners of multi-national corps have trillions to play with. Individual owners and investors outside a company have all of it and more, and can get or make even more from any number of sources. It's easy to conceal money, cook books, please investors and customers, present an incomplete picture of the company and its products, get away with corporate crime, bribe, and even comply with ever-increasing mandates, which have recently been revealed to be far more common than the general public, and even many levels of government, believed. Some mandates are pretty far-reaching, and yet the tech giants' compliance with them is no mere lip service, even if the company applies minimal effort. It's all about who owns and influences the companies, and how organizations, departments, projects, and information are structured and controlled at every level of the hierarchy, along every axis. Privileged, special-interest company owners may be so wealthy and influential that they could develop anything secretly, from the outside, in a sort of shadow, mob-style development organization, and inject it into their company in a controlled way for undetected delivery. I can't point to a concrete example in electronics, but I have to wonder.

Big banksters have a stake in electronics, and although banks are a different monster, they can achieve astounding tyranny through masterful elusiveness. Take dual tracking, for example, one of dozens of economically destructive schemes, in which the bank is structured so that the foreclosure department is blind to the short-sale department, so that foreclosure proceeds even if a valid contract to sell the property has been landed. It's hecka illegal, and the DOJ and Federal Reserve have fined the banks billions to be paid back to homeowners (plus private lawsuits), though of course that only comes out to a few thousand bucks for most, if anything, and the penalty to the banks is literally, relatively, pennies; it's just business as usual for them. That's why home ownership is at historic lows in the USA today. There is not a thing the vast majority of the victims can do about this... The banks were too sneaky, crafty, rich, and powerful. They can get away with a lot more than your friendly bank representative would tell you. Of course, when they're stealing your house you don't get to speak to a rep, and they become faceless and impossible to work with. Laws passed since have demanded a single, responsive caseworker for each case, but they still break the law, pay the fines, and for some reason people still bank with them; and we let the government bail them out, when if a citizen were to hijack your home, occupy it, and force you out, they would be imprisoned for a long time. :lol:

Drivers are inside the Linux kernel...
I did not realize that. Good to know! Well, I have about a lifetime of research to do now. :lol: I appreciate your time. This is most helpful, again!
- I'm running Mint 18 Mate 64-bit
- 4.15.0-34-generic x86_64
- All my bash scripts begin with #!/bin/bash