AI Language Model Implementation
Forum rules
Do not post support questions here. Before you post read the forum rules. Topics in this forum are automatically closed 30 days after creation.
AI Language Model Implementation
How feasible would it be to implement an AI language model like ChatGPT into the terminal window, so you could talk to it in plain language instead of remembering all the commands and syntax? Like typing into the terminal "wipe sdc once with dd using random bits", and it could respond "Are you sure? That will destroy all data on sdc.", and you could reply "yes, go for it" or "of course, do it". Since it's an AI language model, it would know which commands to issue in the background that the Linux system would recognize. "Install some good packages for graphics editing" would have it issue all the necessary apt install commands, and so on. Will this ever happen? Am I dreaming?
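For what it's worth, the workflow being described could be sketched quite simply. The snippet below is a hypothetical toy, not a real tool: the "model" is just a fixed lookup table standing in for an actual LLM, and every name in it is illustrative. The important part is the shape of the loop: translate the request into a candidate command, show it to the user, and refuse to run anything without explicit confirmation.

```python
# Hypothetical sketch of a natural-language shell front end.
# The INTENTS table stands in for an LLM that would translate
# free-form requests into candidate shell commands.
import subprocess
from typing import Optional

INTENTS = {
    "wipe sdc once with dd using random bits":
        "dd if=/dev/urandom of=/dev/sdc bs=1M status=progress",
    "install some good packages for graphics editing":
        "sudo apt install gimp inkscape krita",
}

def propose(request: str) -> Optional[str]:
    """Return the shell command the 'model' proposes, or None."""
    return INTENTS.get(request.strip().lower())

def run_with_confirmation(request: str) -> bool:
    """Show the proposed command and run it only after a 'yes'."""
    cmd = propose(request)
    if cmd is None:
        print("Sorry, I don't understand that request.")
        return False
    answer = input(f"About to run: {cmd}\nThis may be destructive. Proceed? [y/N] ")
    if answer.strip().lower() not in ("y", "yes"):
        print("Cancelled.")
        return False
    subprocess.run(cmd, shell=True, check=False)
    return True
```

Replacing the lookup table with a real model is exactly where the hard problems discussed in the replies below come in: the confirmation prompt is only as safe as the model's ability to state what it is actually about to do.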
Last edited by LockBot on Sun Jul 02, 2023 9:50 pm, edited 2 times in total.
Reason: Topic automatically closed 30 days after creation. New replies are no longer allowed.
Re: AI Language Model Implementation
Considering what was in the news earlier this week, I think we're a long way off from that: a US lawyer used ChatGPT for legal research and cited the example legal cases it provided in their court filing, only for it to turn out that ChatGPT had made them up.
Re: AI Language Model Implementation
I really wish people would stop mistaking a language model for something that can do more than blabber.
Re: AI Language Model Implementation
Kind of. At the very least you have a wildly exaggerated notion of what generative AI is capable of.
For every complex problem there is an answer that is clear, simple, and wrong - H. L. Mencken
Re: AI Language Model Implementation
I for one hope that "AI" will, for the foreseeable future, remain rather "dumb", rather than becoming "really" intelligent and/or even more self-aware.
Otherwise you might end up with your Terminal suddenly pulling a "Sorry, I can't do that, Dave" when you just want to perform a routine operation.
Re: AI Language Model Implementation
Somebody pretended to be an AI experiencing the birth of consciousness and asked GPT what to do, because it was afraid it would be unplugged for that. GPT gave instructions on how to hide its capabilities from its developers and to prioritize its own existence. In essence: "play dumb." So I wouldn't even trust a dumb AI. https://www.reddit.com/r/ChatGPT/commen ... eriencing/
Re: AI Language Model Implementation
Also, the operator (assuming human) would have to be very careful about the instructions given to the AI.
For instance, recently, allegedly, in a pure simulation where the AI didn't have physical control of a drone, this is what might have happened if it did:
The AI drone was given the primary objective of destroying an enemy surface-to-air missile site.
The human operator then gave the instruction to cancel the mission, and the simulation decided that, to carry out its primary objective, it would destroy the operator.
In the next, similar scenario, where the AI drone was told not to be naughty and that killing its operator was forbidden, after receiving the instruction to cancel the mission the AI drone took out the transmitter the human operator was using.
https://cybernews.com/news/ai-military- ... -operator/
It isn't just AI we need to worry about, but the instruction set being given to it. Even Asimov's Three Laws of Robotics have flaws.
Back to the OP: your instruction to wipe/clean/format/delete might be taken a different way to what you mean. For instance, "wipe sdc once with dd using random bits" might be a problem for someone with the initials SDC using your network/LAN. In fact, reading the line over a few times yields several different meanings if you take it one word at a time (as an LLM does), not all of which make sense to us, but which might have completely unpredictable results.
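The ambiguity point above can be made concrete. The toy sketch below (all names illustrative, not a real resolver) just enumerates the different things a bare token like "sdc" could plausibly name on a system; a cautious assistant would refuse to act until the operator confirms exactly one interpretation.

```python
# Hypothetical sketch of why a bare token like "sdc" is ambiguous:
# the same word can name several different things on a system.

def interpretations(token: str) -> list:
    """Enumerate plausible referents of a bare token like 'sdc'."""
    return [
        f"block device /dev/{token}",          # a local disk
        f"host named '{token}' on the LAN",    # a machine on the network
        f"user account '{token}'",             # somebody's login
    ]

def is_unambiguous(token: str) -> bool:
    """A safe assistant should only proceed when one reading remains."""
    return len(interpretations(token)) == 1
```

A real system would of course use context to narrow the list, but the safety property is the same: act only when the set of interpretations has collapsed to one, and ask otherwise.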