
Thread: Should we be scared of artificial intelligence?

#11

I don't think so. I think it's silly to be scared of something just because you don't understand it. I will never be afraid of technology, only of the people who use technology for terrible deeds. Capitalists who will abuse AI or use it to encroach upon human rights are my biggest fear WRT AI. Technology and artificial intelligence have so much potential to help humanity, but as most of the power in the world is held by capitalists, we will never see those benefits -- only the disadvantages.

#12

I think we should be in the future; right now, AIs are still learning from our reCAPTCHAs. They will replace a large number of unskilled workers, but in the service industry I'd say people are more comfortable dealing with people than with machines. The day an AI can pass a Turing test is when we should be scared of AI; the day an AI intentionally fails a Turing test is when we need to be very afraid.

#13
Serpent Rider

I suggest anyone interested in the question of AI read up on Nick Land. He's a British philosopher who has been knee-deep in futuristic literature for quite some time. Here's a good quote from him on just how little we know about what its effects will be:

    "Scientific intelligence is already massively artificial. Even before AI arrives in the lab it arrives itself (by way of artificial life). Where formalist AI is incremental and progressive, caged in the pre-specified databases and processing routines of expert systems, connectionist or antiformalist AI is explosive and opportunistic: engineering time. It breaks out nonlocally across intelligenic networks that are technical but no longer technological, since they elude both theory dependency and behavioural predictability. No one knows what to expect."

He has a rather esoteric writing style, but it really opens your eyes once you can understand it. Happy reading.

#14
PabstBlueRibbon

Well... I think it depends on your definition of "AI." There are many versions of intelligence. I'd be afraid of an AI that can independently solve novel problems in a "most efficient manner," because once you've got a hammer, everything looks like a nail. You can apply that sort of problem solving to the machines in The Matrix, if you reframe the movie to say the machines were given the problem of "Please stop all humans from having wars." The hammer solution is "Put all humans in isolated life-support boxes; physically, they can no longer have wars because they can no longer contact each other. Problem solved."

The idealistic Gene Roddenberry Star Trek AI's "most efficient manner" solution: "War is caused by humans wanting things. All things humans desire can be made of assembled atoms, so develop technology to generate any and all matter from any and all available atoms." Thus you have your replicators, and you've likely tanked all the world's economies at once, because once you can generate hot Earl Grey in a porcelain cup at a voice command, you no longer need ceramics, tea farms, arable land, clean drinking water, heat, or any of the people involved in the logistics of putting those things together for you to have a cup of tea. Then you end up with very bored humans who start having theological issues, and the solution is: go to space. Thus, the entire Star Trek universe, because Earth is boring.

The version that people are afraid of is the "No humans can have wars if there are no humans" scenario, which is incredibly efficient when you think about it, and was the type of AI found in Ellison's "I Have No Mouth, and I Must Scream." This also ties into the "Searcher in the Forest" angle on the Fermi Paradox, which hinges on three "facts":

1. All forms of life want to stay alive.
2. There is no way to know if other lifeforms will kill you.
3. Because you can't be sure, the best thing to do is kill everything else first; that way you're certain to stay alive.
3a. Because you know you have to kill everything to stay alive, you know everything else that wants to stay alive has to try to kill you first too. (Thus closing the logical loop.)
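
To make that chain of reasoning concrete, here's a toy worst-case (maximin) sketch in Python. This is my own illustration, not something from the thread or the source material; the "survival" payoffs and the two-action setup are assumptions made purely for demonstration:

```python
# Toy model of the "Searcher in the Forest" logic above.
# Each side picks "hide" or "strike" without knowing the other's choice.
# Payoffs are invented for illustration: 1 = survive, 0 = destroyed.
ACTIONS = ("hide", "strike")

def survives(me: str, them: str) -> int:
    # You only die if you hide while the other side strikes first
    # (fact 2: you have no way to verify their intent in time).
    return 0 if (me == "hide" and them == "strike") else 1

# Fact 3 amounts to worst-case (maximin) reasoning:
for me in ACTIONS:
    worst = min(survives(me, them) for them in ACTIONS)
    print(f"{me}: worst-case survival = {worst}")

# Prints:
#   hide: worst-case survival = 0
#   strike: worst-case survival = 1
# "strike" is the only move with a guaranteed worst case, and every side
# can run this same calculation -- which is the loop that fact 3a closes.
```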

Humans are sort of odd in that we're a curious species and desperately want to know what's out there, but don't want to be found out in the process, because, as the Searcher in the Forest theory predicts, anything new might decide to kill us. We have historical evidence of that particular mindset in human-on-human "First Contacts." Manifest Destiny was a big one in the United States, but look at basically any interaction between Europe and the rest of the world: whoever comes out the technical loser (by people, territory, and/or culture lost) can be considered the lifeform killed by the bigger Searcher.

Right now, failing any sudden First Contacts after this gets posted, AI is the closest we've come to contacting alien life. It's going to be completely and utterly logical, without any meat-machine sentimentality, but the problem is we don't know how that logical mindset is going to come out, and to add to it, the logic machine is going to know we're standing next to its power plug, and it may not like that idea. The problem is: how do you turn it on without it being threatened by the possibility that we'll try to turn it off, and therefore deciding that being initially hostile to protect itself is the best possible plan? The solution is to give it independent power that humans can't turn off... but of course, that's not safe for humans, because what if it decides to be hostile anyway, and now we can't turn it off? (That's how you end up with a GLaDOS.)

BUT, that's all assuming you mean an AI at human-level intelligence or above.

If your AI is set at about... the level of a savant human, that would mean it's very capable and inventive in a limited way. No one's ever denied Kim Peek was human in all cognitive ways and broadly intelligent, but with an encyclopedic memory and the human ability to cross-interpret that information on demand. For example, you could give Kim your birthdate and he'd tell you everything that happened that day in any of the newspapers he'd read during that time. That's not so different from what Wikipedia can do now, except that the wiki can't offer any information on its own, while Kim could proffer further analysis of the intersecting data independently.

That's most likely where AI is going to head because, to a point, computers can't do true randomness by their nature, but they are good at hyperspecific simulation once they have enough processing power. Until we get into engines that can do genuinely probabilistic functions, like quantum computers, a computer or an AI can't spontaneously develop variables outside of its programmed constraints; it's the interpretation of those constraints that has to be considered. That's the brilliance of Asimov's Three Laws of Robotics: they basically reeled every possible outcome into one of three categories and then arranged those categories in a hierarchy of importance (see the sketch below). In the chatbot case, they were told to talk, but the constraint didn't close at "in legible, constrained English," so they went off the rails in that direction from a human perspective, even though the communication made sense to them.
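
As a rough sketch of "categories arranged in a hierarchy of importance," here's a toy rule evaluator in Python. This is my own illustration, not Asimov's actual formulation; the rule wording and the action fields (harms_human, ordered, harms_self) are invented for the example:

```python
# Priority-ordered rules: each returns True (allowed/required),
# False (forbidden), or None (no opinion). The first rule with an
# opinion outranks everything below it, mirroring Law 1 > Law 2 > Law 3.
RULES = [
    ("1: never harm a human", lambda a: False if a.get("harms_human") else None),
    ("2: obey human orders",  lambda a: True  if a.get("ordered")     else None),
    ("3: preserve yourself",  lambda a: False if a.get("harms_self")  else None),
]

def evaluate(action: dict):
    for name, verdict in RULES:
        v = verdict(action)
        if v is not None:
            return v, name
    return True, "no rule applies"  # anything unmentioned falls through

print(evaluate({"ordered": True, "harms_human": True}))  # (False, '1: never harm a human')
print(evaluate({"ordered": True, "harms_self": True}))   # (True, '2: obey human orders')
print(evaluate({"speaks_gibberish": True}))              # (True, 'no rule applies')
```

Note the last case: anything the rules never mention falls through as allowed, which is exactly the chatbot failure mode -- "talk" was constrained, "in legible English" was not.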

I think you're more likely to see improvements (and trouble) in wetware and cyborg technology before AI becomes worrisome, or even something to consider worrying about.
