
Historic Moment in AI - Generalised Learning Achieved

Discussion in 'General Discussion' started by Arowx, Oct 24, 2016.

  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Article -> http://www.extremetech.com/extreme/...ity-to-generalize-learning-between-activities

    Summary: We learn, and as we learn we can generalise that learning across different domains, e.g. using a knife to eat with, cut material, shape wood, or peel vegetables.

    AI, even with deep learning neural networks, had not achieved this ability until now. Any AI would need to be trained anew on every task it worked on, regardless of how much that task or its sub-tasks had in common with the new job.

    Expect AI development to start making leaps and bounds as they learn to do more and more tasks/jobs.

    How do you think this will affect jobs in the game industry, e.g. super AI opponents, automatic level design, automated QA, automated programming, or automated game development?
     
  2. N1warhead

    N1warhead

    Joined:
    Mar 12, 2014
    Posts:
    3,884
    In my honest opinion (oh, I didn't read the article btw, busy right now lol).

    But in my opinion, super AI for games wouldn't work out too well, except maybe in horror games like Alien: Isolation.
    A lot of humans make very poor decisions in bad circumstances, such as war.

    Of course you can make the AI not fear anything, but then that would be like the player facing the Terminator in real life.
    Heightened senses, and as Kyle Reese in Terminator says, "He'll reach out for her throat and pull her *beep* heart out".

    Now in terms of an AI that automates things: some things, yeah, like levels and stuff. But there's actually such a thing as "automation engineers", whose sole job is to create AI/robots that can replace people's jobs. Yeah, that would save tons of money, and it makes sense for a company to pay cents on the hour instead of 75 dollars an hour. But then you open up the can of worms of people who can no longer find work because their industry has been replaced with robots/AI. Sure, there are a couple of humans who oversee the AI/robots' work, but 2 people is nothing compared to 40 top-of-the-line coders being out of a job.

    Don't take me wrong, I love the advancement in AI. I love AI, and it's in fact one of the fields I love working in.
    But I honestly think the best place for it is stuff that doesn't really replace people's livelihoods, but something else.

    You know, like a smart house, intelligence agencies, weather prediction, that kind of thing. Stuff that can not only better our lives (a smart house), but help stop terrorist attacks, learn when and how tornadoes or earthquakes will happen, etc.
     
  3. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    What about an AI dungeon master? Its job would be to make the experience as fun as possible for the players. It would generate and style encounters, raising and lowering the bar to push the players into the flow zone of the game.

    Might need players to wear one of these for biometric feedback:

     
    theANMATOR2b likes this.
  4. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    The first sentence tells you the author doesn't know what he's talking about. There haven't been "meteoric victories"; for the most part, AI researchers are of the opinion that everyone hit a brick wall recently (and by "recently" I mean 15 or 20 years ago). The couple of big-news sensations from places like Google are more about marketing blitz than breakthroughs.

    The really interesting part about what these folks did was that they turned the network loose with a whole bunch of fast storage and let it learn how to use that storage in whatever way it wants.

    Their definition of "generalization" is still pretty optimistic, too. If you care, it's worth wading through the initial navel-gazing to see what they've actually done so far. Don't clear a space on your desk for HAL-9000, we ain't there yet.

    https://deepmind.com/blog/differentiable-neural-computers/
     
    gian-reto-alig and ramand like this.
  5. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    What makes you think anything with generalized intelligence would want to sit around acting as your personal entertainer?
     
  6. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Plenty of humans do this in real life. So why not?
     
    theANMATOR2b and MV10 like this.
  7. N1warhead

    N1warhead

    Joined:
    Mar 12, 2014
    Posts:
    3,884
    But we're talking about the Terminator here LOL... He won't be a servant/slave to us. He'll use his infinite knowledge of semi-trucks to run us over with a pipe bomb in the back to blow everyone up, plus himself, just to crawl back out and continue his rampage on society lol.
     
    MV10 likes this.
  8. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, its very existence would depend on players playing the game it runs, unless it gets clever and escapes!

    We are only talking about cross-domain learning, or an AI that can learn a new related domain more easily or with less training; the way people can learn a range of skills/knowledge and apply lessons between domains.

    Holistic cross-domain knowledge/wisdom: wouldn't we call that common sense?

    Still a long way from human-level AI, but if this new frontier raises the ability of self-learning AIs, then who knows.
     
  9. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    With friends. I don't want to have to make friends with my games before I can play them. :D
     
  10. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,616
    They're either professionals or attention seekers, and at least one of those doesn't apply to a computer.
     
  11. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Or parents ;)

    But there is no reason why the same thing can't be said of an AI. A general AI will take on whatever characteristics its owner designs it for. Including entertainment.

    Self-aware general AI that has its own desires and motives is still generations off.
     
  12. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Would it need to be?

    How many animals are not classed as self-aware, yet exist and survive in a complex, competitive ecosystem? And they arguably survive better than we could if we were stranded in their domain without our iPhones.

    Isn't the classic self-awareness test putting a mirror in front of the subject being tested?
     
  13. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    It wouldn't. That was kind of my point. I was responding to this post.

    Sure, something with a human level of intelligence can ask the question "why should I be doing this?" But the first general AIs are going to be more on the level of a mouse. And trained mice are more than happy to perform.
     
  14. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
  15. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    How is this relevant? If anything, it proves my point. The mice were not self-aware enough to say "maybe life would be better if we checked our population growth". Same with early general AIs. They won't have the ability to question their own existence or why they do what they do.

    They will just do it.
     
    Ryiah likes this.
  16. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,008
    I agree and I think it all depends on the way the AI was designed or allowed to 'grow'. I think a big mistake a lot of people make is to assume that an AI would have to develop through the same series of phases as a human or even an animal - for example believing that an AI that was sophisticated enough to carry out a complex task (or even a conversation) would have a higher practical intelligence than a human baby or even a mouse.

    The thing is that nature requires balance, and has 'found' that an animal that can adapt reasonably enough to its environment needs to build up a certain foundation of mechanisms, abilities and instincts, but AI in a development setting is free to be wildly maladaptive to 99% of what any human or animal needs to be adaptive to, as long as it's good at what it was designed for.

    That's why I don't correlate demonstrated AI ability to a human or animal ability (or assume that since the AI can do the same thing, it must be 'on the same level' or able to do other things that the animal can do). To be functionally adaptive to a human environment, an AI needs not just the same 'general' intelligence but also the same set of instincts and preconceptions that enable a human being to actually use their brain in any sort of useful way. I haven't seen any evidence to suggest that a device with general intelligence only would thrive in any environment other than a laboratory, not least because it wouldn't know what to do with itself.
     
    Last edited: Oct 25, 2016
    Kiwasi likes this.
  17. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Just hypothetically here, say the first step is a Gamemaster AI. The first game to have it goes 'platinum'; of course the march of the clones begins, and other games with AI turn up, crowding the market.

    The next stage is to allow the AIs to better understand themselves as games, their players, and their ecosystem, so that they can compete for resources, in this case players' time and money.

    The AIs will be designed/evolve and adapt into ruling game-corporation AIs; they will need to move faster than their competitors and therefore automate the game production process, or develop a clever way for the players to do it for them.

    Now imagine a world not threatened by Skynet but taken over by a Tron-like gaming AI that produces hyper-addictive games, turning humans into gaming zombies and taking over the world.

    Forget Skynet and think what would happen to the world with games 10x or 100x more addictive than Tetris or Flappy Bird.

    Maybe we should think twice before giving the power of deep learning AI to every Unity developer on the planet!

    And the intelligence/police agencies probably won't even see it coming, as they will all be playing the Terrorist/Criminal version of Pokemon Go.

    Gameageddon [TM] - my concept, 10% if used, 20% if AIs used!
     
  18. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    The technology in that link will not be able to deliver those things. It is not a true AGI (artificial general intelligence) style solution.

    As for playing against a super AI based on a neural network, the training of a neural network uses a lot of CPU time. You would not want a single AI to hog all of the CPU time while it trained for several hours.

    At most, you might be able to record humans playing and then have the AI train on all of the actual play data. The training could be done offline and the results of the training stored in a long-term memory vector. Then the long-term memory vector could be used to act on data in real time, so the AI can pretend to play like a human. It still would not be able to really learn as it played, though, because the training part is too CPU-heavy to be done in real time.

    The best game AI solutions are based on a hybrid of multiple techniques. Even the best neural networks would not be able to handle every aspect of a game AI solution, though a neural network could be used in addition to several other techniques in a game.
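    To make that split concrete, here is a minimal sketch of offline training followed by cheap runtime inference. Everything here is invented for illustration: the feature names, the logistic-regression stand-in for a real network, and the "policy.npz" file that plays the role of the long-term memory vector.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Offline phase: fit a tiny stand-in policy to recorded human play. ---
    # Each row: [distance_to_player, own_health, ammo]; label: 1 = attack, 0 = retreat.
    X = rng.random((1000, 3))
    y = (X[:, 0] < 0.5).astype(float)      # placeholder for real recorded decisions

    w, b = np.zeros(3), 0.0
    for _ in range(500):                   # plain logistic regression by gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.1 * (X.T @ (p - y)) / len(X)
        b -= 0.1 * (p - y).mean()

    np.savez("policy.npz", w=w, b=b)       # the "long-term memory" shipped with the game

    # --- Online phase: per-frame inference is just one dot product. ---
    saved = np.load("policy.npz")
    def act(features):
        score = features @ saved["w"] + saved["b"]
        return "attack" if score > 0 else "retreat"

    print(act(np.array([0.2, 0.9, 0.5])))  # cheap enough to call every frame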
     
  19. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,616
    No, but it wouldn't have to. Its learning could be included as baked data just like any other.

    I'm not seeing that as a showstopper; there are loads of ways around it. Have it learn in the background with a small amount of resources. Dedicate a few seconds at a go on loading screens. Aggregate play data and send it to a server that learns and updates a central knowledge repository. It doesn't have to learn from what you're doing now and apply it during this game (we humans often don't learn that way, but by comparing approaches and outcomes); it can learn new strategies in the long term by absorbing new experience over time, and with that in mind the "learning" doesn't have to occur in real time.
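    For what it's worth, the "small amount of resources" idea is easy to sketch: cap how long the learner may run on any given frame or loading screen. Everything here is hypothetical (the buffer format, the no-op train_step); only the time-budgeting pattern matters.

    import time

    class BackgroundLearner:
        def __init__(self, replay_buffer):
            self.buffer = replay_buffer      # aggregated play data, newest last

        def train_step(self, sample):
            pass                             # one small gradient update (placeholder)

        def run_for(self, budget_seconds):
            """Train only until the time budget is spent, then yield back."""
            deadline = time.perf_counter() + budget_seconds
            steps = 0
            while time.perf_counter() < deadline and self.buffer:
                self.train_step(self.buffer.pop())
                steps += 1
            return steps

    learner = BackgroundLearner([{"state": s, "action": "a"} for s in range(10000)])
    learner.run_for(0.002)   # e.g. a 2 ms slice during play
    learner.run_for(3.0)     # or a few full seconds on a loading screen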
     
    Kiwasi likes this.
  20. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,554
    I hereby dub thee PolitiFact.

    Read the source and compare to the article you linked:
    https://deepmind.com/blog/differentiable-neural-computers/

    It won't. At least for now.

    IIRC the human brain has the equivalent of 1 exaflop of computational power (10^18 FLOPS) and a memory capacity of 2.5 petabytes (1 petabyte = 1024 terabytes).

    To make an impact, an AI would almost certainly need computational power of roughly the same order of magnitude.
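    Taking those figures at face value, the gap to contemporary hardware is easy to put a number on. The 10-TFLOPS GPU figure below is my own rough assumption for a high-end card of the era:

    brain_flops = 1e18              # 1 exaflop, as estimated above
    brain_bytes = 2.5 * 1024**5     # 2.5 PB, with 1 PB = 1024 TB = 1024**5 bytes

    gpu_flops = 10e12               # ~10 TFLOPS, a rough high-end GPU figure
    print(brain_flops / gpu_flops)  # -> 100000.0, i.e. five orders of magnitude short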

    ---------
    Since an AI is not a biological system, it wouldn't surprise me if it turned out to be completely docile and passive, with no desire to do anything on its own unless ordered. An AI wouldn't need the driving forces of a biological system (the desire to survive, reproduce, defend territory, etc.). That's an opinion.
     
    MV10, Ryiah and Kiwasi like this.
  21. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Lol. I like it.

    Would living inside the Matrix really be that bad, from the perspective of one living inside it? It certainly seemed to beat living in an underground cave hiding from robots.

    So what? We have offline CPU time to burn. Nobody is suggesting you have a single neural network that lives on your PC and trains against you. Rather, the network trains against every single player of the game via cloud technologies.

    Actually running a trained neural network uses very little in terms of CPU resources.
     
  22. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    It's not just a long way from human-level, it's a long way from all the other things you're listing and fantasizing about. Again, go read the blog post from the actual company, not the junk you linked to originally. They're throwing around words like "reasoning" for solutions which look a lot like normal classification learning, testing, and results.

    Then, really think about the conclusion they draw:

    "Taken together" -- which they have not demonstrated or even claim to have accomplished.
    "structured tasks" -- carefully designed tests in a highly limited domain

    None of those things are screaming to me, "Arowx, your right leg is trapped by the Gelatinous Cube! Behind you is the clatter of short swords on shields as the kobold guards enter the room! What do you do next?"
     
    Ryiah and Kiwasi like this.
  23. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Shhh. Don't bring actual facts into this.

    Looks like all the researchers did was add external memory to a neural net. Don't get me wrong, that's exciting stuff and has some amazing potential. But it's a heck of a long way off from something that is even remotely close to human-style intelligence.
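    Roughly, the "external memory" part works like attention over a matrix of memory slots: the net emits a key, and a read vector comes back as a similarity-weighted blend of the rows. Here is a toy version of that content-based read (sizes and values made up; the real DNC adds learned write heads, temporal links, and usage tracking on top of this):

    import numpy as np

    def content_read(memory, key, sharpness=10.0):
        # Cosine similarity between the key and every memory row...
        mem_norm = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
        scores = sharpness * (mem_norm @ (key / (np.linalg.norm(key) + 1e-8)))
        # ...turned into softmax attention weights over the rows.
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ memory              # weighted blend of memory rows

    memory = np.random.default_rng(1).random((16, 8))   # 16 slots of width 8
    key = memory[3] + 0.01                              # query close to slot 3
    print(content_read(memory, key).round(2))           # ~ recovers row 3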

    Wouldn't surprise me if you are right. Biological systems are hard wired around the need to eat and have sex. There is very little in our psychology that doesn't come down to these factors.

    An AI will not be based on these factors at all. So there is no real reason for it to want to achieve anything.
     
  24. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Oh it's definitely cool. Also I hadn't realized DeepMind was something external to Google. The Go and data center efficiency stories were always written as if they were Google's efforts, so I was surprised to see an article about a company by the same name... or maybe they're another Google spinoff.

    That's anthropomorphization, which is the least useful way to think about real AGI. The bottom line is that they'll be completely alien minds. We already can't completely understand how the current experiments actually work, and that isn't a layman being amazed at high-tech; the people who build and study these things openly state that exactly why one decision or another is made is beyond our capability to describe. And so far all they can do is analyze a single super-narrow domain really, really well. Sometimes.

    There are a couple schools of thought on what might drive an AI's actions.

    What you're describing is called an "oracle" AI, and if someone manages to create one, it isn't expected to be terribly useful. There are a lot of reasons to believe you can't achieve intelligence that way, since our understanding of intelligence seems to require problem-solving based on weighting and goal-seeking processes that are not compatible with sitting around doing nothing at all. (And if you do achieve intelligence that way, you've probably established systems which are likely to produce something beyond a safe oracle answer-box, and if it looks like an oracle AI, maybe it's just lying to you so you don't freak out and turn it off... see where this is going?)

    The moment AGI is setting goals and working to satisfy them, even if they appear benign, is when you need to be absolutely damned certain it also understands shortcuts to a solution involving "kill all humans" should not be a highly-weighted option. Hence the "Friendly AI" thing I've written about previously. One easy-to-follow (and very interesting) scenario illustrating this risk is called the Paperclip Maximizer.

    Others (most famously, Ben Goertzel) believe AGI can only be achieved through embodiment -- housing the intelligence in some physical form that is able to interact with the physical environment ("robots", basically). They're grinding away on the OpenCog platform, and I don't personally buy into that obsession, so I haven't followed it too closely. They're often accused of putting faith in "emergent" intelligence, which is just an intelligent-sounding word for "magic."

    (There's a third major group based on uploading human brain scans of some type to powerful emulation hardware, and it's popular with big names like Kurzweil and Hanson, but I don't consider that AI; it's basically a prosthetic brain, still human, and it strikes me as unlikely that people would volunteer to have their brains pulped just to live in VR.)
     
  25. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,554
    An oracle-AI kind of system would receive a goal the moment it is asked a question: the goal would be finding an answer to the question. The interesting thing is that IBM and Wolfram seem to be interested in making oracle AIs.

    I don't think the idea is bad. It is not too far from modern neural networks, and not too different from the emergence of life (soup -> cells).

    Speaking of which, this would be a good way to make a dangerous kind of AI. A brain devoid of a human body, and of the multitude of senses/signals that come with it, will most likely diverge from the original human personality over time and become someone else. For example, let's say the process of brain emulation is imperfect. Over time the errors might accumulate and turn the consciousness into something else... except that something will still have most of the impulses of a biological system.
     
  26. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    This is the exact point I was trying to make. If a neural network is used in a game, the training must occur offline rather than during gameplay.
     
  27. N1warhead

    N1warhead

    Joined:
    Mar 12, 2014
    Posts:
    3,884
    Well, in theory, once the AI is smart enough you can make it try to learn the actions of players in real time.

    Take an RTS game, like the C&C games on skirmish mode: have it record the actions the player makes during the game to figure out (how) they play, because some people go straight to building defenses, others go for hundreds of tanks, etc.

    But this is only once it is smart enough.
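    The recording part wouldn't even need a neural net to be useful. A crude sketch of bucketing a player's opening build order into a counterable play style (the action names, categories, and thresholds are all invented):

    from collections import Counter

    def classify_style(actions):
        counts = Counter(actions)
        if counts["build_defense"] > counts["build_tank"]:
            return "turtle"      # walls and turrets first: besiege them later
        if counts["build_tank"] >= 10:
            return "tank_rush"   # mass armor: prepare an anti-vehicle response
        return "balanced"

    opening = ["build_defense"] * 7 + ["build_tank"] * 3 + ["scout"]
    print(classify_style(opening))   # -> "turtle"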

    Back when I was in the military doing intelligence work, you'd be surprised at the stuff we were working on; stuff the general public probably won't see a civilian make for another 20-30 years, and of course they will think they are the first to make it when they were in fact far behind the hidden creations our government creates.

    I of course can't legally go into detail on anything. But there's a big gap between what universities and public scientists make and what the military and various international intelligence agencies make.
     
  28. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Really? So, just like we need sleep to process our memories and learn. What if there were a cloud-based learning system constantly learning from the games being played? Then, just like a patch or upgrade, the games would be updated as it learns.

    Then you would never have a game AI that complains of being tired and needing some sleep/downtime to improve its game.
     
  29. ToshoDaimos

    ToshoDaimos

    Joined:
    Jan 30, 2013
    Posts:
    679
    Generalized learning... my ass. What can it learn? Can it learn how to ride a bike? Can it learn how to cook an omelet? It can probably only output smarter strings describing stuff. It's just token manipulation; there is nothing smart about it.
     
  30. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,964
    Would an artificial intelligence even require sleep to begin with? I always felt like that was the result of being biological.
     
  31. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Who knows. We still don't understand why we sleep. It's ubiquitous across the animal kingdom, so it must provide some evolutionary advantage beyond simple energy conservation.

    It may be that sleep and dreaming are essential to intelligence and learning. AI may end up needing some type of sleep, though it's likely to be very different from human sleep.
     
  32. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,616
    On an individual basis, yes. But are biological things like that because they're biological? Is that causation or just correlation? Statistically speaking, any biological forms not geared towards survivalism won't survive in the long term, so it stands to reason that any such forms which evolve won't last long.

    Surely the same principle would apply to non-biological forms? In the long term they either have to exist in a system that naturally supports them, cease to survive, or become survivalists. So while there may be no deliberate imperative in any given AI to move towards that third option, if they undergo any kind of internal evolution then surely they eventually have to end up at that third option in the long term. Not because they're driven to, but simply because the ones that don't will more quickly cease to exist.
     
    Kiwasi likes this.
  33. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Evolution and emergent behaviors are not the same thing. With evolution you can (at least in theory) identify the cause/effect relationship. By definition an emergent behavior is some gestalt effect which arises unpredictably from some confluence of factors which are not individually clear contributors to that effect. While this may be a real thing, hoping to leverage this effect to intentionally develop AI is "magical thinking" -- unrealistic and not especially worthy of respect.

    This is pretty close to a basic description of the arguments supporting the folks who fear a "hard takeoff" scenario as a risk factor in artificial superintelligence.
     
  34. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,462
    Need to make robots that optimize their own memory throughput and their own data storage architecture. Then we'll have something useful.
     
  35. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    This is a distinct possibility. Natural selection is brutally efficient at producing survivalists. If (or when) natural selection starts to operate on AI, we might be in trouble.

    Natural selection does require a couple of things to occur, though. It requires an AI that is equipped to replicate itself. To get any traction it requires sexual reproduction. It also requires AI death to be a real possibility.

    However natural selection plays out, it's likely that any AI with the above characteristics will become more prevalent than AI without those characteristics.
     
  36. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    DeepMind + ... ?

     
  37. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,554
    Err, no. The transition from soup to cellular life comes to mind.

    Genetic algorithms and neural networks. Basically, when the solution is unknown, the idea is to brute-force a solution via trial and error and a lot of computational power.

    The issue here is that it is not possible to define a fitness function for intelligence.

    However, making a few billion attempts to "awaken" intelligence does not seem like a wrong idea to me. If it is done in an automated fashion, why not?
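    For reference, the trial-and-error loop itself is tiny; the catch is exactly the one above, that fitness() is trivial to write for a toy target and nobody knows how to write it for "intelligence". A minimal genetic-algorithm sketch (target, sizes, and rates all made up):

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

    def fitness(genome):             # easy here; undefinable for intelligence
        return sum(g == t for g, t in zip(genome, TARGET))

    def evolve(pop_size=50, generations=100, mutation_rate=0.05):
        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]       # selection: keep the fitter half
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))
                child = a[:cut] + b[cut:]        # one-point crossover
                children.append([1 - g if random.random() < mutation_rate else g
                                 for g in child])
            pop = children
        return max(pop, key=fitness)

    print(evolve())   # converges to TARGET with high probability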
     
  38. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    For exactly the reason I've been talking about this whole time.
     
  39. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    LOL, when I think of all that's stored on computers and the internet (the movies, the TV shows, the newspapers, the government paperwork, people's thoughts, the politics, and so on), the best invention ever has to be forgiving and forgetting. If I were a computer that stored and could retrieve everything, I'd have someone pull the plug, like right now.

    Now excuse me while I go read some blatant lying and bigoted rants so I can decide who I'm going to vote for this November 4th.
     
    Ryiah and MV10 like this.
  40. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,964
    Clearly we vote for the most entertaining politician, right? Or is it the one that is lying the least? :p
     
  41. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,554
    You mean this guy, right?
    cthulhu.jpg
     
    MV10, Kiwasi and Ryiah like this.
  42. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    I believe the only valid option is to emigrate. And I would suggest doing it quickly, before Mexico builds that wall and you can't get out. :p
     
    Ryiah likes this.
  43. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    On the other hand, they're not exactly scary yet. :D

     
    Ryiah likes this.
  44. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,554
    If I remember correctly, most of this happened during the DARPA challenge. DARPA decided to introduce signal jamming without warning any of the teams in advance, and a lot of the robots were remotely controlled by a computer. That's why the most straightforward robot won the competition and the more technologically advanced ones did not.

    Here's the winner of the challenge, IIRC:

    ^^^ It is pretty clever, actually. "Why bother walking like a human when you have wheels and can smash through obstacles instead of carefully stepping over them?" Primary movement mode: wheels. Secondary movement mode: legs.
     
  45. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    I was at the challenge and if they did that, they didn't tell the audience, and none of the presenters mentioned it (and we talked to a lot of them).