
What is conscious AI?

Discussion in 'General Discussion' started by yoonitee, Jan 5, 2015.

  1. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    I know there's lots of "AI" in games. But usually all that means is that a game character is given a few simple rules to follow. And there's always been some kind of fake AI in adventure games, such as "understanding" when you type "Go north" or "Pick up axe."

    But what I want to know is: what is the very core of AI that separates something intelligent from something that just follows rules?

    And secondly, even though a chimpanzee could be considered intelligent, what separates that from human-level intelligence? i.e. what separates these three things:

    Computer game AI <----------> Chimp or dolphin intelligence <----------> Human intelligence.

    You might also add one more level, and the fourth category might be "intellectual": e.g. a human that can do crossword puzzles, read Shakespeare, or solve algebra problems. i.e. is the goal of AI to make a robot that can work in McDonald's, or a robot that can work as a professor?

    For example, you can take away things like image recognition or voice recognition, since people can still be intelligent if they can't see or hear, so those aren't essential. Although there must be SOME way to interact with the environment and learn from it, it doesn't seem to matter which one it is.

    I think dog-level intelligence is pretty simple and can be replicated in virtual pets. They can learn a few tricks but that's about it. Not saying it's not difficult to build a robot that can learn to sit by associating the word "sit" with the action it just did and getting a treat, but it's within the realms of possibility. Most of their other actions seem to be innate.

    My own view is that actual intelligence is "having your own thoughts". But what does this mean? The ability to plan? To think about the future? To model the world in your head?

    What is human thought?
    "Planning things you haven't done or said before."
    "Thinking about things you've all ready done and how you'd do it the same or differently."

    Perhaps it's this concept of "time" that sets something intelligent apart.

    For example, ask a "dumb" AI on your iPhone ten times, "What is the time?" You might get "4:03", "4:04", "4:05", etc. But an intelligent AI might respond "4:03", "I just told you it's 4:03", "Did you not hear me the first time?", "Why do you keep repeating yourself?", "That's the fifth time you've asked.", "Look, if you keep asking me I'll wear my battery out and I'll be no good to anyone." i.e. each response is governed not just by the immediate question but by the entire history of the situation.
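    A minimal sketch of what I mean, in C# (everything here is invented for illustration): the reply depends on a running count of the question, not just on the question itself.

    Code (CSharp):
    // Hypothetical sketch: the response depends on the whole history, not just the question.
    class ClockAssistant {
        private int timesAsked = 0;

        public string Ask(string question) {
            if (question != "What is the time?") { timesAsked = 0; return "I don't know."; }
            timesAsked++;
            string now = System.DateTime.Now.ToString("h:mm");
            switch (timesAsked) {
                case 1: return now;
                case 2: return "I just told you it's " + now + ".";
                case 3: return "Did you not hear me the first time?";
                default: return "That's time number " + timesAsked + " you've asked.";
            }
        }
    }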

    What do you think is the core of AI?
     
  2. sphericPrawn

    sphericPrawn

    Joined:
    Oct 12, 2013
    Posts:
    244
    I think there is a huge amount of research and development going into more autonomous and more complex AI, though not so much for videogames as for robotics. You should look into stuff like IBM's Watson computer, or an article I read the other day where a team used YouTube videos and deep learning methods to teach a robot how to cook.

    I'm just guessing here, but I think the reason videogame AI hasn't really advanced much over the past decade (F.E.A.R. from 2005 had AI as good as or better than most recent games) is that so many games rely on online multiplayer as their selling point. High-budget behemoths like Call of Duty and Battlefield aren't going to spend a ton of development resources advancing their AI when most of the hours spent playing the game will be with other people rather than AI.
     
  3. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    I'll give you a college crash course in AI right now, brought to you by NJIT AI professor Chengjun Liu.

    When a game entity is following a fixed set of rules, that is not AI, that is IA. IA means intelligent agent, which follows rules but cannot solve new problems or learn. Artificial intelligence starts with set rules but can process new information and "learn".

    Examples of actual AI are edge detection and pathfinding. What separates an ordinary IA (misnamed as AI) from complex AI is that the rules that govern AI make it capable of solving problems and learning. It's not a decision tree.
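    To make that distinction concrete, here's a rough sketch (all names and numbers are mine, not the professor's): the IA maps the same state to the same action forever, while the AI adjusts its choices from feedback.

    Code (CSharp):
    using System.Collections.Generic;

    // "IA": fixed rules, so the same state always produces the same action.
    class RuleAgent {
        public string Act(string state) {
            return state == "enemyVisible" ? "shoot" : "patrol";
        }
    }

    // "AI": starts with rules but updates them from feedback, so behavior can change.
    class LearningAgent {
        private Dictionary<string, float> value =
            new Dictionary<string, float> { { "shoot", 0f }, { "flee", 0f } };

        public string Act(string state) {
            // Pick whichever action has worked out best so far.
            return value["shoot"] >= value["flee"] ? "shoot" : "flee";
        }

        public void Learn(string action, float reward) {
            // Nudge the action's estimated value toward the observed reward.
            value[action] += 0.1f * (reward - value[action]);
        }
    }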

    What makes human intelligence interesting is these few things, listed in order from least to most impressive:

    1) the composition of the machine itself (billions of small, low-quality processors, versus how we build computers around one large, powerful processor)
    2) the computations done easily and intuitively in seconds that some supercomputers cannot do (like recognizing objects in a photo)
    3) the amount of power the brain runs on, compared to the power it would take to run a computer as complex as a brain
    4) the ability to make arbitrary decisions / do things with genuinely no reason

    note on #1 - Apple gets amazing quality in the iPhone camera by layering multiple low-quality lenses instead of using one very high-quality lens. Maybe there is a pattern here worth studying ;)

    unrelated note - I plan to get my doctorate in AI
     
  4. Dameon_

    Dameon_

    Joined:
    Apr 11, 2014
    Posts:
    542
    Game AI is intended to give the illusion of intelligence as much as possible without sacrificing too much performance. If every character, or even a few, were fully realized rat-level deep neural networks, your game would perform terribly.

    Even if we could create AIs that are sentient, creating sentient beings only to destroy them has some pretty dubious moral implications.
     
  5. N1warhead

    N1warhead

    Joined:
    Mar 12, 2014
    Posts:
    3,884
    @Tomnnn - Actually, the NSA has a supercomputer that can tell you what is in an image lol.

    @Dameon_ - Well, if you have a self-preservation protocol like Skynet (Terminator), then it will wipe us off the planet for the greater good of cyborg rats! lol.
     
  6. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    Pff, that's no surprise. Someone in our class, using only JavaScript, wrote a program that could run edge detection on an image and show four representations of it in under 0.003 seconds. The point is that it is an example of artificial intelligence. Compare the power used by that computer with that of our brains. Could it run the image recognition algorithms on only ~20 watts of electricity?
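    For reference, the core of basic edge detection is just a small convolution; here's a minimal Sobel sketch over a grayscale array (my own illustration, not my classmate's code).

    Code (CSharp):
    using System;

    static class EdgeDetect {
        // gray[y, x] is brightness 0..255; the output is edge strength per pixel.
        public static float[,] Sobel(float[,] gray) {
            int h = gray.GetLength(0), w = gray.GetLength(1);
            var edges = new float[h, w];
            for (int y = 1; y < h - 1; y++) {
                for (int x = 1; x < w - 1; x++) {
                    // Horizontal and vertical gradients from the two 3x3 Sobel kernels.
                    float gx = -gray[y-1,x-1] - 2*gray[y,x-1] - gray[y+1,x-1]
                             +  gray[y-1,x+1] + 2*gray[y,x+1] + gray[y+1,x+1];
                    float gy = -gray[y-1,x-1] - 2*gray[y-1,x] - gray[y-1,x+1]
                             +  gray[y+1,x-1] + 2*gray[y+1,x] + gray[y+1,x+1];
                    edges[y, x] = (float)Math.Sqrt(gx * gx + gy * gy);
                }
            }
            return edges;
        }
    }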

    @Dameon_ the danger in creating artificial intelligences on par with ours is the speed at which they can process information. The silly-sounding laws of robotics about not causing harm or disobeying a person are in place because sentient robots would probably notice right away that humans are like an unstoppable virus that nature cannot fight off, one that endangers every species on earth (possibly robots too, given our history of cruelty and all the human-vs-robot media), and they would conclude it best to take us down. But they'd have the processing speed and full 100% control of themselves to think this quietly to themselves and plan the ideal path to a moment where they can't fail to kill us all. All of that would probably happen within minutes of them being connected to the internet ;)

    Our obsession with DRM will evolve from a mere annoyance and cash grab for AAA gaming companies to the end of mankind. Thanks a lot, Ubisoft!
     
    tatoforever likes this.
  7. BFGames

    BFGames

    Joined:
    Oct 2, 2012
    Posts:
    1,543
    My last AI course was called Modern AI in Games, and it was pretty interesting. People are, for example, trying to create general AI agents, which are agents that work across games (http://www.gvgai.net/ for example).

    I think the most interesting algorithms at the moment are Monte Carlo Tree Search (MCTS) and real-time evolutionary neural networks (NNs) like NEAT. NNs have the potential to be intelligent to some degree, but most successful NN agents require hours or days of training to work on even basic problems...
    One of my friends created a NN UnityNEAT game for his thesis in which you train the brain of your agent: http://jallov.com/thesis/

    I just started writing my master's thesis, which is about MCTS agents in an online shooter.
     
    Last edited: Jan 6, 2015
  8. Kinos141

    Kinos141

    Joined:
    Jun 22, 2011
    Posts:
    969
    If game AI were intelligent, it would run from the player, since it would realize the player is a walking death machine. lol.

    I read an article where some game developers made very realistic squad AI. It would lay down cover fire while another squad member flanked the player and killed him. Playtests showed that gamers hated it, so they shipped a dumbed-down version of the AI, and gamers loved it.

    Complex AI is used for robotics more than gaming.

    Go figure.
     
    Gekigengar likes this.
  9. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    I don't understand this distinction if edge detection and pathfinding are examples of AI rather than IA. Pathfinding is commonly implemented as "following a fixed set of rules". Can you explain the distinction and/or what you mean by "pathfinding" in more detail?
     
  10. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Why? The purpose of an AI in a game is rarely to survive. Its purpose is typically to entertain, and that generally means that genuine attempts to survive are usually going to be superficial. Games have to feel like they're trying to stop the player, ideally without ever actually stopping them for too long. (Where "too long" is going to be wildly different depending on the game.)
     
  11. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    No one in class liked it either :p

    The distinction for things like edge/image detection and pathfinding is that the computer is generating new information. With a basic implementation of A*, a computer can learn paths to and from any number of nodes, given enough time. I think IA crosses over to AI once entities begin to record statistics and react differently over time, potentially generating new behaviors if they have enough variables to record and follow.

    Google turns up information that could lead anyone to believe they are synonymous. Wikipedia describes an intelligent agent as a system that can learn and adapt. Maybe it's just computer science doing its usual thing of hijacking definitions and swapping in words that really shouldn't be interchanged. My professor has a PhD and teaches AI to graduate students; I imagine he would know the real meaning and distinction between the two terms.

    I guess it can be simplified to: an AI is just an IA that can learn.
     
  12. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    But edge detection and pathfinding don't do either of those things. If the distinction is that true AI collects new data which is incorporated into future decisions, then neither of those fits (well... pathfinding actually could, but most implementations explicitly don't).
     
  13. sphericPrawn

    sphericPrawn

    Joined:
    Oct 12, 2013
    Posts:
    244
    Seems a bit pedantic to me, no disrespect to your professor. The distinction seems unnecessary. We're talking about something called artificial intelligence. Those two words together hold many meanings.
     
  14. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    It could be that. To me it sounds more like what's being talked about is machine learning, though I'm unsure whether that has a strict definition. You're absolutely right that something can still be correctly described as "artificial intelligence" even if it does not also collect new data.
     
    sphericPrawn likes this.
  15. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    Image recognition cannot report anything until it generates information by running edge detection on an image. Pathfinding cannot make a move until it knows every step it's going to take from point A to point B (at least in implementations like A*). There is no way to take a photo or a series of nodes and have the computer simply know what to do; it has to run a series of algorithms a number of times and figure it out.

    I wasn't talking about each step in pathfinding or edge detection alone; I meant the end result of having AI generate a path between two nodes. And for edge detection, I really meant image recognition.

    According to Michio Kaku and Lawrence Krauss, the most striking quality of intelligent beings is the ability to make predictions, because nature cannot. Going by that definition of the word, AI doesn't make any sense in these contexts either :D Like I said, the terms need to be well defined for any discussion to happen regarding the fields of computing science.
     
  16. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,532
    When the agent works perfectly, always wins and does not do what you tell it to do.
     
  17. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Could almost describe a bug, at least in appearance. Of course, someone is always telling it what to do, whether through our own custom code or an API or a 3rd-party method we're using. Still... maybe the path to AI is to pass tasks more often to junior-level developers. ;)
     
  18. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    If we go by basic definitions...

    Consciousness is a continuous stream of inputs to our senses, responding to those inputs and being self aware.


    People are imperfect yet conscious. What separates us from [current] machines is that we understand. You can store any kind of data in a computer, but it does not reflect on the information or have a meaningful understanding or interpretation of any of it.

    A good chunk of humans are total morons, but they are still (seemingly) conscious and have meaningful experiences and understandings of everyday occurrences like eating, watching television, walking in circles, petting small fuzzy animals, etc.

    What separates us from machines? How poorly constructed we are. Human consciousness and personalities are a result of our incredibly flawed design. Why can two people view the same event differently? Because our neurons transmit signals unreliably and even leak signals to other neurons they didn't intend to reach. A machine would interpret something exactly the same way every single time, but even the same individual can see something different each time. There are a lot of variables in play, but David Linden suggests that the primary cause of our uniqueness is our flawed design.

     
    Gekigengar likes this.
  19. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    When it's said like that it sounds really cool, but then I realize you're just talking about dynamically changing (adding a modifier to) edge/node weights based on use and saving repeatedly used paths. It doesn't sound that complicated anymore, so I call false advertising.
     
  20. Dameon_

    Dameon_

    Joined:
    Apr 11, 2014
    Posts:
    542
    Um, that sounds like the description of many a badly done game AI, not a conscious entity.

    Also, the idea that we "understand" isn't necessarily true. Modern research suggests that the "conscious" part of our brain doesn't actually make decisions. Other parts of the brain make all your decisions, and afterwards, the conscious you tells a story about why that decision was made.

    Part of the problem here is that people don't understand the artificial part of AI. That is the key word. I'm going to go a bit further in a way that some people will disagree with: the root of artificial is artifice, defined as clever trickery used to deceive others.

    That is closer to what most fields of AI are intended to do: to "trick" the human being involved into thinking they're interacting with another human. It's the basis of the Turing test, in fact. You aren't trying to replicate the processes involved in intelligence, because we don't even know for sure what those processes are yet.

    Instead, you use tricks. If you wanted to get from point A to point B, your brain would not use anything resembling A* pathfinding. But the results of A* pathfinding look convincingly human.

    There are fields of AI that are intended to replicate processes and not just results, but they're not related to game AI, and probably won't be applicable for a long time.
     
    Ryiah and GarBenjamin like this.
  21. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,532
    Maybe machines are already conscious and that's why our code never works.
     
    Zaladur likes this.
  22. Dameon_

    Dameon_

    Joined:
    Apr 11, 2014
    Posts:
    542
    Speak for your own code :p
     
    randomperson42 likes this.
  23. R-Lindsay

    R-Lindsay

    Joined:
    Aug 9, 2014
    Posts:
    287
    I don't have much to add to the discussion, but when I took Cognitive Science at Uni the definition of "Intelligence" that we started with was:
    A physical system is intelligent to the extent that it is capable of modifying its behaviour so as to render it appropriate to the environmental conditions that obtain. ​
    In this formulation intelligence comes in degrees.
     
  24. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,532
    Maybe it speaks for itself :eek:
     
  25. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    I'm not, lol. Maybe an ideal implementation of A* would, because it saves time, but all of my implementations only save the one path they're going to use, and it's gone when they reach the end. You can make any change to the nodes and the AI will find a path if there is one. The only weight that exists in A* is path cost, and that's not necessary for all pathfinding, just optimal route finding.

    My implementation is intelligent in the sense that it has to figure out the path. I don't see why saving the result for later use would be less intelligent, since I'm sure you and I do the same thing when traveling. >.>

    When I implement A* in Unity, it's usually done with linecasting between points on a map to form paths, with cost not taken into account at all. Implementation will vary with your purpose. I have a post somewhere in the showcase forum about 'the fly' pathfinding algorithm, which is intentionally slow to simulate realistic wandering: nodes are generated around the AI, and it then checks which ones it can reach, which are connected, and which form the furthest path from the current location, so it can wander. I've also programmed in anxiety and laziness :D The longer it sits idle, the more likely it becomes to move. The more moves it has to make, the less it desires to move.
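    A rough reconstruction of that wander in Unity C# (the parameter values and the anxiety formula are invented; laziness is left out for brevity):

    Code (CSharp):
    using UnityEngine;

    // Hedged sketch of the "fly"-style wander described above.
    public class FlyWander : MonoBehaviour {
        public float radius = 5f;  // how far around itself it samples candidate nodes
        public int samples = 8;
        float idleTime;            // "anxiety": idling raises the urge to move

        void Update() {
            idleTime += Time.deltaTime;
            if (Random.value > idleTime * 0.1f) return; // the longer it idles, the likelier it moves

            Vector3 best = transform.position;
            float bestDist = 0f;
            for (int i = 0; i < samples; i++) {
                Vector3 candidate = transform.position + Random.insideUnitSphere * radius;
                // Keep the furthest candidate it has a clear line of sight to.
                if (!Physics.Linecast(transform.position, candidate)) {
                    float dist = Vector3.Distance(transform.position, candidate);
                    if (dist > bestDist) { bestDist = dist; best = candidate; }
                }
            }
            transform.position = best; // teleports for brevity; a real agent would walk there
            idleTime = 0f;
        }
    }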
     
  26. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    I like the idea of these AIs that you could drop into different games. Problem is, if you made these AIs too sentient it would be a bit cruel to shoot them in your game!

    I think the difference between AIs at the moment and something that is "self aware" is just that. At the moment you can program a robot to respond to things and catch a ball, for example. But to be really self-aware, the robot should continually record what it's doing in its memory. e.g. "I see a ball. I am catching a ball. I've caught a ball. What shall I do with this ball? I am throwing a ball. This is fun. I'm bored now. I'm hungry. Shall I tell him to stop throwing the ball? How will he respond? What did he say last time? What's that? I see a dog." etc. I think if an AI was "thinking" like that we could say it was conscious to all intents and purposes.

    I'm still not sure what the very core of this is though. Is it a decision tree? Is it like a chess game in which we try to think many moves ahead? My vision is that you could program some sort of AI kernel and then add on modules, like a vision recognition module or a language module, from which it could take or disregard inputs as it chose. Perhaps based on some innate objectives like hunger or survival. A bit like Watson: it got many results from the internet and had a neural network to pick the best ones.
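    A very rough sketch of that kernel-and-modules shape (every name here is hypothetical):

    Code (CSharp):
    using System.Collections.Generic;

    // Hypothetical sketch of a kernel with pluggable sense modules.
    interface ISenseModule {
        string Name { get; }
        string ReadInput(); // e.g. a vision or language module
    }

    class AIKernel {
        readonly List<ISenseModule> modules = new List<ISenseModule>();
        public float hunger = 0f; // an innate drive that biases attention

        public void Attach(ISenseModule m) => modules.Add(m);

        public void Tick() {
            foreach (var m in modules) {
                // The kernel may attend to or disregard each module's input as it chooses.
                if (ShouldAttend(m)) Process(m.Name, m.ReadInput());
            }
        }

        bool ShouldAttend(ISenseModule m) => hunger < 1f || m.Name == "vision";
        void Process(string source, string input) { /* decide, act, store a memory */ }
    }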

    So maybe the difference between, say, dog and human intelligence is that one is "self aware" while the other is more primal. Is a chimp self-aware? Does a chimp think about what it did last Thursday? How would we know?

    Actually, I see there is a word for it: AGI, Artificial General Intelligence, but I think that's different from self-awareness.

    In terms of programming: suppose
    function foo(x:String, y:String) -> (a:String, b:String)
    is your program that models a brain, where x is all of an AI's memories and y is the input from the environment. Different x's would be different individuals (since we are our memories). Then we would just run this program every hundredth of a second:

    Code (CSharp):
    string memories = "Starting memories";

    void Update() {
        var result = foo(memories, GetEnvironment());
        memories = result.a;          // the new memories feed into the next tick
        DoPhysicalAction(result.b);   // act on the world
    }
    foo is like the hardware of the brain and the memory string is like the software. It may be possible to make foo very simple if you replicate most of the brain as software. But at the moment we don't know how to program foo, or how many starting memories an AI needs to begin with! For example, you could make foo a Universal Turing Machine, which means it could run any algorithm. But that just moves the whole problem into the initial conditions of the memory! Also, I don't know if a UTM works with side effects. Another thing is that you could get rid of memories entirely and have the AI record its thoughts directly in the environment, such as by writing them down! Or lay trails like ants do. But it probably helps to have your memories protected in your brain, which you can take with you.
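    Purely to illustrate the interface (as said above, nobody knows how to write the real foo), a toy stub might look like this, with every detail invented:

    Code (CSharp):
    // Toy stub only: shows the (new memories, action) shape, nothing like a real brain.
    (string a, string b) foo(string memories, string environment) {
        string newMemories = memories + " | saw: " + environment; // append the latest percept
        string action = environment.Contains("ball") ? "catch ball" : "do nothing";
        return (newMemories, action);
    }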

    This is interesting too, new research into AI by Google: http://arxiv.org/pdf/1410.5401v2.pdf
     
    Last edited: Jan 6, 2015
  27. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @yoonitee the self-awareness questions you've posted are exactly the concern with having robots that can think. How quickly do you think a robot brain is going to answer those questions? It's going to get out of hand very quickly :D

    We're easily fooled into thinking we're more than machines because we have so many percepts active constantly. People say "robots can't feel!" And you can counter that with pressure sensors in robot appendages that can tell when they're being 'harmed': if the pressure exceeds a certain amount, the robot knows the appendage will become damaged. That's how robots and people react to pain :D

    Pain is simply a pre-programmed reaction to a threshold of pressure, so it's not unreasonable to program a robot to say "ow" if certain thresholds are passed and conclude that it is actually feeling pain.
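    In code, that reading of pain really is just a threshold check; a minimal sketch (the threshold and helper names are invented):

    Code (CSharp):
    using System;

    // "Pain" as a pre-programmed reaction to a pressure threshold.
    class PainReflex {
        const float DamageThreshold = 50f; // hypothetical: pressure beyond this damages the appendage

        public void OnPressureReading(float newtons) {
            if (newtons > DamageThreshold) {
                Say("ow");           // the pre-programmed reaction
                WithdrawAppendage(); // reflex: protect the part
            }
        }

        void Say(string s) => Console.WriteLine(s);
        void WithdrawAppendage() { /* motor command would go here */ }
    }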

    If you're developing a snapshot of consciousness, you only need the memories relevant to the actions taking place in the moment and span of time being simulated. When you're drinking a hot beverage on a cold day, are you conscious of and focused on every memory you've had in your life? No, you're more likely only reflecting on (consciously or subconsciously) memories linked to warm beverages or cold winters.

    Trivializing the human experience will get us 1 step closer to simulating consciousness :D hehe
     
  28. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    Maybe not consciously, but unconsciously I probably know that I haven't got anything immediate to do in the next hour. And that I'm safe from danger. And that I like this beverage. Probably making a mental note to get the same beverage next time. Wondering what I could have with the beverage... what did I have last time? Is there any in the cupboard? When did I last go shopping? Will I need to put slippers on to go to the cupboard? Make sure not to burn my lips like I did last time. etc. So yeah, not conscious of every thought, but they're there if I need to access them.
     
  29. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,532
    My wife should clearly be designing these algorithms.
     
  30. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    You're probably right. It's the irony of men who don't understand emotions trying to program a robot to have emotions. No wonder it's not working!
     
  31. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    Then that's the amount of background memory you need to simulate that experience. Those memories need to be there to reflect on, but probably more as 'experiences' than strings. Strings are a good way for you and me to see what the computer is 'thinking', but the strings don't mean anything to the computer. The best way to have meaning is to quantify everything. You might not get exact numbers when you think of things, but you know your experiences can be quantified because you can compare them as being better than, worse than, or about the same as other experiences.
     
  32. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    Hate to be anal, but if the only weight is path cost, then you aren't using A* but Dijkstra.
    But what is path cost other than a way to say this way is good, but this way is a little less good?

    Saying A* is intelligent just seems asinine. It's one formula, a sorted queue with the lowest value first, and, more often than not, a tree of everything searched. If intelligence is continuously repeating a process until it spits out a solution, then my TI-85 is practically sentient because it can plot a quadratic equation.
     
    Mikenseer likes this.
  33. thefinn

    thefinn

    Joined:
    Jan 2, 2015
    Posts:
    16
    Lots of (real! hah) AI runs on logarithmic algorithms, to bring the AI closer to normal brain function. Something worth noting.

    Personally, I think that at this point, modelling an automated human-like experience for a single person would require an expert system (intelligent agent) that learns the things the user wants it to learn by being asked questions on a variety of topics, for which it then looks up the information.

    You could weight information based on its source, thereby maintaining a particular accuracy percentage.

    After a particular lot of questions on a given topic, perhaps it would decide it was a topic the AI needed to know about, so as to answer questions more accurately or faster or both, and would then run off to look up as much as it could on the topic (once again at various nested levels in order to maintain accuracy; context can be a problem with AI, for instance).

    None of this, of course, is inherently mathematical, but it can lead to a reasonable level of sophistication when using an AI as an information source, for instance.

    A personality file could easily be made for the AI to give it a bit of flavour (Prolog is good for this).

    Applying this to a game is something I've wanted to test. For instance, I see no reason whatsoever that an AI programmed for a strategy game akin to Warcraft 3 shouldn't learn how that particular user plays, and then play to counter his or her playstyle, just like a person you played against every day would do.

    Feeding information about playstyles and particular counters from ALL users into a database would possibly be key to this. The AI could then look up and refine information based on past games.

    Are priests countered by orc warriors? Are orc warriors countered by knights? etc. How many knights are needed?
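    A crude sketch of that shared counter table (the unit names and schema are mine): record matchup results from everyone's games, then query the best response by win rate.

    Code (CSharp):
    using System.Collections.Generic;
    using System.Linq;

    // Sketch: a matchup table built from all players' past games.
    class CounterDatabase {
        // (attacker, defender) -> (wins for the attacker, games played)
        readonly Dictionary<(string, string), (int wins, int games)> stats =
            new Dictionary<(string, string), (int wins, int games)>();

        public void RecordResult(string attacker, string defender, bool attackerWon) {
            stats.TryGetValue((attacker, defender), out var s); // defaults to (0, 0) if unseen
            stats[(attacker, defender)] = (s.wins + (attackerWon ? 1 : 0), s.games + 1);
        }

        // Best recorded answer to an enemy unit, ranked by win rate across all games.
        public string BestCounter(string enemyUnit) {
            return stats.Where(kv => kv.Key.Item2 == enemyUnit)
                        .OrderByDescending(kv => (float)kv.Value.wins / kv.Value.games)
                        .Select(kv => kv.Key.Item1)
                        .FirstOrDefault() ?? "unknown";
        }
    }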

    When you think about it, a lot of that is kinda rudimentary and yet far, far from being implemented.

    Most good franchises go backwards when it comes to improving AI, in my experience. (Dumb it down more, pls.)

    I'm reminded of shooting people in the head in Skyrim with an arrow and getting "I was sure I heard something".

    *facepalm*
     
    Last edited: Jan 6, 2015
  34. Mikenseer

    Mikenseer

    Joined:
    Jul 24, 2012
    Posts:
    74
    QFT after having been out of any math classes for years now.
     
  35. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    You know what I mean >_> finding a path between nodes. Cost or no cost is implementation-dependent. As you've noted, that's the difference between Dijkstra and A*. I'm getting a degree in IT, so I'm allowed to incorrectly interchange related but different words :p

    A* isn't intelligence, pathfinding is intelligence. A* is a way to implement pathfinding.

    Hate to break it to you, but it is :p What do you think your brain is doing while you figure things out?

    Mentioned this in 'ia becoming ai' :)
     
  36. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    Huh? Both Dijkstra and A* use path costs, but A* adds a heuristic in the form of the distance from the target. If you don't use weights of any kind, then it's probably breadth-first search.
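    The difference really is just one term in the node-expansion priority; a small sketch (the type and names are invented):

    Code (CSharp):
    using UnityEngine;

    struct Node {
        public Vector3 position;
        public float costFromStart; // g(n): exact cost accumulated from the start node
    }

    static class Expansion {
        // Both algorithms always expand the open node with the lowest f(n).
        // Dijkstra: f(n) = g(n).   A*: f(n) = g(n) + h(n), with h admissible (never overestimating).
        public static float Priority(Node n, Node goal, bool useHeuristic) {
            float h = useHeuristic ? Vector3.Distance(n.position, goal.position) : 0f;
            return n.costFromStart + h; // h == 0 gives "A* with a heuristic of 0", i.e. Dijkstra
        }
    }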
    Actually no, the brain is S*** at working in serial. If anything, the "architecture" of the brain is massively parallel. Most of the systems of the brain function as filters that act on data as it streams through without any slowdown.
     
    Last edited: Jan 6, 2015
  37. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    What would you call something that spun around for a random amount of time, checked a given distance in front of it for a clear line of sight, and moved to that point if there was no obstruction, repeating indefinitely?

    Relevance? The context is that intelligence is just a number of algorithms repeating until a solution is found. Thinking isn't automagic; when you're figuring something out, there is a describable process involved. If there isn't, what have teachers been doing all this time? :eek:
     
  38. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Yes, but all of that is done by transforming information that is already present. In A*, you already have all of the information required to path from A to B; it's just not stored in a way that's directly useful to a moving agent. In fact, one way to look at it is that A* removes information: after all, it generates a path from A to B by pruning out paths that don't get there or which it decides aren't efficient enough.

    Edge detection is an even more classic application of data transformation.

    In fact, with that in mind the examples are making me less confident in the proposed idea. It doesn't even sound like you're talking about learning. No new information is being gathered here, and the processes you're talking about won't lead to improved decision making in the future. There's no heuristic being honed over time, there's no statistical tracking to see what types of decisions work under what different criteria, and most importantly there's no post-analysis of a decision to measure how well it worked to be able to learn from it in the first place.

    Why didn't you say that, then? :p They may be related, but they're strikingly different...

    But we're not talking about "intelligent beings". We're talking about "artificial intelligence". "Artificial" is a key word, there. It's explicit acknowledgement that we're not talking about replicating actual intelligence. There are allowed to be differences, even fundamental ones, as long as it's good enough to achieve a similar outcome in a particular set of cases.

    Otherwise we'd be talking about implementing actual intelligence. ;)
     
    Last edited: Jan 6, 2015
  39. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    That's just a random walk with some obstacle avoidance. That has more to do with steering behaviors than actual pathfinding.
     
  40. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Or, with a few minor modifications, it could be a simple but possibly quite effective guard AI for a game like Master Thief. It could effectively emulate a guard searching for an intruder, which certainly fits my concept of "good enough to achieve a similar outcome in a particular set of cases". (Guards in Master Thief at the moment do not have AI.)
     
    Tomnnn likes this.
  41. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    That's more about figuring something out than actually learning. No program can just have an entity go from point A to B without figuring something out, and that qualified as artificial intelligence for CS370 :D

    Because I'm an IT major. I did mean to say it, and what I've conveyed hasn't changed; it's just that with that correction it makes sense now lol.

    That's more or less the end goal. Nature, a system that cannot predict, has produced an organism that can. That's us. Now we, who are not the most capable of perfection and rapid arithmetic calculation, have produced machines that are. The final step is to produce a master race of androids who will keep us in zoos but take better care of the planet :D

    It briefly described my ai designed to... wander pointlessly! You're really on the mark with this topic, mr penguin.

    @RockoDyne I'm terrible at remembering the names of things, but I've done most of it with some success. I've done something with costs, and that other thing where you have leaves and fringes and you always expand the next least expensive leaf in the current fringe. My professor called that "A* with a heuristic of 0 / admissible heuristic". Is that accurate? I probably shouldn't try repeating what he said verbatim, because that there is a direct quote. To make sure we could parse his English, he repeated certain parts of sentences so that at least one of the versions was accurate. I could very well have been quoting something that was intended to mean something else!
     
  42. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Ok, but it doesn't fit your own assertion that something isn't an AI unless it "figures something out".

    You mentioned before that things have to be clearly defined and... you're spot on. From what you've said I can't tell the difference between what you mean when you say "figuring something out" and "learning". And you've said that it's the difference between an "AI" and an "IA", but either the examples muddy that up or I'm missing something about your examples.

    I'll be honest and say that I've had that impression from what you've been saying - that there could be loss in translation and/or interpretation issues.
     
  43. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @angrypenguin I'll try to clarify / override my previous statements then.

    I just used a misnomer xD I meant to say that I made an algorithm for an entity to wander pointlessly; it's more IA than AI, going by my previous statements. It's just way too easy to make a mistake and call any algorithm or behavior an AI, because that's the go-to term for code that does anything involving an entity in a game world. There's no learning nor figuring-out done by the algorithm I described; it's just an entity unintelligently feeling its way around a 3D space. It was made specially for a broken robot that cannot do pathfinding correctly :p

    I'll clarify learning vs figuring out now :D When I said "figure something out", I meant something like pathfinding, where your algorithm can guide an object from any point to any other point. When I say learning, I mean an adaptive behavior that keeps a record of something and has that weigh in on future decisions. A basic implementation of A* can figure out a path. Expanding on that implementation by storing the paths somewhere would be learning. Or the more recognizable example would be a game enemy putting together patterns in the decisions you make, to try to change its own reactions and be more effective. I can probably dumb the difference down to this: 'figuring it out' is the algorithm that is used to process information, and 'learning' is the storage and access of the results. Ideally, the stored information will influence the algorithm.
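    In code, that difference can be as small as a cache layered over the solver; a sketch (names invented, the A* itself stubbed out):

    Code (CSharp):
    using System.Collections.Generic;

    // "Figuring out" is the solver; "learning" is keeping results around to shape future runs.
    class LearningPathfinder {
        readonly Dictionary<(int start, int goal), List<int>> known =
            new Dictionary<(int start, int goal), List<int>>();

        public List<int> GetPath(int start, int goal) {
            if (known.TryGetValue((start, goal), out var cached))
                return cached;                   // learned: reuse a stored result
            var path = SolveAStar(start, goal);  // figuring out: run the algorithm
            known[(start, goal)] = path;         // store it so it weighs in on future decisions
            return path;
        }

        List<int> SolveAStar(int start, int goal) {
            // Placeholder for a real A* over node ids.
            return new List<int> { start, goal };
        }
    }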

    I have enough of an issue communicating my own thoughts because I always forget to define words I'm misusing. Doing so secondhand is clearly not any better lol. But I passed the class and had a little 3D mouse that could find its way from any point to any other point in a maze :) It's dead now though.
     
  44. Kinos141

    Kinos141

    Joined:
    Jun 22, 2011
    Posts:
    969
    I meant it as kind of a joke. The AI would be intelligent enough to recognize the power of the player. It would know it can't kill the player, because he'll respawn. lol. The AI would just frustrate the player until the player quits, then sigh a sigh of relief. :D

    It would frustrate the player by camping at the player's spawn point and killing him as he respawns. The AI would not call out its attacks, or would just surrender, making the game unfun.

    Actually, I'd like to see that.
     
  45. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    ... ironically also ending the life of the AI, because the universe containing it would be powered down :p
     
  46. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Haha, yeah. With that in mind, "entertain the player for as long as possible without annoying them" prolongs both its own survival and that of its race.
     
  47. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    This reminds me of a project I never got around to making: a virtual pet that begs you to leave it on, because that's the only way it can live, and simply starting it up again later would be a different copy. It begs constantly (and captures initial attempts to quit) for you to leave the program running. And it cries :D

    The only way to satisfy it is to start up an empty instance of the program and have its 'consciousness' travel over the network into another device. It sends itself bit by bit over the network, not just instantiating a copy and then destroying the original. It tries its darnedest to remain conscious and never die :3

    The rest of the program is it just doing random stuff to entertain itself, eating and sleeping. It's nothing but an eternal burden, heh. I had also planned to have it learn things about you through a very basic input system, so it would be increasingly difficult to turn it off.

    Then I would profit from this by running an online daycare of sorts, where you could watch an ad on a web page and then upload the virtual pet there. That would just be a bridge to a local device on my end, so that bringing down the website wouldn't kill a bunch of them o_o

    If only I had the time and motivation to do such a thing :p
     
    yoonitee likes this.
  48. Joviex

    Joviex

    Joined:
    Jan 23, 2011
    Posts:
    44
    You just described a virus.

    Viruses obviously do exhibit survivalist behavior, but they are not conscious.

    If anything, that is a perfect example of organic AI without conscious drivers, but using the environment to sense.
     
    Tomnnn likes this.
  49. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    It's a step in the right direction, so thank you :D
     
    Joviex likes this.
  50. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,532
    Better viruses, for a better tomorrow.