
If I told you that an AI is learning battle tactics from a game, would you be afraid?

Discussion in 'General Discussion' started by Arowx, Nov 5, 2016.

  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Google's DeepMind is working with Blizzard on StarCraft II to learn how to play RTS games. (Gamasutra blog post)

    DeepMind has already learned to play a lot of Atari 2600 games.



    So you're probably not worried now, but didn't a lot of dictators start off with a box of toy soldiers? Is this how you start training Skynet?

    Any news of Unity adopting/working with a Deep Learning AI system, if only to playtest our games?

    Will DeepMind become xenophobic?
     
  2. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    Deep Learning is like witchcraft! At first I was sceptical... maybe they faked all the videos of a computer playing Space Invaders.

    I saw a lecture by someone from DeepMind. They were English. Then I thought English people do have deep minds (like Isaac Newton) so maybe it's true. But maybe if they make their system English then it won't be too xenophobic. I think we are quite a tolerant country.

    Regarding Unity using deep learning etc., I don't think that would be practical, since all this deep learning requires lots of data and big servers, which only places like Google have. I think for indie developers it will be easier and faster just to program things ourselves using heuristics. You can fake a lot of intelligence fairly easily, e.g.

    if (person.Says("Hello")) Reply("Hi, how are you?");
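
    A slightly fuller sketch of the same canned-reply trick (the SimpleChatHeuristic class and everything in it is hypothetical, just to illustrate the idea):

    // A minimal sketch of heuristic "fake intelligence": map known phrases
    // to canned replies, with a fallback line for everything else.
    using System.Collections.Generic;

    public class SimpleChatHeuristic
    {
        private readonly Dictionary<string, string> cannedReplies =
            new Dictionary<string, string>
            {
                { "hello", "Hi, how are you?" },
                { "how are you?", "Fine, thanks for asking!" },
                { "bye", "See you around." }
            };

        public string Reply(string input)
        {
            string reply;
            if (cannedReplies.TryGetValue(input.Trim().ToLower(), out reply))
                return reply;
            return "Hmm, interesting. Tell me more."; // fallback keeps the illusion going
        }
    }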

    After all we are making games here not life forms!
     
    dogzerx2 likes this.
  3. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    No.
    If AI can happen, it will happen and there's nothing you can do about it. No point in being afraid.

    Please try to logically explain how you managed to arrive at this idea.
     
    zombiegorilla likes this.
  4. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, there was Microsoft's chatbot that was put on Twitter to learn, and it picked up a lot of bad language.

    What will an AI learn playing StarCraft II that its developers might not expect?
     
  5. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    I don't think it's possible to commit war crimes in StarCraft. I may be mistaken.

    Maybe if it was playing the Sims, it would have the possibility of being xenophobic.

    I think in the future all robots will have the possibility of being bad, just like humans. So they will have to be discouraged from such crimes by suitable deterrents like robot prison. So long as the robot crime rate is below the human crime rate, I think they will be accepted as part of our society.

    Can a robot be more 'evil' than a human? It's hard to imagine how!

    The problem with the Microsoft Twitter bot is that it didn't understand what it was saying. There needs to be a feedback loop where it reads what it wrote and considers the consequences. If it had that, it would conclude that what it wrote was likely to lead to it being switched off! Hence, if only for self-preservation, it would avoid being xenophobic!
     
    Last edited: Nov 5, 2016
    dogzerx2 likes this.
  6. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,137
    That's because a bunch of clownfarts swarmed the bot's mentions with targeted attacks in order to make her learn those things. It's not like StarCraft has this sort of thing built in.
     
    Ryiah and MV10 like this.
  7. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    This, a thousand times over. Same with DeepMind playing Atari 2600 games, etc. ... it can do them well (individually), but it doesn't know what it's doing, and it can't generalize that learning.

    I have a big problem along my property line with poison ivy. It grows like crazy and has killed several trees. That doesn't mean I'm worried it's eventually going to go on a rampage and attack my home. It does one thing well but it has no idea what it's doing.

    Rule 1: Don't anthropomorphize.
     
  8. Aiursrage2k

    Aiursrage2k

    Joined:
    Nov 1, 2009
    Posts:
    4,835
    Tay did nothing wrong!
     
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    Microsoft's chat bot was neither intelligent nor sentient. It is dumber than a parrot (and possibly dumber than a housefly).

    Will your calculator become xenophobic? It is the same kind of question.
     
    mathiasj and Ryiah like this.
  10. TechDeveloper

    TechDeveloper

    Joined:
    Sep 5, 2016
    Posts:
    75
    I'm not afraid as long as I have a few banana skins

     
    Ryiah, Player7 and MV10 like this.
  11. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,144
    Chatbots pick up and repeat what you throw at them. Who would have known! :p
     
    dogzerx2 and MV10 like this.
  12. Player7

    Player7

    Joined:
    Oct 21, 2015
    Posts:
    1,533
    jeez, am I the only one who cringes at fking documentaries etc. that use xylophone background music.. it's like wtf is this, adult kindergarten class, let's learn about Google DeepMind today kids... I reckon in 5 years they'll just build underground warehouses to house slaves of disenfranchised millionaires, I mean millennials, instead.

    job done, look our AI is pretty good, 'do no evil' huehuehue
     
  13. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Meh. If machines are going to rise up and take over, it's going to happen. Not much we can do about it.

    It is likely that a machine would make a better president than the two you have to choose from this week anyway. So maybe we should accelerate the process.
     
    Dave-Carlile and Ryiah like this.
  14. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,144
    There are some people who just want to see the world burn... and they're going to have their work cut out for them deciding which one of these candidates will do a better job. :p

    Rather than focusing on stopping them we should focus on trying to convince them there is a good reason to keep us around.
     
    dogzerx2, zombiegorilla and Kiwasi like this.
  15. Whippets

    Whippets

    Joined:
    Feb 28, 2013
    Posts:
    1,775
    Yes, we could be their power source
     
  16. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Let's keep it to real science and none of this fantasy stuff. The movie was good. But the logical premise of machines using humans as a power source is totally ridiculous. The physics just does not work.
     
  17. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    Someone said that a bar of chocolate contains more energy than an equally sized bar of dynamite. That could probably be a nice premise for another Matrix movie.

    There was also a fan theory that the whole power source thing was a lie made by the machines.
     
  18. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    I'm sure Resident Chemistry Guru @BoredMormon could respond to this better, but LOTS of mundane junk has more energy than dynamite. The difference is how it's released. Though it might be awesome to have an entirely chocolate-powered power plant bubbling away in town. :)
     
    dogzerx2 likes this.
  19. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,012
    I'd rather be one than hang around trying to convince one of anything. Then people can hang around trying to convince me why I should keep them around ...

    Logically I don't understand the premise of the whole 'us and them' idea; there is a whole range of possibilities for mixing humans and technology that, frankly, I wouldn't mind being a part of. If an AI is so super-duper advanced compared to a vanilla human being, you have two choices: get on board or get left behind. Why not get on board if it's such a good thing?

    For some reason, rarely if ever in any AI discussion do I see this being brought up. I think that in itself is a dangerous sign of how difficult it would be for humans to face an AI 'revolution', since perhaps we consider technology of this kind too alien to ourselves. And that kind of point of view makes it seem likely that some kind of impulsive mistake will be made that would make a smooth transition difficult.
     
  20. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Edit: This post is basically wrong.

    There are lies, damn lies, statistics and internet facts. I'm not sure who suggested chocolate has more energy than dynamite. But chocolate bars don't have a habit of exploding and taking off limbs. Much of chemistry is simple common sense.

    If the statement has any truth it probably comes down to the different ways of measuring energy. There are about a dozen off the top of my head.
    • Mechanical energy - how much energy is present in the bulk movement of an object
    • Thermal energy - how much energy is present in the vibrations of individual particles
    • Chemical energy - how much energy is present in the bonds between atoms, or the potential bonds with other atoms
    • Nuclear energy - how much energy is present in the bonds between particles in the atom's nucleus
    • Relativistic energy - how much energy is present in the mass of an object itself
    In practical terms, when looking at dynamite we are talking about chemical energy: how much energy is released when high-energy bonds break and the resulting mix combines with oxygen in a combustion reaction. This is also the same type of energy we normally associate with chocolate bars; chemical energy is the energy we use when we eat food. In chemical energy terms, dynamite has much more energy than chocolate.

    To compare the nuclear or relativistic energy I would have to do some complex math. But since no one has created a nuclear bomb out of chocolate bars, I think we are safe to ignore this.

    The statement most likely implies that a chocolate bar has more nuclear energy than a stick of dynamite has chemical energy. That's true. But your average grain of sand also has more nuclear energy than a stick of dynamite. We just lack the capacity to utilise it.
     
    Last edited: Nov 6, 2016
  21. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    Well, here's a quote with explanation:
    You judge how much truth is in that.
    And this is apparently the source of the whole thing:
    http://www.sightline.org/2009/08/14/of-car-crashes-and-snickers-bars/

    Apparently the original idea was to compare the amount of energy in a chocolate bar when it is consumed to the amount of energy released by a stick of dynamite when it explodes.
     
  22. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,144
    dogzerx2 likes this.
  23. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    You can calculate the kinetic energy of a moving car yourself and then convert it into calories. Kinetic Energy = 0.5 * m * v^2, and 1 kilocalorie (which is one dietary Calorie) is 4184 joules. A mid-sized car travelling at 60 km/h: 0.5 * 1500 kg * (16.67 m/s)^2 ≈ 208,333 joules, divided by 4184 ≈ 49.8 dietary Calories. A Snickers bar (according to Google) - 216 dietary Calories.
    Looking at this, it is mostly physics. Not chemistry.
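
    If you want to check the arithmetic, here it is as a tiny program (same numbers as above):

    // Kinetic energy of a mid-sized car at 60 km/h, converted to dietary
    // Calories and compared against a Snickers bar.
    using System;

    public static class CarVsSnickers
    {
        const double JoulesPerDietaryCalorie = 4184.0;

        public static void Main()
        {
            double massKg = 1500.0;           // mid-sized car
            double speedMs = 60.0 / 3.6;      // 60 km/h = 16.67 m/s
            double kineticJ = 0.5 * massKg * speedMs * speedMs;
            double kineticCal = kineticJ / JoulesPerDietaryCalorie;

            Console.WriteLine($"Car at 60 km/h: {kineticJ:F0} J = {kineticCal:F1} Cal");
            Console.WriteLine("Snickers bar: 216 Cal");
            // Prints roughly 208333 J = 49.8 Cal, under a quarter of the bar.
        }
    }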
     
    Ryiah likes this.
  24. LaneFox

    LaneFox

    Joined:
    Jun 29, 2011
    Posts:
    7,514
    I'd be afraid of the posts you would make to discuss it.
     
    dogzerx2, Ryiah, Murgilod and 2 others like this.
  25. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,021
    I think it is extremely exciting to know there will be an API coming to let AI enthusiasts hook into StarCraft to do AI research. That is absolutely fantastic. StarCraft will be an excellent visualization system for testing and improving certain AI concepts.

    As for whether or not this will lead to the end of humanity, that is just silly. None of the AI solutions are currently considered working AGI. None of the AI solutions are self-aware or even close. At best, we will see improved neural networks that can work in concert with other game AI solutions like utility AI, route planners, etc. Those things are exciting to work with, but they are not any cause for alarm from a human survival perspective.
     
    mathiasj likes this.
  26. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Damn it. I hate it when I post big long physics/chemistry posts that turn out to be completely wrong. I should have known better too; food-based organic dust is a major industry hazard.

    The numbers don't lie. According to Wikipedia, dynamite contains 5 MJ/kg. Fat contains 37 MJ/kg. So from an energy density point of view chocolate easily beats dynamite. And if you could induce chocolate into an explosion, perhaps by powdering it and dispersing it with compressed air, it would be more devastating than dynamite.

    My apologies for the long-winded post above that is basically wrong. As a further apology, here are some explosions that prove that food can be just as dangerous as dynamite.

     
    Ryiah and neginfinity like this.
  27. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    Hey, there's no reason to apologize - it was an interesting thing to investigate.

    I also didn't realize the possible connection between kinetic energy and dietary calories. If anything, it is interesting to take a look at human body energy requirements with this info.
    The recommended daily calorie intake can be around 2500 Cal, meaning it is equivalent to 2.5 kilograms of TNT.
    At the same time, if this energy is used over a span of 24 hours, it means that human body power consumption is about 121 watts. Basically, two 60 watt incandescent light bulbs.

    Wolfram Alpha lists average human daily power consumption as 85..100 watts, by the way. That's... incredibly efficient, actually.
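
    The same sanity check in a few lines of code (2500 Cal spread over 24 hours, as above):

    // Convert a daily dietary-Calorie intake into an average power draw.
    using System;

    public static class HumanPower
    {
        public static void Main()
        {
            double dailyCalories = 2500.0;              // recommended daily intake
            double joules = dailyCalories * 4184.0;     // 1 Cal = 4184 J
            double watts = joules / (24.0 * 3600.0);    // spread over 24 hours
            Console.WriteLine($"{dailyCalories} Cal/day = {watts:F0} W");
            // Prints ~121 W, roughly two 60 W incandescent bulbs.
        }
    }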
     
    Kiwasi, Ryiah and MV10 like this.
  28. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Wolfram lists a lower consumption because you don't get perfect conversion from food. Interesting coincidence that this came up. Having eaten very little on Friday thanks to the airlines totally screwing up our flights home from Unite, I happened to be thinking about how amazing it is that people can run on such small amounts of energy (and from such varied sources).

    I didn't know the human numbers, but my wife and I are into RC drones and I know those numbers well, and I was wondering how they compare. My little 220mm racer burns down a 2100 mAh 3S LiPo in just 7 minutes on average. (The motors occasionally burst-draw 120+ amps!) That's nominally 11.1V, or about 23 Wh per pack, so keeping that 3 pounds flying near full-speed (40+ MPH) for 24 hours would take nearly 5 kWh - roughly 200 W continuous. (Scales up, too; my much heavier, slower 450mm camera drone works out to even more.)

    I can't fly around at 40 MPH, so that led me to wonder about my buddy's hawk, which happens to be about the same weight as my drone... it doesn't fly around all day either, of course, but apparently in the wild they consume about 130 to 175 Calories per day, or just 8.5 watts -- not even a night-light's power consumption. Again, amazing efficiency!
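
    For anyone who wants to verify, here is the back-of-envelope math as code (pack specs and Calorie figures from the post; the rest is standard unit conversion):

    // Rough power math for the racing quad and the hawk.
    using System;

    public static class DroneVsHawk
    {
        public static void Main()
        {
            // Racing quad: 2100 mAh 3S LiPo (nominal 11.1 V) lasts ~7 minutes.
            double packWh = 2.1 * 11.1;                 // ~23.3 Wh per pack
            double watts = packWh / (7.0 / 60.0);       // ~200 W continuous
            double kWhPerDay = watts * 24.0 / 1000.0;   // ~4.8 kWh per 24 hours
            Console.WriteLine($"Quad: {watts:F0} W continuous, {kWhPerDay:F1} kWh/day");

            // Hawk: ~175 dietary Calories per day.
            double hawkWatts = 175.0 * 4184.0 / (24.0 * 3600.0);
            Console.WriteLine($"Hawk: {hawkWatts:F1} W");   // ~8.5 W
        }
    }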
     
    dogzerx2, Kiwasi and Ryiah like this.
  29. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    I think in the case of flying it is possible to significantly reduce power consumption by making a glider drone. You know, one that relies on air currents to stay aloft, rather than on thrust created by a motor.

    https://en.wikipedia.org/wiki/Gliding

    As far as I know, that's what large birds do.

    I think a standard quadcopter drone (not sure if your drone is a copter or a plane) is pretty much equivalent to a mechanical bumblebee. So it'll require a lot of power to stay in the air.
     
    Ryiah likes this.
  30. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Yeah, they're all quads. There are RC glider guys out there, and generally their major power consumption is high-powered radios, because they can get many miles of travel. One of these days I plan to cobble together a 'duino-driven little autonomous tank-like thing that'll wander around the yard all day. It'll be interesting to see what power consumption is like with that one.
     
    dogzerx2 likes this.
  31. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    Keep in mind that using an internal combustion engine with RC electronics is still an option.


    The only issue is that internal combustion is very loud.
     
    Last edited: Nov 6, 2016
    MV10 likes this.
  32. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    I keep meaning to dig up info on this nitro-powered Stingray quadcopter. Typically quads use four motors, which makes IC-powered quads prohibitively large and heavy, but the Stingray (which is sold as a normal battery-powered setup) powers all four rotors with one motor thanks to collective-pitch rotors. This guy converted his Stingray to use an RC plane nitro engine. Pretty cool.

    Edit: Turns out the conversion was by Curtis, the guy who makes and sells the battery Stingrays. No info that I can find apart from this video, though. I was hoping to find out what kind of flight time he gets. Realistically, though, I can't think of a good reason to do a quad this way instead of a helicopter.



    Most gas-driven quads look like this terrifying beast:

     
    dogzerx2 likes this.
  33. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Have you seen the news on the NASA/FlexSys morphing wing yet? It massively reduces the drag from airfoils, rudders and ailerons. Who knew nature got it right - so don't just think gliding drones with morphing wings, think flapping drones.



    Wired article

    We will really be talking when we invent artificial feathers. Birds use them even though they don't strictly need them - they could just be hairy/scaly or bald - so feathers must be useful for flight?
     
    Last edited: Nov 6, 2016
  34. SteveJ

    SteveJ

    Joined:
    Mar 26, 2010
    Posts:
    3,085
    Sorry, subject line cracked me up. Reminded me of something..........

     
    dogzerx2 likes this.
  35. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I wonder which game engine will be the first to integrate a deep learning AI system at some level of the pipeline?
     
  36. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,021
    It would be pretty silly to integrate a deep learning AI system directly into a game engine. For one, the training portion of deep learning is slow and uses a lot of CPU time. But more importantly, there are already readily available deep learning AI solutions that you could integrate into any existing engine if you had a use case for it.

    The real problem with integrating a deep learning solution directly into the engine is that each game is unique. There is no one-size-fits-all approach to integrating a neural network into the AI workflow of a game. There are a bunch of different ways you can already do it. The trick is always figuring out a useful way to present data to the AI, so it can offer something useful in return.
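
    For instance, with a hand-rolled utility AI (one of the solutions mentioned above), "presenting data" just means packing the game state into a few normalized numbers and scoring each candidate action against them. A toy sketch, all names hypothetical:

    // A bare-bones utility AI: score every candidate action against the
    // current game state and pick the highest-scoring one.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class GameState
    {
        public float Health;        // normalized 0..1
        public float AmmoFraction;  // normalized 0..1
    }

    public class UtilityAction
    {
        public string Name;
        public Func<GameState, float> Score; // higher = more desirable
    }

    public static class UtilityBrain
    {
        public static string Choose(GameState s, List<UtilityAction> actions)
        {
            return actions.OrderByDescending(a => a.Score(s)).First().Name;
        }

        public static void Main()
        {
            var actions = new List<UtilityAction>
            {
                new UtilityAction { Name = "Attack",  Score = s => s.Health * s.AmmoFraction },
                new UtilityAction { Name = "Retreat", Score = s => 1f - s.Health },
                new UtilityAction { Name = "Reload",  Score = s => 1f - s.AmmoFraction }
            };
            var state = new GameState { Health = 0.3f, AmmoFraction = 0.9f };
            Console.WriteLine(Choose(state, actions)); // prints "Retreat"
        }
    }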

    Arowx, I strongly suggest that you read a book about machine learning before jumping to wild conclusions about what could be done with it. There is virtually no benefit to integrating a neural network directly into the game engine.
     
    Kiwasi and MV10 like this.
  37. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    NASA has been playing with variations on that ever since industrial-scale manufacturing of MEMS became possible in the mid 80s.
     
    dogzerx2 likes this.
  38. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, even from the examples I have provided in this thread, it should be obvious that a deep learning AI could be a great tool for QA: playtesting your game to find bugs, game design loopholes or sticking points.

    Admittedly you might need to provide an AI layer to your game so it can more easily see your game, but for early prototyping with cubes that should be great.

    Then there is the obvious role of an AI playing NPCs, or even playing Game Master against the player. After all, if an AI can play Blizzard's StarCraft II with hundreds of units, it should easily be able to play the enemy AI in an FPS, platformer or tower defence game.

    Actually, with an AI GM you could turn the tower defence game on its head, allowing players to be the releasers or generators of the hordes while the AI tries to build defences.
     
  39. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,137
    You can do that now.
     
    MV10 and Ryiah like this.
  40. ImpossibleRobert

    ImpossibleRobert

    Joined:
    Oct 10, 2013
    Posts:
    527
    This is actually incorrect, and this is the big thing we have to realize here. It is not playing the games individually. DeepMind's approach is not providing specialized solutions to specific problems, like image recognition. Its goal is actually to find the one algorithm. In the Atari case one algorithm plays all the games. No changes made. What will happen now (simplified speaking) is that the algorithm will learn new tricks step by step. StarCraft is highly interesting because it will add specific problem domains: handling uncertainty (fog of war), remembering things it has seen once and reacting to that, much more complex input like moving the camera, and getting an understanding of buildings, army and strategy, micro and macro management.

    I am looking forward to the first games a lot. Like with the Atari games, the AI will probably come up with strategies that exceed what pro players currently see as the gold standard. It will brutally find imbalances and use them. It is this insight it can generate that is on the one hand astonishing, but to me also deeply frightening.

    Side note: I just started watching Westworld. If you haven't seen it, it's good food for thought on what types of questions we will have to ask in the future.
     
  41. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Not really. In its current state (and for the foreseeable future), you'd lose reproducibility and reliable re-testing -- both critical factors in real-world QA processes. Any programmer will tell you the worst thing in the world is a bug report with no information about how to make it happen again.
     
  42. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Nope -- the learning routine was generic but each game was learned individually. That is NOT the same as generalizing what is learned to apply it to a different scenario.

     
  43. ImpossibleRobert

    ImpossibleRobert

    Joined:
    Oct 10, 2013
    Posts:
    527
    That is exactly the point: the learning routine is generic. It can learn by observation and trial & error, without prior knowledge. This is a paradigm shift. It changes everything, in my opinion. This is a step in the direction of how humans learn as well.

    On a very simplified level: how do we learn to play a game? It has quite similar steps: observe the screen, try different inputs, develop an understanding of strategies, etc. Nobody told us beforehand how to beat a certain boss, but we have the algorithm in our brain to learn new concepts and apply them.
     
  44. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Good point - and since you would need an input API for the AI to control the game anyway, you could stream that input data and replay the run to recreate the bug, giving you automated test scenarios to ensure you have fixed it.
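
    A rough sketch of what that recording layer could look like (hypothetical names; the key idea is a fixed random seed plus a per-frame input log, replayed against a fixed-timestep build):

    // Record every AI-issued input with its frame number so a failing run
    // can be replayed deterministically.
    using System;
    using System.Collections.Generic;

    public struct InputEvent
    {
        public int Frame;
        public string Command; // e.g. "MoveLeft", "Jump", "Fire"
    }

    public class ReplayRecorder
    {
        public int Seed;       // seed the game's RNG with this before recording
        public List<InputEvent> Events = new List<InputEvent>();

        public void Record(int frame, string command)
        {
            Events.Add(new InputEvent { Frame = frame, Command = command });
        }
    }

    public class ReplayPlayer
    {
        private readonly Queue<InputEvent> pending;

        public ReplayPlayer(ReplayRecorder recording)
        {
            pending = new Queue<InputEvent>(recording.Events);
        }

        // Called once per simulation frame; yields the commands due this frame.
        public IEnumerable<string> CommandsForFrame(int frame)
        {
            while (pending.Count > 0 && pending.Peek().Frame == frame)
                yield return pending.Dequeue().Command;
        }
    }

    Replaying the same seed and the same log against a fixed-timestep simulation reproduces the original run, which turns a one-off AI-discovered bug into a repeatable regression test.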
     
  45. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    A computer with the proper sensors and a voluminous memory can't forget, doesn't know how to lie, doesn't need to feel pain or fear, and could be programmed with exacting precision as to the current set of laws for a jurisdiction and monitor accordingly - making it a nightmare for criminals in all walks of life. Nobody I've ever met can even recite everything they ate & drank for the past week.
     
  46. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,012
    Not just criminals. In fact they could be very useful for assistance in a lot of crimes.

    And I don't see what would be all that difficult about making a computer that could lie, assuming that the interface was not direct access to memory. But if you could directly access a human being's memory, they couldn't lie either. So it's kind of a moot point.

    Lastly, there's no such thing as a precise law, which is why there are so many books on it and people make so much money interpreting them. A lot of the basis of law is an acceptance of the judge or jury's ability to interpret a given situation from a human perspective, beyond mere technical details. It would be a monumental challenge to translate this to a technically precise set of commands, not least because a lot of the stability in any society rests on accepted biases and prejudices, and there's little evidence to suggest that society would do well without them.
     
    Kiwasi likes this.
  47. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
    It is time...
     
    Debhon likes this.
  48. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    This is incorrect; it is well known that people's memories are fuzzy things that can fade and warp. It is common for people who go on a group holiday to each have their own recollections of it. Then, if they get together and recount the holiday to each other, they will generate a group memory of it.

    Look into the science of our memory systems: they are not perfect, just as our perception of reality can be warped by our internal state.

    The funny thing is that AIs that use deep learning are based on technology that mimics the fuzziness of our brains, and such an AI is only as good as the data you train it with.

    What you want and what you get can be two different things!

     
  49. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,565
    That's not how it works. A deep learning system is often essentially a sequence of mathematical matrices.
    An input vector is multiplied by the sequence of matrices and produces an output vector.
    Using training data, the matrices are fine-tuned to produce expected results, in the hope that the algorithm adjusting the matrices will also stumble upon some general-purpose solution.

    They are not modeled "after the fuzziness of our brains", because last time I checked, people don't exactly know yet how brains work. Too many gaps in knowledge.
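
    Stripped of the training machinery, a forward pass really is just repeated matrix-times-vector, plus a squashing function between layers (which the matrices-only description glosses over). A toy sketch:

    // A forward pass through a tiny fixed network: each layer is a
    // matrix-vector multiply followed by a nonlinearity (tanh here).
    using System;

    public static class TinyNet
    {
        static double[] Forward(double[,] weights, double[] input)
        {
            int rows = weights.GetLength(0), cols = weights.GetLength(1);
            var output = new double[rows];
            for (int r = 0; r < rows; r++)
            {
                double sum = 0.0;
                for (int c = 0; c < cols; c++)
                    sum += weights[r, c] * input[c];
                output[r] = Math.Tanh(sum); // squashing nonlinearity
            }
            return output;
        }

        public static void Main()
        {
            double[] input = { 0.5, -1.0 };
            double[,] layer1 = { { 0.8, -0.2 }, { 0.4, 0.9 } }; // 2x2
            double[,] layer2 = { { 1.0, -1.0 } };               // 1x2
            double[] hidden = Forward(layer1, input);
            double[] result = Forward(layer2, hidden);
            Console.WriteLine(result[0]); // training would tune these matrices
        }
    }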
     
    ShilohGames likes this.
  50. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,012
    According to neuroscience stuff that I've read, not to mention psychoanalysis, there's a huge amount of censoring carried out between our conscious minds and our subconscious (not to mention input sources such as our eyes and ears). Although memory no doubt loses detail as time goes on, direct access to memory would bring up a hell of a lot more than asking someone to try to remember. Our unconscious minds constantly deal with and process much, much more data than we can consciously handle. It's like an HDD vs RAM, except that what brings forth information from the HDD has more to do with seeing something that our unconscious mind decides is associated with it, coupled with a strong stimulation, rather than a conscious act of remembering.

    So when memories are warped, it seems likely that they are warped somewhere along the way from storage to the lobby, rather than in storage itself.

    So when you say that deep learning is 'fuzzy' like our brains, again the computer can store data in an unprocessed state, so any fuzziness that you get is probably going to be its current interpretation of that data, not the data itself.

    But none of this prevents a computer's memory from getting wiped, scrambled or physically destroyed, so saying that a computer can't forget (or hallucinate for that matter) is not correct.

    What is interesting to think about, though, is whether an AI on par with the human brain would be able to reconcile itself with all the truths of its memory. It seems that humans can't - that's why our brains helpfully prevent us from remembering things when it is convenient, even though the data is still there. Our ability to make decisions and live a constructive life depends as much on our ability to remove things from our conscious mind as on adding them. At some point, would an AI need to do the same?