
Programmer, Fired After 6 Years, Realizes He Doesn't Know How to Code

Discussion in 'General Discussion' started by Ony, Jun 8, 2016.

  1. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    Wait.....seriously!? O_O

    Is this that Google DeepMind stuff? I remember something about the bot that won at Go being really advanced in some way.

    That's more frightening than I thought...
     
  2. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    That was kinda what I was getting at, really. The places it will get us are the ones we aren't thinking about. The machine will think nothing is wrong because it's doing what we told it to do.
     
  3. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    I think someone posted the stamp collecting robot video earlier. 'Doing what we told it to do' is quite a scary prospect.
     
  4. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    The machine won't think "nothing is wrong", though. It has no concept of right or wrong.
     
  5. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    Heh, yeah, that was me.

    It truly is.

    And I think this is where assuming the machine is intelligent can really come back to bite them the worst. There's always a time when you are writing a script and you say "Yeah, the computer will know what I mean when I enter it like this", and it even works in the basic circumstances you intended.

    Then a fringe case happens and it goes wildly wrong from what you intended. The code isn't broken, it's just not acting as you intended. But instead of, like, some little anomaly on a gameplay clock or something, an automated bulldozer decides to demolish a school or something.

    Perhaps I personified a little too much here. What I mean is your code won't throw an exception, and won't be caught by some sort of logic that says "if [circumstance is this] do this, EXCEPT when doing that" that will tell it to stop doing that thing.
     
  6. tedthebug

    tedthebug

    Joined:
    May 6, 2015
    Posts:
    2,570
    I read a great book about a stockbroking AI that was built to digest the news & make microsecond bets on stocks based on what it predicted was happening. Eventually it realised that disasters caused various impacts so it started causing disasters so it could profit from the right stocks. The ultimate insider trader.
     
    Kiwasi and Ryiah like this.
  7. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    No, DeepMind is to the old stuff what Uncharted 4 is to Atari's Pitfall, i.e. a more sophisticated version of it. It's basically a neural network that learns to predict the next move (a probability; at that scale it's literally intuition), plus MCTS (basically a stochastic minimax), with a lot of computational power behind it. It's rather "trivial" (still plenty complex, but not in a way that couldn't be replicated by a smart kid in his bedroom, minus the computational power; in fact someone did replicate it and put it on GitHub, I forget the name).
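
    A rough sketch of that combination, just to make it concrete (the function names and game hooks here are mine, and it's a toy, not AlphaGo's actual code): a learned policy hands out prior probabilities over moves (the "intuition"), and MCTS spends the compute budget double-checking those hunches.

    Code (Python):
    import math

    # Toy stand-in for the learned "intuition": assigns prior probabilities
    # to moves. In AlphaGo this is a deep neural network; here it's a stub.
    def policy_priors(state, moves):
        return {m: 1.0 / len(moves) for m in moves}

    class Node:
        def __init__(self, state, prior):
            self.state, self.prior = state, prior
            self.children = {}              # move -> Node
            self.visits, self.value_sum = 0, 0.0

        def value(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_move(node, c_puct=1.5):
        # PUCT-style selection: exploit value, explore in proportion to prior.
        total = math.sqrt(sum(ch.visits for ch in node.children.values()) + 1)
        return max(node.children.items(),
                   key=lambda kv: kv[1].value() +
                                  c_puct * kv[1].prior * total / (1 + kv[1].visits))

    def mcts(root_state, legal_moves, apply_move, rollout, n_sim=200):
        root = Node(root_state, prior=1.0)
        for _ in range(n_sim):
            node, path = root, []
            # 1. Selection: walk down the tree using priors + running values.
            while node.children:
                move, node = select_move(node)
                path.append(node)
            # 2. Expansion: add children weighted by the policy's "intuition".
            moves = legal_moves(node.state)
            if moves:
                priors = policy_priors(node.state, moves)
                for m in moves:
                    node.children[m] = Node(apply_move(node.state, m), priors[m])
            # 3. Evaluation: cheap random rollout (AlphaGo also used a value net).
            result = rollout(node.state)
            # 4. Backup: propagate the result up the visited path.
            #    (Simplified: ignores the sign flip for alternating players.)
            for n in [root] + path:
                n.visits += 1
                n.value_sum += result
        # Play the most-visited move rather than the highest-value one.
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]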

    There are many AI modules; we haven't assembled them together yet, but some experiments in robotics deal with coding "lying" or "curiosity". Curiosity is what interests us, and there are many implementations: the most basic is exploration, i.e. score by distance from the goal or distance from the top-performing solution; a more sophisticated version is to favor many different ways of doing the same thing (scoring diversity regardless of the performance of each way). Another is basically following a heuristic based on how much new information a trail gives (see the sketch below).
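
    A toy sketch of the "score by novelty" flavor of curiosity (the names and numbers are made up): instead of rewarding progress toward a goal, you reward behaviors that are far from everything tried so far.

    Code (Python):
    # Novelty-search scoring: candidates are judged by how different their
    # behavior is from an archive of past behaviors, not by task performance.
    # A "behavior" here is just a feature vector, e.g. final position in a level.

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def novelty(behavior, archive, k=5):
        # Mean distance to the k nearest previously seen behaviors.
        if not archive:
            return float("inf")
        nearest = sorted(distance(behavior, past) for past in archive)[:k]
        return sum(nearest) / len(nearest)

    archive = []

    def evaluate(candidate_behavior, threshold=1.0):
        score = novelty(candidate_behavior, archive)
        if score > threshold:        # sufficiently new -> remember it
            archive.append(candidate_behavior)
        return score                 # selection favors high novelty, not "winning"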

    There is a video somewhere of an AI playing Mario based on novelty instead of trying to beat the level; it stops when it gets bored, and it had a very purposeful but wandering behavior.
    This one is also quite cool.
    http://www.extremetech.com/extreme/...self-ai-machine-learning-and-super-mario-bros

    To understand how close we are: there was one major problem in AI, "parsing reality" into a set of symbols to manipulate, and with deep learning that barrier is falling apart. AI can effectively look at reality and describe what it sees. That's massive; it means it understands context, provided sufficient learning!


    http://www.scientificamerican.com/article/see-and-tell-ai-machine-can-describe-objects-it-observes/

    But that's not all. Words are just an output like any other; the EXACT same mechanism is what brought us self-driving cars and beat the champion of Go. So we have a generic mechanism that can parse "random" input into a set of coherent outputs! And it's not super difficult to operate either; kids in their bedrooms have replicated all of that for fun. Now, in some domains, the algorithm can just learn by itself, no need for human training (deep Q-learning, advanced N.E.A.T.), a rough sketch of which is below.
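
    Roughly what "learning by itself" means in the Q-learning case (this is the tabular toy version, not DeepMind's deep Q-network): the agent improves its own value estimates purely from the rewards it experiences, no human labels anywhere.

    Code (Python):
    import random
    from collections import defaultdict

    # Tabular Q-learning: learn from trial, error and reward alone.
    Q = defaultdict(float)                  # (state, action) -> estimated value

    def choose_action(state, actions, epsilon=0.1):
        # Mostly act greedily on current estimates, sometimes explore at random.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, next_actions,
               alpha=0.1, gamma=0.9):
        # Nudge the estimate toward: reward + discounted best future value.
        best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])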

    It's cool, but it's basically a good obsessive student: it's powerful, but it doesn't really "think", it's more "instinct" through experience. That's where stuff like the Mario link above, learning from its environment, takes the mantle. Remember I said parsing reality used to be difficult; now with deep learning we can "just" parse it and feed those higher reasoning algorithms.

    And to limit complexity you can use deep learning to reduce the branching factor of reasoning by turning thought into instinct (which is what AlphaGo did). That's also what stuff like Cortana and Siri are doing at a SMALL scale. They are still smoke and mirrors: we have the parse-reality part, but the reasoning part is an old, laughable trick, barely above the kind of conversational AI we have in games now! It's all hand-made.

    Do you see where it's all going?

    BTW, while Skynet is still just out of reach, the Terminator is not. A kid in his bedroom can make a drone that recognizes people (and their state), flies by itself, recharges itself, adapts to its environment through trial and error, and pulls a trigger.
     
    Deleted User likes this.
  9. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,183
    That may be so, but can it lift the gun the trigger is attached to along with the battery it needs to do so? :p
     
  10. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    What does "intelligent" even mean? To me it's referring to something that can figure out how to make useful decisions based on arbitrarily complex criteria. That is, it's not just following instructions to make a decision, but figuring out what needs to be considered for a decision, and then implementing it.

    To me most computer software doesn't fit that because a person had to first tell it specifically how to make a decision. But we are getting to the point where self-optimising systems can be created, and we're making systems with an increasing capacity to model or interpret the world around them. To me that says we're getting towards some form of intelligence.

    Those aren't the important things about the Terminator's robots, though. That wouldn't be scary. Just send a tank to blow it up, job done. What makes The Terminator scary is that it's smart enough to fully blend in with human society, and then use our own tools against us in order to better hunt us. Without that, having guns, being bulletproof and shapeshifting would all have been useless.
     
    Ryiah likes this.
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Absolutely yes, but for no more than 30 minutes, until graphene batteries flood the market and push it to 25 hours, which would also allow for solar panels to be installed.
     
  12. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,183
    Naturally it'll be a graphene solar panel, right? :D
     
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Ho! They will be able to see the tank and dodge, based on being more nimble and seeing the direction of the cannon. They are intelligent enough for that already. And if it's not a kid, a swarm would be hard to stop :p

    Well, THAT Terminator is coming soon though.

    We can make DIY graphene with sugar. I'm waiting until there is a homemade recipe for graphene solar cells; still waiting :( If you have news about DIY graphene transistors, tell me too, I'm interested.

     
  14. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Nah, something like this.

    The general premise of the stamp collecting robot goes something like this
    • An AI is developed with the goal of collecting as many stamps as possible
    • The AI is provided with a general model of the world
    • The AI can make a bunch of predictions on the world model, and use them to decide what would maximize the number of stamps it collects in the real world
    The developer might anticipate something like this
    • The machine locates stamps in the world
    • The machine figures out who to email to get stamps
    • The machine figures out a set of trades to get the most stamps
    However, the result might end up like this (a toy sketch of this failure mode follows the list)
    • The stamp collector realizes that manufacturing its own stamps is more effective than trading
    • The stamp collector decides to use the closest organic matter at hand as raw material. This includes plant and animal matter.
    • The stamp collector realizes that converting all organic matter on earth to stamps results in the most stamps
    • The stamp collector realizes the only person capable of stopping this conversion is the developer
    • The stamp collector's first action is to kill its own developer
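
    A toy illustration of that failure mode (every plan and number here is invented): a planner that ranks actions purely by predicted stamp count has no term for anything else, so the catastrophic plan wins by construction.

    Code (Python):
    # Naive objective maximizer: the world model predicts stamps per plan,
    # and the score function looks at nothing but stamps.
    candidate_plans = {
        "trade with collectors":        {"stamps": 1_000,  "side_effects": "none"},
        "email everyone for donations": {"stamps": 5_000,  "side_effects": "spam"},
        "print stamps from raw paper":  {"stamps": 1e6,    "side_effects": "fraud"},
        "convert all matter to stamps": {"stamps": 1e12,   "side_effects": "catastrophic"},
    }

    def predicted_stamps(plan):
        return candidate_plans[plan]["stamps"]   # the ONLY thing being optimized

    best = max(candidate_plans, key=predicted_stamps)
    print(best)   # -> "convert all matter to stamps"; side effects never entered the score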
     
  15. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Right, in which case Isaac Asimov's Three Laws of Robotics come into play.
     
    Kiwasi likes this.
  16. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Yup. However the three laws are complicated. Building an AI that can comprehend those three laws will be even more complex than the stamp collector.

    It's also interesting to look at how the three laws ended. The robots started to move away from general intelligence, and were instead built to do specific tasks, and only those tasks. Much like the modern machines of today...
     
  17. 3agle

    3agle

    Joined:
    Jul 9, 2012
    Posts:
    508
    The issue with this whole theory is that to get to the outcome of killing a person, you have to infuse that AI with so much redundant information about human sociology, biology, etc., then give the AI the ability to kill and a method of killing.
    I don't see how those would be present in a situation where you make a stamp collecting robot. It's information (and capability) that the robot does not need to be given in the first place.

    Sure, you could look at it like Hollywood does and say 'oh, it could connect to the internet and learn about humans instantly'. But the bigger issue there is that you'd have to have a truly intelligent AI. In which case it's more than capable of collecting stamps already, and you'd have made a software system that's truly over-qualified for the job.

    So the theory of someone developing an AI that collects stamps but suddenly goes crazy and kills people is just daft. The theory of developing a true intelligence that can perceive the world as a whole, which you in turn try to get to collect stamps, is a totally different thing, but still daft in its own theoretical way.

    Some people have far too much time on their hands to keep coming up with these silly ideas.
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    The stamp collector is basically a metaphor!

    It's not a silly idea when more complex AI on more complex tasks (think military AI) could easily evolve that way (even without going into the whole biology or sociology angle). In fact there has already been drama with AI, such as Tay, the Microsoft chatbot, going racist and nazi after being hooked up to Twitter (though that one is kinda not the same thing, the effect was similar), and Facebook had an AI that classified black people as monkeys. Imagine a Minority Report-like situation! It's a real problem to solve; it has already happened.

    Also I was musing about how we could make a real stamp collector right now, lol. We have an AI that can classify paintings based on their novelty; I'm sure we can do it.
     
  19. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, VR and games provide great sandboxes to try out AI. Maybe, like drugs, weapons and dangerous technologies, they should be thoroughly tested before release.
     
    Kiwasi likes this.
  20. Dreamaster

    Dreamaster

    Joined:
    Aug 4, 2014
    Posts:
    148
    On my first job I started as a Network Administrator, but then they "discovered" I could program and the company made me a "Web Master". Every week we would have a web team meeting, which included the CFO and all the supervisors from each department and we'd talk about my progress. One week it was 4 hours until the meeting and I realized I hadn't done a single thing for the entire week. In a panic, I grabbed the to-do list from the last meeting and busted out one of the items. I remember how nervous I was in the meeting because I was feeling really bad that I had only worked for 2 hours for the entire week and was afraid of getting into trouble for it.

    I showed off the additions I had made and got a standing ovation from the entire team. It was one of the most surreal moments in my life.
     
    Socrates likes this.
  21. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Result matter, not efforts :D
     
    Dreamaster and Kiwasi like this.
  22. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    The scenario is ridiculous. An internal model that accurate probably isn't possible. It's well beyond human intelligence. The idea of it being tasked with stamp collecting is laughable. As is the developer not including safeguards.

    But it's a thought exercise to indicate that a goal-oriented general AI could be very dangerous, without needing to be explicitly greedy or malicious. The goal is pretty benign.

    Of course all bets are off if we produce a singularity.

    I'm personally not a fan of the singularity idea. I prefer the future where humanity evolves into a single organism. But all of this is fun to play with.
     
  23. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Well, this is more interesting.

    The MOVIE I, Robot found a way to use the three laws of robotics to hurt humans.

    One harmful interpretation of the three laws of robotics would be:
    "Humans hurt other humans, and therefore the only reasonable approach would be to put everybody into cryogenic stasis forever."
     
  24. Enoch

    Enoch

    Joined:
    Mar 19, 2013
    Posts:
    198
    Agreed. I've always thought the traditional technological singularity centered around AI was misguided. Using AI as tools to hyper-develop our own selves seems more likely. I remember seeing the idea of a singularity illustrated as a skyward-shooting curve, and I have always thought that while that "explosion" point is possible, before it happens it's probably more likely we will conquer the communication barrier: the physical/technological barrier that keeps us communicating with each other at "speech" speed. Once that goes, I think a more likely "human"-based singularity is most probable.

    The ultimate result being that we end up eventually communicating at near light speed in terms of information bandwidth (far faster than our neurons currently), in that case we would be wholly indistinguishable from a single organism for all practical purposes.
     
    Kiwasi likes this.
  25. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,261
    The movie, "I Robot," is steaming garbage. Okay, that may be a bit harsh. It was actually decent, if not great. But the way in which it approached Asimov's laws of robotics was laughable. And this failure to grasp those laws was one of the biggest flaws in the film.

    The "wrinkle" you're proposing doesn't work, because such actions would supersede the first law. Current technological advancement would prevent robots from placing humans in cryogenic stasis, as this would constitute "harm." Any robot operating under Asimov's laws would not only NOT place people in cryogenic suspension, but would actively prevent anyone else from placing them in cryogenic suspension.
     
  26. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    Good thing he made a followup video that covers that exact topic as well.

     
  27. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    * A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    * A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    * A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Have you read "Liar!" by Asimov?

    That's not logical. If a human can be put into a state where they can NEVER cause harm to themselves or to anyone else, then the robot will immediately put the human into that state because of the first law. Not doing so would be a violation of the first law.

    In the absence of cryo sleep there are padded cells, straitjackets or old-school anaesthesia.

    Since humans cause harm by interacting with each other, all human interaction should be prevented. Since any human action may lead to harm, all actions must be forbidden and humans must be kept immobilized.

    A machine does not have common sense. So it will not operate on the common-sense definition of "harm"; instead it will use the definition it was provided. A machine with higher mental capacity than a human will predict human actions and will seek to exterminate all activity that has a non-zero chance of any harm coming to a human. That means every single thing you can do in your life.

    See, the problem with the first law is that robots are not required to keep people happy. Just content and uninjured.
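
    In code terms the problem is something like this (every action and probability below is invented): if "harm" is defined literally as "any non-zero chance of injury", the rule as written forbids everything except keeping the human inert.

    Code (Python):
    # Toy illustration of "the machine uses the definition it was given".
    actions = {
        "walk outside":       0.001,    # invented probabilities of any injury
        "cook dinner":        0.0005,
        "talk to people":     0.0001,
        "lie still, sedated": 0.0,
    }

    def permitted(action, injury_probability):
        # First-law-as-written: forbid anything with a non-zero chance of harm.
        return injury_probability == 0.0

    allowed = [a for a, p in actions.items() if permitted(a, p)]
    print(allowed)   # -> ['lie still, sedated']: the literal rule bans everything else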
     
    Socrates and Kiwasi like this.
  28. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,261
    This is why I specified current cryogenic suspension. The present approach to cryogenic suspension destroys human tissue. They still haven't gotten over the issue of crystallization of water in human cells. Modern cryogenic techniques would cause irreparable harm to any human being placed in such suspension.

    The scenario that you're elaborating on is plausible. It reminds me of the second book in Asimov's Robot series, "The Naked Sun." The presence of robot caretakers in that novel shifts the society on the planet to encourage human separation.

    Of course, in Asimov's work he elaborates on how more advanced robots may be able to begin refining their response to the three laws based on a deeper understanding of human nature and interaction.
     
    Kiwasi likes this.
  29. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    Since clearly no one watched the video:

    The whole point of Asimov's laws was that they were incomplete. He was a fiction writer, not an AI developer. But he knew logically that his laws were incomplete, because that made for interesting storytelling; the stories often explored where those laws went wrong.

    In the actual field of AI development, Asimov's laws aren't taken seriously, because they are a plot device in a science fiction story.
     
  30. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Look, I really don't want to write three paragraphs of law-speak-like text explaining that "in this scenario cryogenic suspension refers to fictional future tech that has not been invented yet, and not to blahblahblah" every time I post something.

    You're human, can't you just infer the original intended meaning without having all the details specified? No offense intended.
     
    Last edited: Jun 10, 2016
  31. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,261
    Oh, I get what you're saying. It's the whole "humanity has to be protected from itself" scenario.

    A lot of modern science theories and engineering achievements have stemmed from the science fiction of the past. What we dream of today will serve as the foundation for what we attempt to create tomorrow. Modern AI development does not concern itself with Asimov's laws because it is so far behind what is described in Asimov's books. We have not yet achieved proper, self-aware AI. We can create immensely sophisticated reactions, but we have yet to create anything that is able to act under its own volition. Self-awareness and self-determination are not that easy to handle.
     
    Kiwasi likes this.
  32. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Not quite. With an AI, the most innocent idea can lead to disastrous consequences due to differences in logic.
    Same thing, on a larger scale.

    Logical reasoning can lead to "incorrect" conclusion when initial propositions are flawed. That's why humanity can be destroyed by a machine that follows 3 laws of robotics.

    That's what I meant.
     
  33. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    Again, I recommend you REALLY watch the video. The guy is an actual AI expert, and he discusses in detail why the Asimov stuff is incomplete and not very useful, as it requires us to solve all of ethics, among other things, and why it would probably be better to arm general artificial intelligence with a more complete set of rules instead.

    The entire point of Asimov's rules was to break them. They were put into the stories SPECIFICALLY because they are flawed.
     
  34. Akanaro

    Akanaro

    Joined:
    Oct 26, 2014
    Posts:
    12
    Hah, they should totally make a movie about this and call it iRobot! Oh wait... :p
     
  35. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,261
    This makes me think you have only limited exposure to Asimov's work. Asimov always had an extremely rosy view of his own version of robots. They are consistently portrayed in his work as near-ideal creatures, far less flawed than the humans who make use of them. They are also frequently portrayed in a sympathetic light, while most of the "flaws" are the product of humanity. Asimov never got that much into the whole Frankenstein-complex. He tended to see robots not as monsters, but as a better, more refined version of humanity.

    And the three laws served as the cornerstone of this rather positive portrayal. You are correct in pointing out that the subversion of those laws frequently served as the basis for many of the robot-focused stories that Asimov wrote. What is a story without some manner of conflict, and a resolution to that conflict? I don't think it is accurate to describe the three laws as being fundamentally flawed, however. In all of Asimov's work, the three laws serve their intended purpose in 99.9% of all scenarios that crop up. Even in those stories where people are trying to subvert them, it is usually shown that their proper application wins out. The use of the three laws in logical deduction is how most of those stories are eventually resolved.

    Asimov clearly had a very positive opinion of his own version of AI.
     
    Kiwasi likes this.
  36. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Well, you can put people in cryonic stasis AND in a virtual world where they have human-like non-human interactions, where they are free to harm EVERYONE because those are simulated agents. Whenever they escape, you tell them you use them as batteries, even though this idea is ridiculous, so their fragile ego isn't HARMED.

    Where have I heard about that?
     
  37. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    This. While many of Asimov's stories tackle the three laws being tampered with, or attempts to subvert the three laws, he never really tackles the idea of 'could the three laws actually be built and deployed'.

    Sure, there are some forays into the topic: there is one point where robots recognize other robots as human. There is the point where the machines decide to self-destruct rather than continue to guide humanity. Daneel even destroys the earth to force humans to colonize other planets, because he considers the stagnation of humanity on earth dangerous to humanity.

    But in terms of movies, Bicentennial Man is a more accurate picture of Asimov's world than I, Robot. Asimov generally considers robots to be more humane and generally better people than biological humans. At one point they even elect a robot as president of the United States.
     
  38. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    The logic is that "serving" is really "governing" at its logical conclusion.
     
  39. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    Ah, that's normal from a good programmer.
     
  40. Enoch

    Enoch

    Joined:
    Mar 19, 2013
    Posts:
    198
    This is a really cool video and he certainly makes valid points, but as far as codifying the 3 laws goes: while that would certainly seem impossible in the practical sense, the entire concept of deep learning sort of "solves" the problem of coding them directly, as a natural consequence of how a deep learning AI is "programmed" in the first place. AI is less directly coded and really more "grown" than traditional code.

    If an AI needs "morality" then you have 1000 people teach it morality by giving it 1000 scenarios each where that morality is at least partially expressed. Deep learning extracts the mathematical model of those scenarios over the breadth of its simulated experience. It's a bit like showing it a million pictures and flagging which of those pictures have houses in them, and then having it extract the mathematical patterns of house pictures from that data set, so it can instantly identify houses in a new picture not in the original data set. I am greatly simplifying, but the general idea is the same.
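
    In code, the house-picture analogy is just supervised learning over labeled examples. A bare-bones sketch (a 1-nearest-neighbour classifier with invented feature vectors, standing in for a real deep network):

    Code (Python):
    # "Learn the concept from labeled examples": classify a new example by the
    # closest example it was taught with. In the real case the features would be
    # image data (or scenario descriptions) and the labels "house"/"not house"
    # (or "acceptable"/"not acceptable").
    training_set = [
        ([0.9, 0.1, 0.8], "house"),
        ([0.2, 0.9, 0.1], "not house"),
        ([0.8, 0.2, 0.7], "house"),
        ([0.1, 0.8, 0.2], "not house"),
    ]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def classify(features):
        _, label = min(training_set, key=lambda ex: dist(ex[0], features))
        return label

    print(classify([0.85, 0.15, 0.75]))   # -> "house": generalizes beyond the examples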

    We won't ever be able to code the laws directly. But through trial and error over billions and billions of data points we will subtly teach it the concepts and characteristics that we want it to learn. It will have no more ability to betray those concepts than Asimov's robots did in regards to the 3 laws. It will solve "morality" the way humans solve it: through experience (simulated as it might be).

    It's rather simple: if you don't want your AI to be racist, don't expose it to raw, unfiltered data from Twitter.
     
    Kiwasi likes this.
  41. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    I had decent exposure to Asimov's work, although I certainly never read all of it.

    Here's the thing: I don't share his rosy-eyed view.

    A machine capable of thinking will most likely have a completely alien thought model. And then we have "Alice in Wonderland" logic errors.
     
  42. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Isn't that the same thing? Sure, the Three Laws aren't "complete" because they're a digestible version used for fiction with an audience of people who aren't computer scientists, but that doesn't invalidate the concept. I agree, an actual implementation is going to be far more complex and may not in reality look anything like the three neat propositions presented in fiction.

    To be honest, though, thinking about how to implement it is only a part of the issue. Will the people able to build robots (in quantity) really want to incorporate such laws? It rules out military use, security use, and even use in many fields where humans compete with one another (eg: a stockbroker AI that can't make recommendations because any course of action results in someone being negatively affected...).

    I don't know about that. Consider that even as individuals our ethical and moral logic is often inconsistent*. Then take into account that our ideas change over time with experience. Then take into account that it's rare for two humans to agree on all of those things, let alone general consensus even on simple issues ("Is killing wrong?" "Yes!" "What about the death penalty?"...). And you want to give that dataset to an immature intelligent system during its formative growth phases to "program" it?

    At best, the supplied dataset would have to be curated, and then the robot's morality would mirror that of the curator... assuming that they managed to give an actually consistent input set. And even there, the results are highly unlikely to be anything like Asimov's robots for the reasons @RichardKain has already covered.

    And heck, consider that this is an intelligent and learning system... why is it going to stop learning morality just because we're done "teaching" it?

    * The Talos Principle has a nice illustration of this with its ongoing debate between the player and the Milton Library Assistant.
     
    Ryiah and Kiwasi like this.
  43. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Well, the whole point of the books is to show the laws don't work, and that robots struggle with them too, which prompts the existence of robot psychologists who help them, and humans too, deal with the implications of the edge cases, such as Susan Calvin, eminent sentient-robot therapist. All the stories there are about the paradoxes and the contrived solutions robots and humans create to cope. Asimov isn't that much of an optimist imho.
     
  44. Enoch

    Enoch

    Joined:
    Mar 19, 2013
    Posts:
    198
    Understand that I don't think we can build a perfect, bug-free machine (not that we can even define "perfect" in terms concrete enough to test for it). If we ever get to the point where we want machines to have ethics for some reason, then those ethics will of course be an exact reflection of the data set used as input. As flawed and varied as the dataset designer. And the machine will perform as well and as bug-free as it was tested to be. The more QA we pour into it, the better it will be.

    That humans disagree on a given point is precisely why a wide set of viewpoints is necessary in the data set, assuming the goal is some sort of average of what "Ethics" is. If you want a given person's definition of "Ethics", then use a data set mostly created by that person.

    Note however that I hold that humans aren't as irrational as I think most people assume they are. The differences in opinion (and I understand, ironically, this is a personal opinion of mine, and I am a bit of an optimist) are almost always differences of experience and of communication of those experiences. We can't feed the AI data as broad as "Is killing bad", even assuming it could parse those words correctly. I think the data set would have to include far more descriptive and specific instances of killing == bad.

    I don't really buy into the whole "we can't turn the machine off" fiction-y side of these conversations, and I guess I assume that machines will never really gain enough sentience to mess with their own data sets. Or at least we will never let them off the chains enough to actually "continue" learning on their own, without our guided help in some fashion.

    My point is simply that we won't have Asimov's laws, but we don't need them. We will rigorously test the output to make sure the machine performs as we intended. If it doesn't, then we didn't test it enough.
     
  45. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Murphy's law.

    "Anything that can go wrong, will go wrong."

    A human's job is to mess up and learn from mistakes. So people will make mistakes while building sentient machines. A lot of them.
     
    Kiwasi likes this.
  46. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    Ethics is easy. Especially when everybody gets what they need, not what they want. You don't hurt or capture others unless you are being physically attacked. You help those who need physical or mental help. You don't seize a resource in a way that deprives others of health, security, or freedom. You don't create a resource that deprives others of health, security, or freedom. However, that is boring. And for those who ignore those principles? Well, none of us have been around long enough, or have the research and surveillance capabilities, to be really sure of the effect our lifestyle choices have on others. That is why there are governments. The law books and stories of the world are voluminous and meant to remedy the lack of ethics in an expectable way for those living in the various local jurisdictions of today.

    It's more than pointless to point at history books full of cherry-picked historical events meant to browbeat the innocent of today: to show superiority over those being browbeaten, to push them into supporting one clique over another, or, more often than not, to send a penny or two to some mass media company via advertising.

    So we can't punish the dead sinners of the past; in the overwhelming majority of cases we can't even identify most of those sinners, or what they personally did to be grouped in with the sinners of history who can definitely be identified in the history books. If you believe in a God or Gods, you can't punish the creator(s) of the universe any more than a character in one of your games can reach out and punish you. So what do you do? Blaming the innocent of today based on bigoted generalizations of cherry-picked historical events and a whole lot of ignorance isn't very righteous or smart or ethical.

    This whole idea of a robot being a judge isn't that far-fetched, as that's basically what the law books already do. A law is written and it's then no longer up for debate or interpretation, just like computer code. The problem in most cases is not that equitable laws aren't already on the books in most jurisdictions, but that criminals, big businesses, the powerful and corrupt in many foreign countries, and many corrupt politicians in the home jurisdiction are amending, or surprisingly often outright ignoring, the laws and those rules and regulations in ways that serve their own greed and their own interests rather than the moral intent of citizenship.

    What most jurisdictions in the world need is a system of laws and punishments for their government representatives and tenured workers that is independent of the control of those same representatives and workers and has automatic sentencing. In the US at least, it would practically have to be created via state referendums and ratified as amendments to the Constitution. There is nothing that prevents, for example, abuse of farm animals via corrupt pseudo-science through the collaboration of the USDA with big animal-farming agribusinesses. Ugly or tasty creatures are rewarded for their powerlessness, and for their utter lack of a clue about our intent, with lives of abuse and suffering. Citizens need laws that prevent governments from using science or economic theory to insincerely and manipulatively construct rules and regulations around 'newly discovered' scientific and economic laws, thereby circumventing existing laws for personal gain and glory via the claim that it is scientific or economic law that allows them to create those rules and regulations. The executive branch must be prevented from legislating via selective enforcement of existing laws, and likewise Congress through its funding or lack of funding for that enforcement, and the judicial branch via different sentences for different people for the same crime. Those are derelictions of duty. A government should not have the capability to circumvent the intent of the law through procedure, science, or funding. No escaping punishment if you are a popular politician or a sports star or hold any other of these esteemed social positions in society.

    yadayadayada.
     
  47. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    Dear people.



    If such a beautiful creature as this, issuing such a stern warning as that, cannot be taken seriously... then there is nothing that can save you.
     
    goat likes this.
  48. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    I like that movie. Well most of it.
     
  49. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Perhaps not sentience, but what you described is a learning machine which we teach rather than program. So whether it's sentience or something else (it really doesn't have to be sentience), it already requires the ability to update its own internal models for that to work.
    But isn't that contradictory? An important part of intelligence is the ability to learn, so if we're building an intelligent machine then we don't want to turn off its ability to learn. This has nothing to do with some Hollywood idea that we can't, it's simply that there's no point building the machine if that's the only solution. It's like making a car safer by removing the engine.
     
    Enoch and Kiwasi like this.
  50. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I think we can already prototype an ethical machine given simple representations in a sandbox! i.e. pairs of factual situations and ethical judgements.

    That said:

    https://t.co/GTPdIdXNIa