
Capture the Flag AI Challenge Post Mortem

Discussion in 'General Discussion' started by Samuel411, May 7, 2017.

  1. Samuel411

    Samuel411

    Joined:
    Dec 20, 2012
    Posts:
    646
    I hosted the Capture the Flag AI Challenge, which ran from March 20 to April 24, 2017. It was super fun to host and I learned a lot in terms of communication, logic, and practicality. This was one of the few times I was able to hold myself back from making changes and additions to something I've created. There were, however, a few changes at the beginning, while I was getting the project ready for everyone to use, that I believe discouraged some people from joining for fear that a change might force them to redo their entire project.

    Here's a list of things I wrote down as the competition went on.

    Listen to people, but also listen to yourself; figure out a good end date and stick to it
    This is something that I forgot at times during the competition, first with the suggestion to move the end date further out. I should not have extended the deadline by over a month; however, two weeks would have been unreasonable as well. I think three weeks would have been enough to keep people's attention and give them time to create their entries. Another example is when some members suggested extending the abilities of soldiers, like adding snipers. Another suggestion was to change the map and implement different elevations and barriers. I didn't think these suggestions would add that much to the challenge and might just complicate it for no reason, so I decided to listen to myself and not add them. This brings me to my next point:

    Don’t add features once you've released the competition
    This is crucial, because making changes is a huge red flag for experienced developers and anyone in their right mind. "Why would I join a competition that makes a bunch of changes? I may end up having to start from scratch on the last day because someone decided they want another feature." You want to avoid contestants having to ponder that question; this is step one of a successful competition.

    Prepare and plan for things such as prizes, criteria, and rules beforehand
    Midway through my competition I was still emailing asset developers asking for vouchers to add as prizes. I was able to convince Apex to donate a voucher code for their Apex Pathfinding asset by leveraging the fact that I had acquired over 800 views on my thread in the first few weeks and had a fair amount of comments. You cannot reasonably expect a developer to hand over a key just because you'll have a competition; there has to be something in it for them, whether it be a purchase, a good reputation, or the promise of advertising and marketing. @LaneFox was kind enough to throw in three copies of his awesome asset, Cleverous's Deftly: Top Down Shooter Framework. I think this helped draw attention to the competition and gain followers and potential competitors. So to recap: if you decide to have rewards as part of the competition, make sure you have them lined up for people beforehand.

    Criteria and rules should also be planned before you even announce the competition. This ensures that people have a clear idea of what they can and cannot do, as well as how the winner will be decided, so they can develop around that.

    Make things simple, seamless, and easy to understand.

    Changes are bound to happen; it's nearly impossible to release something without any bugs. Something I stressed during this competition was that my changes be as seamless as possible and never force anyone to refactor code. Things should just work. An example of this is when I switched from the A* algorithm I had developed to the Unity navmesh. Luckily, I had separated pathfinding into its own class from the start, and movement of the entity itself was handled with input from the soldier class. So the only thing I had to do to upgrade was modify the pathfinding class to utilize the soldier class's input, and I had navmesh AI set up; anyone could just copy and paste their script into the new version, or pull from the Git branch.
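    The separation described above can be sketched roughly like this (a minimal Python illustration of the pattern, not code from the challenge; the actual project is C#/Unity, and every class and method name here is hypothetical):

```python
# Illustrative sketch: keep pathfinding behind an interface so the
# backend (a custom A* vs. an engine navmesh) can be swapped without
# touching the soldier code. All names are made up for illustration.

class Pathfinder:
    """Abstract pathfinding interface the soldier code depends on."""
    def find_path(self, start, goal):
        raise NotImplementedError

class AStarPathfinder(Pathfinder):
    def find_path(self, start, goal):
        # ... a custom A* search over a grid would go here ...
        return [start, goal]  # placeholder straight-line path

class NavMeshPathfinder(Pathfinder):
    def find_path(self, start, goal):
        # ... delegate to the engine's navmesh query here ...
        return [start, goal]  # placeholder straight-line path

class Soldier:
    """Only talks to the Pathfinder interface, never a concrete backend."""
    def __init__(self, pathfinder, position):
        self.pathfinder = pathfinder
        self.position = position

    def move_to(self, goal):
        path = self.pathfinder.find_path(self.position, goal)
        self.position = path[-1]  # follow the path (simplified)

# Swapping backends is a one-line change; soldier code is untouched.
soldier = Soldier(NavMeshPathfinder(), position=(0, 0))
soldier.move_to((10, 5))
```

    Because the soldier only depends on the interface, contestants' scripts survive the backend swap unchanged, which is exactly what made the upgrade painless.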

    Document things
    Make sure to document things in an organized and clean fashion, especially for larger, more complex challenges. This allows people to see how things work in the "backend" and use them to their advantage. I kept everything documented in GitHub's wiki system, which can be seen here: https://github.com/1samuel411/CaptureFlagAIChallenge. I haven't really received much negative criticism of the documentation. Only include things that are relevant to the competition and allowed by the rules; for anything not allowed, call it out explicitly so no one is confused.

    Survey
    One thing I did not do was survey the community to see if this was actually something people wanted. Had I done so, I may have seen that not many people were keen on the idea, or that they were simply scared off by the complexity or the changes. I'm not sure, but I ended up with only 3 entries. :x So figure out beforehand whether the community will actually engage with what you're running.

    Thank you for reading. I hope to see more people create these types of things, and I would definitely be open to participating. Feel free to contact me privately via my website, http://samuelarminana.com/. I'll be happy to answer any questions or talk in depth about anything I may have missed.
     
    Last edited: May 7, 2017
  2. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,027
    Thanks for the post mortem! I think you did a pretty great job. IMO, the main issue was simply that the premise of the challenge was too complex to interest a lot of people. AI is a pretty difficult topic to tackle, and I think the biggest obstacle was that in order to begin dealing practically with high-level AI attributes, it was necessary to spend quite a lot of time implementing a low-level foundation (e.g. grid-based LOS and the like), which made it difficult to just get in and start trying out intuitive stuff.

    Anyway, that's just a problem IMO with the initial scope - I think the framework, documentation and your general organisation of the competition was fantastic and there's a lot to learn from that. And I wish I hadn't been so lazy as to miss the opportunity.

    I'd be up to take part in another competition or host one, and I hope we do this sort of thing regularly.
     
  3. Samuel411

    Samuel411

    Joined:
    Dec 20, 2012
    Posts:
    646
    I agree, the concept may have been too complex and may have scared people off. Perhaps a 1v1 deathmatch would have been a better idea and garnered more support.
     
  4. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I think you could get around the changes issue by having a test-and-adjust phase in the challenge, the aim being to test and improve all aspects of it, from the arena/level and rules to the API and features. You could also test the waters and poll for the changes people want.

    And once the test and adjust phase is complete you can lock down the challenge.

    As far as the challenge went, a single-height arena with slow turning rates for soldiers limited the best play to ambush tactics.

    Allowing for terrain height variation, short walls, weapon types or fire modes (e.g. accurate sniper, single-shot rifle, or burst automatic), or even grenades would allow for much more varied tactics, although a much harder challenge.

    Also, features like armour level linked to movement speed (heavy armour slows down troops) could lead to people trying different play strategies.

    The challenge API also lacked mapping/sensor features, either built in or the ability for people to add them, e.g. a team manager/planner.
     
  5. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,027
    I don't think it's so much a question of the number of soldiers as of how easy it was to get them to do what you'd intuitively want them to do. With the current framework, it's difficult to do anything but reactionary AI, because giving the AI a basic usable knowledge of the environment it's in takes at least some kind of grid-based LOS and/or mapping system.
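    A grid-based LOS check of the kind mentioned here can be sketched as follows (a minimal Python illustration, not code from the challenge framework; a real Unity entry would likely use raycasts or a precomputed grid in C#):

```python
def line_cells(x0, y0, x1, y1):
    """Grid cells stepped through between two cells (Bresenham's line)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def has_line_of_sight(blocked, a, b):
    """True if no blocked cell lies strictly between a and b.

    `blocked` maps (x, y) -> True for cells that block sight.
    """
    for cell in line_cells(*a, *b)[1:-1]:
        if blocked.get(cell):
            return False
    return True

# A wall at (2, 2) blocks the diagonal but not a clear row.
walls = {(2, 2): True}
diag = has_line_of_sight(walls, (0, 0), (4, 4))   # wall on the diagonal
row = has_line_of_sight(walls, (0, 0), (4, 0))    # unobstructed row
```

    Running this over every soldier/enemy pair each tick is cheap at this map size, and the same traversal doubles as the basis for a mapping system.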

    It would have been a lot of work for you! ... but unfortunately I think that's the nature of AI in a semi-realistic setting - it's a lot of groundwork to give AI even the most basic common sense that you'd need for tactical situations.

    So in that sense, reactionary 1 vs 1 may have been better suited, but overall might not have fit the context of the competition very well.
     
  6. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    I was looking forward to participating but sadly had to spend my time on more important things. I dabbled a bit in the code (hence my bug report about dead soldiers still seeing things), but pretty much as @Billy4184 said, I'd have to build various low-level systems before I could build a more complex AI system on top of them. Plus, (dynamic) squad AI is pretty hard to make.

    I would suggest some form of multi-round or recurring contest. So instead of hosting one round at the end, host something like four rounds, one every one, two, or three weeks. Then you can actually see your AI in action and develop counters to the things it lost to. Possibly make the submitted code public after every round (though having videos of the round goes a long way, I guess). There'd have to be some way to prevent people from just lurking and only joining the last round, though.

    I'm not sure how well 1v1 would've worked for a contest like this; the squad part is probably the most interesting part. The hardest part was probably getting the low-level things to work (finding the most tactical spot to sit at).

    The assets to win were not important to me, as most of the time they're things you either don't need (yet) or already have. I'm not sure how to improve that, though, as Unity assets are a fitting reward.

    I fully agree with both of @Billy4184 's posts above here.

    As @Arowx mentioned above, a one-week test period may be something to try out, to allow for obvious breaking changes that should be made (maybe with a one-week pause to implement those changes, but that's not that relevant).

    A possibility would also be to use a slightly different game mode that gives less of an advantage to the defender. I think the game mode "Search and Destroy" from various games would be a better fit: destroying one of two objectives. The defender then has to split forces and react dynamically instead of sitting at optimal spots and winning, while the attacker will generally go for one objective with most of its force, with a 30-second or so timer between reaching an objective and winning. Grabbing the flag and walking back felt awkward in my test AI.

    </rambling>
     
    Ryiah and Billy4184 like this.
  7. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,027
    One possibility is to build up a better framework and then run the contest again. I made a decent dynamic mapping system and grid-based LOS which I could fix up and maybe integrate into the project. Once there's a more powerful high-level framework maybe we could attract more people.
     
  8. Martin_H

    Martin_H

    Joined:
    Jul 11, 2015
    Posts:
    4,436
    I haven't had time to follow this challenge and likely won't participate in future challenges either, so feel free to ignore my suggestions. I think more people would participate if the challenge to overcome with AI was fixed and predetermined, and gradual differences in performance could be easily compared during the ongoing competition.

    A classical example would be something like a 4-player wave-based endless survival arena mode where swarms of enemies attack, the abilities and weapons of the 4 AI agents are predetermined, and people just write the "player input" and see whose squad AI can defeat the most waves of attackers. As soon as two people have entered, there is a measurable number for who is ahead, and everyone can keep tweaking their solution and catching up to others until the deadline. There is an incentive to enter early, for more time to iterate before the deadline, and there is interesting competition as soon as two people have entered.
    Making the game to beat, and its low-level systems, would of course be a lot more work than more straightforward red-vs-blue fights. If there already is a game that allows AIs to be tested this way (or could be adapted easily enough, and preferably is made in Unity), that would probably be the most sensible choice, since you'd save a lot of time and have something finished that can't be feature-creeped any further.
    Just my 2 cents. Whatever you go with, I hope you'll have fun and learn something. I think the AI challenges in general are a really cool thing; I just don't have the time to participate myself.
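    The "fixed challenge, directly comparable score" idea could look something like this (a hypothetical Python sketch; the scenario, wave sizes, and all names are invented for illustration):

```python
import random

def run_survival(ai_decide, seed, max_waves=100):
    """Run a fixed, seeded wave-survival scenario; return waves cleared.

    ai_decide(wave, enemies, rng) is the only thing an entrant writes;
    in this toy version it just returns how many enemies the squad kills.
    Everything else is predetermined, so scores are directly comparable.
    """
    rng = random.Random(seed)          # same seed -> same scenario
    for wave in range(1, max_waves + 1):
        enemies = 3 + wave             # deterministic wave size
        kills = ai_decide(wave, enemies, rng)
        if kills < enemies:            # squad overwhelmed this wave
            return wave - 1            # number of waves fully cleared
    return max_waves

# Two entries, scored against the identical seeded scenario.
def timid_ai(wave, enemies, rng):
    return min(enemies, 8)             # handles at most 8 enemies per wave

def bold_ai(wave, enemies, rng):
    return min(enemies, 12)            # handles at most 12 per wave

timid_score = run_survival(timid_ai, seed=42)
bold_score = run_survival(bold_ai, seed=42)
```

    Because the scenario is fixed, any two entries give a single comparable number the moment they exist, which is the property Martin_H is after.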

    P.s.: What about this game?
    http://store.steampowered.com/app/555010/MERC/
    Since it has coop already that looks like it might be reasonably easy to adapt for something like this, and it's made in Unity.
     
  9. frosted

    frosted

    Joined:
    Jan 17, 2014
    Posts:
    4,044
    I think some of my comments in that thread may have discouraged entrants. Sorry if I did that, it really wasn't my intent.

    One thing I would suggest is perhaps less of a competition aspect, or at least a more transparent one.

    If people were willing to provide different approaches and others were allowed to 'mod' those approaches, then a few things happen:
    - it naturally becomes more collaborative and fun
    - people with more limited time can take something and just put their own spin on it
    - entries can evolve to be better and better

    Competition is fun, but I think the time investment was just too high for most people, and the bar to beat cheese strategies with non-cheese was super high. You could theoretically beat camper/ambush strats, but it would take craploads of work and serious attention to detail.

    A more collaborative, open approach may have better results.

    Honestly, I think you could re-run the exact same competition and just have the entries be 'open source' and it'd be fine. You could keep running the same thing over and over, and the entries could keep getting better and better. Although I still think it might be too hard to beat camping.
     
  10. Samuel411

    Samuel411

    Joined:
    Dec 20, 2012
    Posts:
    646
    Perhaps an airstrike that gets launched if an entity stays around the same point for a long period of time?

    Those features you list just complicate the challenge even more, for less experienced developers but also for experienced developers who know that a little change in armor could destroy their entire tactic. I do agree with you about having a test phase, or rather just opening a thread before starting the challenge to take in input from the community beforehand.
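    The airstrike idea mostly comes down to detecting camping, which is a small amount of bookkeeping per entity (a hypothetical Python sketch; the thresholds and names are invented for illustration):

```python
CAMP_RADIUS = 3.0   # how far an entity may wander and still count as camping
CAMP_TICKS = 50     # how long it may linger before triggering the airstrike

class CampTracker:
    """Flags an entity that stays near the same point for too long."""

    def __init__(self):
        self.anchor = None  # where the current loitering period started
        self.ticks = 0

    def update(self, pos):
        """Feed in the entity's position each tick; True means 'strike'."""
        if self.anchor is None or self._dist(pos, self.anchor) > CAMP_RADIUS:
            self.anchor = pos   # entity moved away: restart the timer here
            self.ticks = 0
        self.ticks += 1
        return self.ticks >= CAMP_TICKS

    @staticmethod
    def _dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

tracker = CampTracker()
for _ in range(49):
    tracker.update((10.0, 10.0))          # lurking in one spot
strike = tracker.update((10.5, 10.2))     # 50th tick near the anchor
```

    One tracker per soldier is enough; the game could then drop the airstrike at the anchor point, which punishes camping without changing any of the soldiers' abilities.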
     
    angrypenguin and Martin_H like this.
  11. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,027
    I don't think it's as easy as it seems. For one thing, you have to be looking in the right direction. Also, the mapping system I made collected all of the perimeter squares around an obstacle, and it would have been easy, if I'd had time, to pull back after an encounter and send some people round the other side.

    Also, as soon as one member of the team is hit, the coordinates of the camper could be passed around and from then on it would be quite possible to stack up soldiers in an attack, or avoid that point altogether.

    Camping would only really be a problem if the soldiers were purely reactive and independent.
     
  12. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,027
    Completely agree with you there. I think the framework was not easy enough to approach for people to even take advantage of the current implementation, let alone a more complicated one.
     
  13. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    And that complexity works both ways: a unique strategy with novice tactics could beat a simple strategy with really good tactics.

    Say you set up a couple of Tanks* to push to contact with the enemy; you expect ambushes, and these troops trigger them. You back up the Tanks with Grenadiers to blow away the ambushers with indirect fire. Then Runners go for the flag.

    Tank - Heavily armoured, slow troops.
    Grenadiers - Use grenades or an indirect-fire weapon to hit targets they cannot see.
    Runner - Faster-moving trooper config.

    With more flexible troops and a more open challenge, people could pimp the best AI with their own unique strategy.

    The trick would be how to make it simple to do.

    Also, a player controller, which was available in the Tanks AI challenge, would have been a great addition to this challenge, allowing people to play the game and test tactics and strategies. You know your AI is good when you can't beat it by hand.
     
  14. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,027
    This sounds all very fun and intuitive, but the problem is how to create a system that allows people to implement intuitive stuff without having to build really complex infrastructure. Just because something seems obvious doesn't mean it's obvious to the computer, or easy to get it to respond accordingly.
     
    angrypenguin likes this.
  15. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    It's mostly just configuration data:

    An armour property that reduces damage and movement speed.
    Weapon profiles for:
    - Ranges
    - Fire rates
    - Damage
    - Magazine sizes
    - Reload times

    Grenades or indirect fire would take a tiny bit more work but would massively impact camping and static ambush as a strategy.
    The idea is that you can set up different game strategies with minimal tactics or AI changes.
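    The configuration-data approach above could be as simple as this (a hypothetical Python sketch; all field names and numbers are invented for illustration, and in the actual challenge this would be C# data or ScriptableObject-style assets):

```python
from dataclasses import dataclass

@dataclass
class WeaponProfile:
    name: str
    range_m: float
    fire_rate: float       # rounds per second
    damage: float
    magazine_size: int
    reload_time_s: float

@dataclass
class TrooperConfig:
    name: str
    armour: float          # 0..1 fraction of damage absorbed
    weapon: WeaponProfile

    @property
    def move_speed(self):
        # Link speed to armour: heavier armour slows troops down.
        return 5.0 * (1.0 - 0.6 * self.armour)

sniper = WeaponProfile("Sniper", 120.0, 0.5, 90.0, 5, 3.0)
rifle = WeaponProfile("Rifle", 40.0, 3.0, 20.0, 30, 2.0)

tank = TrooperConfig("Tank", armour=0.8, weapon=rifle)
runner = TrooperConfig("Runner", armour=0.0, weapon=rifle)
```

    Strategies like Arowx's Tank/Grenadier/Runner mix then become different sets of config values rather than different AI code, which is the whole point.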
     
    Last edited: May 7, 2017
  16. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    A singleplayer-style competition, as @Martin_H suggests, would be good as well. It makes testing easier, though it'd be harder to prevent hardcoding.
     
  17. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    It would be an interesting AI challenge and different from the previous AI vs AI competitions.

    What about a different mission/metagame? This challenge used a variation of capture the flag.

    The top competitive team-based FPS, Counter-Strike, uses a terrorist/counter-terrorist mission plan where one side defends and the other attacks.

    Or there is the next genre up, RTS, where the game is to annihilate the enemy. An RTS-style game also allows for resource extraction and usage, and unit-construction strategies.

    What other types of game AI challenges could we do? Driving, flying, board games, platformers?
     
  18. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,027
    What I was referring to was the groundwork required to even set up an ambush, trigger one, or attack cover in a basic way. It implies that you already have an understanding of the map, for one thing, which is not currently there.

    The way I see it, we should keep the current format, which has only been marginally utilised by any of the AI implementations made so far, and bring the framework up to include the ability to implement maneuvers that take advantage of very basic things such as finding cover, evaluating firing angles from different positions, surrounding a position, taking into account the boundaries of the map, etc.

    Once we've gotten a lot more entries and mostly exhausted the possibilities of the current style of map, then maybe we should consider adding something small to it.
     
    Samuel411 likes this.
  19. Samuel411

    Samuel411

    Joined:
    Dec 20, 2012
    Posts:
    646
    Yeah, it sounds pretty cool and exciting. The hard part is getting things to be consistent, so you could get the same, or at least similar, results on different runs. That way a neutral judge can run the AI, check the code to make sure the rules are enforced, and report the wave that the AI got to.
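    Reproducible runs of the kind described here usually come down to seeding every source of randomness (a generic Python illustration, not challenge code; in Unity one would seed UnityEngine.Random.InitState in the same spirit):

```python
import random

def run_match(seed):
    """A toy 'match' whose outcome depends only on the seed."""
    rng = random.Random(seed)       # per-match RNG, never the global one
    score = 0
    for _ in range(10):
        score += rng.randint(0, 5)  # stand-in for in-game randomness
    return score

# Same seed -> identical result, so a judge can reproduce any run
# exactly and verify the reported score.
first = run_match(1234)
second = run_match(1234)
```

    Keeping the RNG local to the match (instead of using the global generator) is what prevents entrants' own random calls from perturbing the judge's replay.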