
Games Biosignature: Aliens-inspired 80s SciFi Top-Down Shooter (4 Player Couch/Online/Mixed Co-op)

Discussion in 'Works In Progress - Archive' started by PhilippG, Aug 4, 2016.

  1. Renin

    Renin

    Joined:
    Mar 10, 2016
    Posts:
    5
    I'm loving seeing this process unfold. You've inspired me to start a thread for my own project! I'm most interested in your dungeon generation system. Procedural game systems are something I'm very interested in learning eventually.
     
    Last edited: Jun 21, 2017
    PhilippG likes this.
  2. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Thank you! I can really recommend looking into procedural content generation, there is so much more you can do with it than levels. Good luck with Project Citadel, I will follow your progress! :)
     
  3. Marrt

    Marrt

    Joined:
    Feb 7, 2012
    Posts:
    613
    I am really interested in how you will implement the camera stuff: zooming thresholds, camera centers, screenshakes, tilts...
    Generally I believe that controls + camera should always be among the first functionalities implemented in any prototype.

    4-player co-op on one screen is a tall order, ever played that Assault Android Cactus game?
     
    PhilippG likes this.
  4. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Yes, well - controls and basic camera logic are already in the prototype, far from polished of course.
    Currently the way it works is pretty basic:

    For single player or online multiplayer, the local player is simply centered. In narrow spaces or close to walls the camera zooms in, and in wider spaces it zooms out.

    For local co-op, I generally use the center of all local players, with additional zoom out as they spread. The zoom out is limited, though. Whenever a player would leave the limit bounds, the camera stops taking that player into account (it also prioritizes Player 1, or the bigger group). When this happens, the player leaves screen space. I then plan to show an indicator pointing to where that player currently is. The player will try to find their way back on screen to not get lost or killed. I could probably also add a timeout respawn that teleports the lost player back to the others, but I want to take my time trying all this out before setting it in stone.
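    In Unity C#, the group framing described above could be sketched roughly like this. This is not the project's actual code, just a minimal illustration assuming an orthographic top-down camera; names like `GroupCamera` and the numeric limits are made up:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: center on the local players, zoom out as they
// spread, and stop tracking players that would push the zoom past its
// limit (Player 1 is always kept, mirroring the prioritization above).
public class GroupCamera : MonoBehaviour
{
    public List<Transform> players = new List<Transform>();
    public float minSize = 6f;    // zoom-in limit (orthographic size)
    public float maxSize = 12f;   // zoom-out limit
    public float padding = 2f;    // extra space around the group
    public float smoothing = 5f;

    Camera cam;

    void Awake() { cam = GetComponent<Camera>(); }

    void LateUpdate()
    {
        if (players.Count == 0 || cam == null) return;

        // Grow the framed area player by player, starting from Player 1,
        // but only as long as the required zoom stays within maxSize.
        // Stragglers fall out of screen space (indicator territory).
        Bounds view = new Bounds(players[0].position, Vector3.zero);
        foreach (var p in players)
        {
            Bounds candidate = view;
            candidate.Encapsulate(p.position);
            if (RequiredSize(candidate) <= maxSize)
                view = candidate;
        }

        float size = Mathf.Clamp(RequiredSize(view), minSize, maxSize);
        cam.orthographicSize = Mathf.Lerp(cam.orthographicSize, size, smoothing * Time.deltaTime);

        Vector3 target = new Vector3(view.center.x, transform.position.y, view.center.z);
        transform.position = Vector3.Lerp(transform.position, target, smoothing * Time.deltaTime);
    }

    float RequiredSize(Bounds b)
    {
        // Orthographic size is half the vertical view height;
        // the horizontal extent is divided by the aspect ratio.
        float vertical = b.size.z * 0.5f + padding;
        float horizontal = (b.size.x * 0.5f + padding) / cam.aspect;
        return Mathf.Max(vertical, horizontal);
    }
}
```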

    There are more things I haven't made final decisions on, like whether the players are able to rotate the camera or not. It is something I enjoy, but it takes away a lot of accessibility, and I actually want the controls to be as simple as possible. Yes, all this is something I haven't finalized yet - thankfully these problems have been solved before, so I don't really worry about it - however I am aware this can make or break the game, so yea...
     
    RavenOfCode and Martin_H like this.
  5. Martin_H

    Martin_H

    Joined:
    Jul 11, 2015
    Posts:
    4,436
    I would say leave the camera rotation out, because a) it tends to cause motion sickness when someone else rotates the cam for you, and b) your assets only need to look good from one direction if you know the cam won't ever see them from the other side. That can help you save on polycount and asset dev time. And c) you can make better assumptions about how the player will see the level in your PCG, to actively avoid situations where an important thing would only be visible from a certain camera angle that the player may or may not have chosen, because the rest of the level plays fine in any perspective. Know what I mean? In games where I can but don't have to rotate the cam, I very rarely do it because it tends to disorient me.
     
    PhilippG likes this.
  6. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    That is true, I have similar concerns. And you can get really confused when north is suddenly south, minimap or not.

    For procedural generation, yes, this will mean I shouldn't place important content near the southern walls, where it could be obscured by walls. That's the advantage of writing the generation algorithm myself, I guess - it shouldn't be hard to introduce such additional rules :)
     
    Last edited: Jun 28, 2017
    Martin_H likes this.
  7. Marrt

    Marrt

    Joined:
    Feb 7, 2012
    Posts:
    613
    So you'll show some kind of offscreen indicator, I guess. I once tested something like a bubble-cam indicator, a kind of bubble on the edge of the viewport that shows the out-of-bounds player. Here is a gif; the bubble outline is missing a pointy peak that should point in the player's direction, so it would look like a GPS-pin symbol.

    bubble.gif
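    The core of such an edge indicator can be sketched as follows. This is a rough illustration, not the code behind the gif; `indicator` is assumed to be a screen-space UI element:

```csharp
using UnityEngine;

// Rough sketch: clamp an off-screen player's viewport position to the
// screen edge and place a bubble indicator there. "indicator" is an
// assumed screen-space UI RectTransform, not from the original project.
public class BubbleIndicator : MonoBehaviour
{
    public Camera cam;
    public Transform player;         // the out-of-bounds player
    public RectTransform indicator;  // the bubble on a screen-space canvas

    void LateUpdate()
    {
        Vector3 vp = cam.WorldToViewportPoint(player.position);
        bool offscreen = vp.z < 0f || vp.x < 0f || vp.x > 1f || vp.y < 0f || vp.y > 1f;
        indicator.gameObject.SetActive(offscreen);
        if (!offscreen) return;

        // Clamp into the viewport with a small margin so the bubble stays visible.
        vp.x = Mathf.Clamp(vp.x, 0.05f, 0.95f);
        vp.y = Mathf.Clamp(vp.y, 0.05f, 0.95f);
        indicator.position = cam.ViewportToScreenPoint(new Vector3(vp.x, vp.y, 1f));
    }
}
```

    The pointy GPS-pin peak could then be added by rotating a child arrow towards `player.position` projected onto the screen.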
     
    PhilippG likes this.
  8. SoftwareGeezers

    SoftwareGeezers

    Joined:
    Jun 22, 2013
    Posts:
    902
    Very nice project! Great work, especially the procedural stuff. I was doing something similar for my cooperative dungeon crawler, which I've had to put on hold. When it comes to promotion though, pretty really sells. Can you add GI lighting or something to make the work you've got (great animations!) pop a little more? Nice particles also add flamboyance without costing a lot of time to make.

    Keep at it!
     
    PhilippG likes this.
  9. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Showing a second smaller viewport is actually also an interesting concept! This would allow for new mechanics where the players are forced to split up - classically that could be timed levers or step-on pressure plates, where one player needs to open the path for the rest of the group, but sci-fi-themed of course.

    I decided to drop true splitscreen very early in development, because it introduces a bunch of problems that I really don't want to mess with - handling audio in splitscreen is troublesome, and in the worst case I would need to render 4 cameras, which is pretty hefty performance-wise.

    However, a small, stylized second view - say, very low-res and with scanline effects and glitches, plus maybe an occasional loss of signal (think Duskers) - this would be reaaally cool! For 4 players that could in the worst case still mean 3 additional cameras rendering, though, so I need to carefully check if it is okay for performance.

    When I started writing this answer, I was like hmmm, but now I'm actually really into that idea! Thanks! :)
     
    Last edited: Jun 29, 2017
    theANMATOR2b likes this.
  10. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Thank you!

    Glad to hear you like the procedural system! :)

    As far as I know, GI and procedurally generated environments unfortunately still don't go well together. You'd need to bake/precompute at runtime, after the geometry has been generated, and afaik that is still not possible. If you or anyone reading knows differently, let me know. Anyway, I plan to concentrate on graphics more towards the end of development, and maybe there will be new solutions to this problem by then.
     
    AsmCoder8088 likes this.
  11. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Interlude 13-July-2017 - Let this number sink in

    So I currently spend most of my time working on the AI. Yesterday however, while testing AI agents against the generated levels, I realized that I never actually tracked how fast the generator is.

    So I measured it on my laptop, and in the geometry generation cycle (excluding the layout algorithm) it is able to generate:

    100 GameObjects per 1/60 sec avg

    So that's 6,000 tile and wall geometry prefabs per second.
    And now imagine someone had to place all that by hand.

    Have to love coding! :)
     
    Last edited: Jul 13, 2017
  12. SoftwareGeezers

    SoftwareGeezers

    Joined:
    Jun 22, 2013
    Posts:
    902
    I've seen a couple of realtime GI solutions touted for Unity on YouTube. They'll make a huge difference if they really work that well!

     
    PhilippG likes this.
  13. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Ah, yes that looks promising! Thanks! :)
     
    AsmCoder8088 likes this.
  14. AsmCoder8088

    AsmCoder8088

    Joined:
    Dec 7, 2016
    Posts:
    30
    One approach may be to run the generation algorithm in Unity with no GI in place, create say a few hundred levels, and save each of them to a 3D file that can be read by Unity. You then write a script that imports them back into Unity (without being in 'play mode' this time), adds lighting, bakes the scene, and saves it, doing this for each level that was saved earlier. You wind up with a lot of pre-generated scenes that, to the player, still seem procedurally generated -- just not generated at run-time.

    At least, that's the only way I can think of to get around the issue that you can't bake lighting while in 'play mode'.
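    The bake-and-save half of that workflow could be sketched as an editor script like this. It is a hypothetical sketch, not tested against the project; the scene paths are made-up examples, and note that `Lightmapping.Bake()` blocks the editor until the bake finishes:

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEditor.SceneManagement;
using UnityEngine;

// Hypothetical batch bake: re-open each pre-generated scene in the
// editor, bake its lighting synchronously, and save the scene again.
public static class BatchBake
{
    [MenuItem("Tools/Bake Pre-Generated Levels")]
    static void BakeAll()
    {
        // Example paths only; a real script would enumerate a folder.
        string[] scenePaths =
        {
            "Assets/GeneratedLevels/Level_000.unity",
            "Assets/GeneratedLevels/Level_001.unity",
        };

        foreach (string path in scenePaths)
        {
            var scene = EditorSceneManager.OpenScene(path, OpenSceneMode.Single);
            Lightmapping.Bake();                 // synchronous lightmap bake
            EditorSceneManager.SaveScene(scene); // persist baked lighting data
            Debug.Log("Baked " + path);
        }
    }
}
#endif
```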
     
    PhilippG likes this.
  15. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    I was thinking about workarounds like this, however I decided I don't want to drop real procedural content generation because of a lighting issue, at least for now.

    I'm going to keep an eye on the SEGI solution @Shifty-Geezer suggested. I'm not 100% convinced yet, because reportedly it can only handle scenes up to a limited size, plus it's still in beta. And light bleeding might also be an issue.

    I'd really like to evaluate it for my project and integrate it if it works well. @sonicether Any chance to get a trial version of SEGI pre-purchase?
     
    theANMATOR2b and AsmCoder8088 like this.
  16. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Dev Update 15-Aug-2017 - Flexible game AI

    Hello! After some more hard crunch time at my day job (with weekends and all), I could now finally return to indie dev! Although I will be busy in September, the plan for now and the upcoming winter is to get lots of things done for the game! So, what's new? Well, I decided to concentrate more on gameplay. And one of the most important keystones to me, when we’re looking at the top-down shooter genre, is enemy behavior.

    AI.jpg

    Game “AI”

    I actually don’t like calling game AI artificial intelligence, because in general game AI is just scripted behavior. It will not “learn”. It might adapt to the player's playstyle over the course of a game, but to me, that’s still not AI. A learning AI is unpredictable, and nobody wants that in a game, neither the developer nor the player (players just might not know it). Instead, we want to learn the patterns and be able to master predicting them. This enables us to get into the flow and dance the game’s choreography.

    I am looking at a variety of inspirational sources for my AI. On one hand, lots of inspiration comes from Doom and its very chess-like approach, which is simple but brilliant: What can we learn from Doom | Game Maker’s Toolkit – enemies have distinct movement and attack patterns and must be prioritized. Combining different types in different environments can create entirely fresh experiences.

    On the other hand, I am fascinated by more complex individual and group behavior. Enemies should have a bigger skillset than just charging and bullet-sponging – this is fine for zombies and the like, but depending on their type there should be variation, so we can also learn their individual traits.


    Behavior Trees

    For Biosignature, I decided to implement AI by writing my own behavior tree system. Behavior-tree-driven AI is quite common in games and there are plenty of resources. Give this article a read if you want to learn more about it, but simply put, it is a state machine in a hierarchical tree layout, where branches are usually either sequential or selective, and leaves are either conditions or actions. The AI works by continuously ticking along that tree and evaluating the right (valid) branches for a given scenario. The scheme below is pretty rough, but I think you can get the idea from it:
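    For illustration, a minimal behavior-tree core along these lines could look like this. It is a sketch of the general technique, not the project's actual implementation:

```csharp
// Minimal behavior-tree core: composite branches (Sequence/Selector)
// and leaf nodes (conditions/actions), ticked continuously.
public enum Status { Success, Failure, Running }

public abstract class Node
{
    public abstract Status Tick();
}

// Sequential branch: succeeds only if all children succeed, in order.
public class Sequence : Node
{
    readonly Node[] children;
    public Sequence(params Node[] c) { children = c; }
    public override Status Tick()
    {
        foreach (var child in children)
        {
            var s = child.Tick();
            if (s != Status.Success) return s; // Failure/Running bubbles up
        }
        return Status.Success;
    }
}

// Selective branch: succeeds on the first child that does not fail.
public class Selector : Node
{
    readonly Node[] children;
    public Selector(params Node[] c) { children = c; }
    public override Status Tick()
    {
        foreach (var child in children)
        {
            var s = child.Tick();
            if (s != Status.Failure) return s;
        }
        return Status.Failure;
    }
}

// Leaf: wraps a boolean check.
public class ConditionNode : Node
{
    readonly System.Func<bool> check;
    public ConditionNode(System.Func<bool> f) { check = f; }
    public override Status Tick() => check() ? Status.Success : Status.Failure;
}

// Leaf: wraps a (possibly multi-tick) action.
public class ActionNode : Node
{
    readonly System.Func<Status> act;
    public ActionNode(System.Func<Status> f) { act = f; }
    public override Status Tick() => act();
}
```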



    Actually, there is also quite a good asset for this on the Unity Asset Store, Opsive’s Behavior Designer. I was using it at work, it is really great to get started with, and I’d recommend it to anyone looking into AI. However, even with Behavior Designer I still wrote most of the actual nodes myself, so it was really just the visual debugging that I benefited from. I don’t want to trade full control over my code for that, which is why I'd rather implement my own system. I now use XML to write the actual trees and deserialize that into the game, which works quite well so far:
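    Purely for illustration, such an XML tree definition might look something like this; the node and attribute names here are invented, not the project's actual schema:

```xml
<!-- Hypothetical tree: attack the stored target if there is one,
     otherwise patrol. Names are illustrative only. -->
<Selector>
  <Sequence>
    <Condition type="HasTarget" var="bestTarget"/>
    <Action type="MoveTo" var="bestTarget"/>
    <Action type="Attack" var="bestTarget"/>
  </Sequence>
  <Action type="Patrol"/>
</Selector>
```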



    I want to give you a brief overview of how I approach the implementation based on a simple example: Imagine a dangerous alien roaming around the area. You and your friends need to get past it. The alien doesn’t know you’re there yet…

    Sense -> Think -> Act

    These are the three pillars of game AI. First, the alien needs to be able to sense its surroundings in some way: by seeing, hearing, and feeling. Then it can process that information to make decisions, and select and execute from the available set of actions.

    Sensors
    So, knowing what’s happening around it is crucial for the alien. Now, there are different ways it can spot you. For Biosignature, I implemented a whole set of different sensor types (I call them detectors) that can be combined and tweaked to fit my requirements:

    1. On sight. You are within a view cone, you are not obscured by any cover, you are within a certain distance (also measured on the navmesh), you are lit (TBD).
    2. On touch. You are colliding with the AI agent.
    3. On receiving damage. Imagine shooting and hitting the AI agent from behind.
    4. On perceiving noise. Imagine shooting and missing, or bumping into something, or just making too much noise by running. The AI agent should start to investigate the position the noise came from.

    In implementation, we need to keep an eye on performance! Detectors like touch, damage, or noise are event-based and therefore cheap, as they only evaluate once triggered. But both the view cone and the distance checks are problematic: they need to run continuously, and such checks are pretty expensive when done on tick. Even more so when we need to make the check for multiple entities, like in a multiplayer scenario, and even more when there are many AI agents running those checks.

    Optimization: To counter this problem, I introduced asynchronous checks that run in the background and keep evaluating at a much lower cost, while still allowing access to valid results at any time. Just like the tree itself, the checks can also be LOD’d – meaning that when an AI agent is farther away, like on the other side of the map, it checks much less often to keep the footprint low:
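    Such a LOD'd background check could be sketched with a simple coroutine like this; the check itself, the distances, and the intervals are made-up values for illustration:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of a LOD'd detector: the last valid result stays readable at
// all times, while the re-check interval grows with distance to the
// nearest target. Values are illustrative, not the project's.
public class DistanceDetector : MonoBehaviour
{
    public Transform[] targets;
    public float detectRange = 10f;

    // Nodes can read this at any time without triggering a fresh check.
    public Transform DetectedTarget { get; private set; }

    IEnumerator Start()
    {
        while (true)
        {
            float nearest = float.MaxValue;
            DetectedTarget = null;
            foreach (var t in targets)
            {
                float d = Vector3.Distance(transform.position, t.position);
                if (d < nearest) nearest = d;
                if (d <= detectRange) DetectedTarget = t;
            }

            // LOD: far-away agents re-check far less often.
            float interval = nearest > 40f ? 2f
                           : nearest > 20f ? 0.5f
                           : 0.1f;
            yield return new WaitForSeconds(interval);
        }
    }
}
```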



    Decision
    So in our example, you could either try to sneak past the alien or fight it. Let us now assume you failed sneaking past it, so it is now attacking you and your friends!

    Since you are in a group, the alien is able to spot more than one target at the same time. That means each of these detectors does not only have to “detect” as such, but also calculate a value for each valid target. For example, the distance check scores closer targets higher, but can be combined with a damage-inflicted value. I then sum up all evaluation values for each target and pick the highest. That means the AI is able to decide to pick the annoying sniper instead of the rushing tank.
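    The summing-and-picking step might be sketched like this; the score terms and weights are invented for illustration, not the project's actual tuning:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Per-target evaluation data gathered by the detectors (illustrative).
public struct TargetInfo
{
    public Transform transform;
    public float distance;         // e.g. path distance on the navmesh
    public float damageInflicted;  // damage this target has dealt to the agent
}

public static class TargetEvaluator
{
    // Sum the evaluation values per target and pick the highest.
    public static TargetInfo? PickBest(List<TargetInfo> targets)
    {
        TargetInfo? best = null;
        float bestScore = float.MinValue;
        foreach (var t in targets)
        {
            float score = 10f / (1f + t.distance)     // closer is better
                        + 0.5f * t.damageInflicted;   // active threats score higher
            if (score > bestScore)
            {
                bestScore = score;
                best = t;
            }
        }
        return best; // the annoying sniper can now outscore the rushing tank
    }
}
```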

    However, to make decisions, a blank behavior tree is not enough! Let me explain. Nodes in behavior trees are generally implemented so that they clean up after themselves, so that no matter in what order they are ticked, they do not lead to any unwanted behavior. Imagine a move node being interrupted midway because the agent was hit and a death animation starts playing, but the move node does not stop the movement – uh oh. So we need to wipe all data and actions on a node when we are done with it.

    But you still need a place to store and share data between those nodes: the agent's memory. This is usually called a blackboard. Every agent has its own blackboard, and every node of the agent's behavior tree must be able to access it. For my blackboard implementation, I simply use a Dictionary<string, object> (varname, data). This way I am able to store data in the most generic way possible. I can name, write, and fetch any data from the XML just by defining variable names and using “the right” nodes, meaning they just need to use the same data type.
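    A minimal blackboard along those lines might look like this; a sketch only, the actual implementation may differ:

```csharp
using System.Collections.Generic;

// Minimal blackboard: generic, name-keyed storage shared by all nodes
// of one agent's tree. Readers and writers of the same name must agree
// on the data type, as described above.
public class Blackboard
{
    readonly Dictionary<string, object> data = new Dictionary<string, object>();

    public void Set<T>(string name, T value) => data[name] = value;

    public T Get<T>(string name)
    {
        // Returns default(T) when the name is missing or the type mismatches.
        return data.TryGetValue(name, out var value) && value is T typed
            ? typed
            : default;
    }

    public bool Has(string name) => data.ContainsKey(name);
}
```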

    So I can now save the best evaluated target into the blackboard, and on another branch of the tree I check whether that target is not null; if so, it simply executes “moveto” and “attack” nodes on it.

    Acting
    The alien will now fix its dead eyes on its prey, and with a loud hiss it will sprint towards it to rip it apart. “Acting” is everything from movement to animation to playing audio clips on the AI agents.

    Since Biosignature is a networked multiplayer game, all the evaluation and decision-making will only run on the server machine. All the acting, however, needs to be distributed to all client machines. The challenge now is to write the system in a way that lets me trigger as much acting as possible with as little game-specific code added to the behavior tree code. While there is probably no way to avoid some game-related nodes, there are many occasions to keep everything generic on this end. Imagine the alien is wounded and retreats in order to heal (maybe by returning to the hive). I’d just evaluate the best/closest “hive” entity position and execute “moveto”, maybe along with another audio clip, instead of adding a specific “findHive” node.

    And that's it, the AI framework is pretty much complete now! Here you can see a proof of concept of a melee enemy that was done during development (graphics to be replaced!):



    I will continue working on AI and enemies, and explore the possibility space of movement patterns and behaviors to create interesting gameplay. There will be a big break for me in September, so expect the next update somewhere in late October ;)

    See you next time!
    Philipp :)
     
    RavenOfCode, theANMATOR2b and Marrt like this.
  17. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Small Update - 6-Nov-2017 - Hey there, I'm back...

    ...from a really great honeymoon trip to the US! Yeah! :)
    Over the last two weeks I got back to development, and for most of the time I worked on the AI, optimization, and some nice level generation tweaks.

    Progress on Enemy AI
    So far I have three diverse enemy types running:
    a close-combat enemy and a ranged enemy, who both attack on sight and investigate noises (gunshots...), and a scavenger (rat/bug-like) that is only aggressive when attacked or when another scavenger nearby is killed.

    Optimization
    I spent some time on optimization - foremost because I wanted to keep developing on my onboard-graphics notebook - and I could significantly boost performance by
    1. Optimizing how shadow-casting lights are distributed in the generation cycle,
    2. Adding a fog-of-war system that culls away rooms completely whenever a room is closed,
    3. And also (sadly) dropping the custom culling system, which was still performing worse than the built-in one. As long as there is no fix for the oblique culling matrices bug, I need to rely on just adjusting the near clipping plane, and can therefore only render a true 90° top-down view now. But it still looks good, I suppose :)
    Sometimes I now get back up to 60 fps, but it's more like 40 on average. It's really just rendering that costs now though, so I guess that's all good for a 3D game that relies on dynamic lighting and shadows.

    I'm currently hesitant to show more footage because I have some asset store placeholders for the enemies in there - soooo I think it is about time to start working on graphics again, along with the gameplay.
    Well, we'll see.

    See you then!
    Philipp :)
     
    Last edited: Nov 6, 2017
  18. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Dev Update 5-Dec-2017 - Stepping sidewards

    Hey there! Long time no see, but I can say I've been pretty busy! I worked on Biosignature some more and ran a new live test session with my coworkers, which went well. I then decided to move on to adding gameplay, and what I wanted to do next was terminal hacking, which would naturally mean minigames with a reward for success and a penalty for failure. This is when things went "sidewards"...

    From minigame to side project
    I wanted to do something similar to the Alien: Isolation minigames. While I wanted it to look like hacking, it should be easy to understand yet demanding. It should draw the player's attention, so with the chance of being attacked at any time, I wanted it to be very tense. I played around with text rendering, because I wanted that Matrix look in there:



    The idea for my first minigame was that it can be controlled with only directional inputs from the keyboard or controller. The required input is indicated by the triangle. The player has to enter the right input ten times to succeed, a wrong input drops the progress again, and three wrong inputs fail the hacking attempt. I can add variation and difficulty by simply adding more lines, shifting lines or rows, and including or excluding the triangle from the shifting.
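    The win/lose bookkeeping of that first minigame could be sketched like this. Illustrative only; the post doesn't say whether a wrong input resets progress fully or partially, so the full reset here is an assumption:

```csharp
// Sketch of the described rules: ten correct inputs succeed, three
// wrong inputs fail, and a wrong input drops the progress again
// (assumed here: back to zero).
public class HackAttempt
{
    public int Progress { get; private set; }   // 0..10 correct inputs
    public int Mistakes { get; private set; }   // 0..3 wrong inputs
    public bool Succeeded => Progress >= 10;
    public bool Failed => Mistakes >= 3;

    // Called once per directional input, with whether it matched the triangle.
    public void EnterInput(bool correct)
    {
        if (Succeeded || Failed) return;
        if (correct)
        {
            Progress++;
        }
        else
        {
            Mistakes++;
            Progress = 0; // assumption: a mistake wipes the progress
        }
    }
}
```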

    So on to the second game. With that one I wanted to try something different. I've been working on my level generation systems for quite a while, and I had the idea of reusing part of them to generate a labyrinth. I also wanted to keep that Matrix look, hence I kept it text-based:



    The player has to find the way to the exit to complete the hack. I also added hazards (the 'X's) that have to be avoided. If the player gets hit three times, the hack fails. This one is also played with only directional inputs.

    Now, this second minigame turned out to be quite fun! So on the evening I finished it, I decided to make a standalone version. I added a background scene and some atmospheric ambience/music to give it a more meditative and dark feel. I then gave that version to my coworkers:



    And they really enjoyed it! Which made me think: why not try to turn this into something bigger?! It feels like the right thing to do, and I could honestly use a break from working on Biosignature to start a fresh, smaller side project.

    I have some free days after Christmas this winter, and the plan is to build a solid alpha then. I have already written down some core features, and I think it's going to be surprisingly refreshing. I can't wait to show you more!

    So, see you next time!
    Philipp :)
     
    theANMATOR2b and Marrt like this.
  19. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    I'm at Unite!