Have you ever made a game supporting both gamepads and touch gestures?

Discussion in 'Input System' started by runevision, Apr 28, 2016.

  1. runevision

    runevision

    Joined:
    Nov 28, 2007
    Posts:
    1,892
    If so, we'd like to hear from you and talk with you!

    Input from gamepads and the like is "global" in the sense that the input doesn't know about objects in the world.

    Input in the form of touch gestures is (often) context-sensitive in the sense that the gesture is performed on an object on the screen / in the world, and properties of this object are a key part of how the gesture works.

    We understand both of these systems well in isolation, but we'd like to talk with people who have made games or systems that support both elegantly, where the gameplay code doesn't need a lot of special cases or if/else conditions.

    If you have designed such a game or system, or if you know of people who have, then let us know!
     
  2. codeinclined

    codeinclined

    Joined:
    Apr 15, 2016
    Posts:
    2
    The way I'm approaching this in my strategy game is by having helper functions return a position regardless of the input method. I use a Vector3 within my scripts that I call targetPosition, and the script bases its behavior on this Vector3. If I'm using a mouse, targetPosition is set to the RaycastHit from the pointer's screen coordinates to, say, a ground plane. If I'm using a gamepad, I simply add its input value to a relevant object's world coordinates and set targetPosition to that. A simple switch statement picks between two helper functions: one casts the ray and returns the appropriate Vector3, the other takes the gamepad input and maps it to a relevant offset vector (my game has a flat ground plane, so this function maps the Horizontal axis to the vector's X coordinate and the Vertical axis to the vector's Z coordinate), which is then added to an origin point such as the currently selected character. I have a bool that determines whether or not this offset should be rotated to match the camera's Y rotation (I only allow the player to yaw and pitch the camera, so this tends to work out).
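
    To make that concrete, here's a rough sketch of the idea (the class and field names like TargetPositionProvider and gamepadRange are made up for this post; it assumes the legacy Input axes and a flat ground plane at y = 0):

    Code (CSharp):
    using UnityEngine;

    // Illustrative sketch only: one helper that returns targetPosition
    // whether the player is on mouse/touch or gamepad.
    public class TargetPositionProvider : MonoBehaviour
    {
        public Transform origin;          // e.g. the currently selected character
        public float gamepadRange = 5f;   // how far the stick can push the target
        public bool rotateWithCamera = true;

        public Vector3 GetTargetPosition(bool usingGamepad)
        {
            if (usingGamepad)
            {
                // Map the stick onto the ground plane: Horizontal -> X, Vertical -> Z.
                Vector3 offset = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                             Input.GetAxis("Vertical")) * gamepadRange;
                if (rotateWithCamera)
                    offset = Quaternion.Euler(0f, Camera.main.transform.eulerAngles.y, 0f) * offset;
                return origin.position + offset;
            }

            // Mouse / touch: raycast from the pointer onto the ground plane.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            Plane ground = new Plane(Vector3.up, Vector3.zero);
            float distance;
            return ground.Raycast(ray, out distance) ? ray.GetPoint(distance) : origin.position;
        }
    }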

    The question was about touch gestures, which I assume I will handle the same way as I described for the mouse _except_ that I'd need to make certain adjustments, because a touch can be thought of as a button push and position input simultaneously. Thus, I would probably add something like a confirm button at the bottom right corner, so that if a player is trying to create a targeting cone he can press the screen at various points to size it and see the results before submitting it. I've thought about a few ways touch can streamline this too. My current system has the user set the length of a targeting cone, click, adjust the width, and then click again. On touch I could have a single-finger press or swipe adjust the length and a pinch adjust the width.
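
    The pinch part could be as simple as comparing the distance between the two fingers this frame and last frame; rough sketch using the legacy touch API (coneWidth and pinchSensitivity are placeholder names):

    Code (CSharp):
    using UnityEngine;

    public class PinchWidthAdjuster : MonoBehaviour
    {
        public float coneWidth = 1f;
        public float pinchSensitivity = 0.01f;

        void Update()
        {
            if (Input.touchCount != 2)
                return;

            Touch a = Input.GetTouch(0);
            Touch b = Input.GetTouch(1);

            // Compare the current finger distance with the distance one frame ago.
            float currentDistance  = (a.position - b.position).magnitude;
            float previousDistance = ((a.position - a.deltaPosition) -
                                      (b.position - b.deltaPosition)).magnitude;

            coneWidth = Mathf.Max(0f, coneWidth + (currentDistance - previousDistance) * pinchSensitivity);
        }
    }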

    Long story short, I am trying to implement all of my code so that it doesn't really care what the player is using. All it sees are Vector3s and button-press actions.
     
  3. Jonny-Roy

    Jonny-Roy

    Joined:
    May 29, 2013
    Posts:
    666
    I did exactly this, but mine was an arcade game, so the on-screen input was a simple left and right plus a fire button. I created wrapper classes to handle the touch input and return left, right and fire, and then just checked both keyboard and touch. For the UI it was a case of either touching the UI directly or selecting with a joystick cursor.
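
    Something along these lines (the screen-thirds touch zones here are just an example, not my actual layout):

    Code (CSharp):
    using UnityEngine;

    // One place that answers "left / right / fire?" by checking both
    // the keyboard and on-screen touch zones.
    public static class ArcadeInput
    {
        public static bool Left  => Input.GetKey(KeyCode.LeftArrow)  || TouchIn(0f, 1f / 3f);
        public static bool Right => Input.GetKey(KeyCode.RightArrow) || TouchIn(1f / 3f, 2f / 3f);
        public static bool Fire  => Input.GetKey(KeyCode.Space)      || TouchIn(2f / 3f, 1f);

        static bool TouchIn(float minX, float maxX)
        {
            for (int i = 0; i < Input.touchCount; i++)
            {
                float x = Input.GetTouch(i).position.x / Screen.width;
                if (x >= minX && x < maxX)
                    return true;
            }
            return false;
        }
    }

    The gameplay code then only ever asks ArcadeInput.Left / Right / Fire and never knows which device answered.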
     
  4. Darress

    Darress

    Joined:
    Mar 5, 2015
    Posts:
    12
    Yes, we are trying to achieve just the same. And we are using the new input system for that.

    We have to support several platforms with different input devices: VR, touch, keyboard/mouse and joystick included. For VR I created a new input device, which was pretty easy. The touch part was really hard. I hacked the input system so I can override the values in an action map. I use TouchScript for identifying gestures, then feed the values to the new input system, so I can handle it uniformly. Not a nice solution, but with the time pressure that was the best I could do.

    I think it would be really nice to have gesture handling by default. TouchScript has a really nice way of handling them. I just did not really find a place in the new input system where I could inject gesture handling. And by the way, gesture handling should be input-device independent; you can make gestures with head tracking too.
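
    To illustrate what I mean by device independent: the recognizer only needs a stream of 2D positions and an "active" flag, it does not care whether that comes from a finger, a mouse or a projected head-tracking ray. A rough sketch (IPositionSource and SwipeRecognizer are made-up names, not from any existing API):

    Code (CSharp):
    using UnityEngine;

    // Anything that can produce a 2D position can feed the recognizer.
    public interface IPositionSource
    {
        bool IsActive { get; }            // finger down / button held / gaze engaged
        Vector2 CurrentPosition { get; }
    }

    public class SwipeRecognizer
    {
        readonly IPositionSource source;
        readonly float minDistance;
        Vector2 startPosition;
        Vector2 lastPosition;
        bool tracking;

        public event System.Action<Vector2> Swiped;   // fired with the swipe delta

        public SwipeRecognizer(IPositionSource source, float minDistance = 50f)
        {
            this.source = source;
            this.minDistance = minDistance;
        }

        // Call once per frame from whatever owns the recognizer.
        public void Update()
        {
            if (source.IsActive)
            {
                if (!tracking)
                {
                    tracking = true;
                    startPosition = source.CurrentPosition;
                }
                lastPosition = source.CurrentPosition;
            }
            else if (tracking)
            {
                tracking = false;
                Vector2 delta = lastPosition - startPosition;
                if (delta.magnitude >= minDistance)
                    Swiped?.Invoke(delta);
            }
        }
    }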

    As I understand it, action maps are fed from device states. So where would you implement gesture handling? And devices have no update methods where I could process gestures.