The Gesture Events


Introduction

In this tutorial we are going to look at the different Gesture Events available to an object in GameMaker Studio 2. The Gesture Event is an event category that you can add to an object in the Object Editor, and it contains 12 different sub-events that can be triggered by different "gestures":

The Gesture Events

A "gesture" can be detected by touching the screen of a mobile device or through a mouse click (and any further mouse movement afterwards), and the gesture events fall into two types:

  • Instance Gestures: these gesture events will only be triggered when the initial touch/click is on an instance within the room and the instance has a valid collision mask (see The Sprite Editor - Collision Mask and The Object Editor - Collision Mask sections of the manual for more details on collision masks).

  • Global Gestures: these gesture events will be triggered by touches/clicks anywhere in the room and do not depend on a collision mask, so that all instances with a global gesture event will have it triggered regardless of whether the initial touch or tap was on the instance or not.

As we have mentioned, when a gesture event is recognised, it will trigger one or more of the available sub-events, and the sub-event triggered will depend on the type of gesture that has been detected - either a tap, a drag or a flick. In every case, however, a DS Map will be generated for you and stored in the built-in variable event_data which will contain the keys and values shown in the following table:

Key | Value Description
"gesture"
This is an ID value that is unique to the gesture that is in play. This allows you to link the different parts of multi-part gestures (such as drag start, dragging and drag end) together.

"touch"
This is the index of the touch that is being used for the gesture. In general this will start at 0 and increase for each finger that is held down, then reset back to 0 when all fingers are removed, but if the user is touching the screen anywhere else when this event is triggered by another touch, then the value will be greater than 0.

"posX"
This is the room-space X position of the touch.

"posY"
This is the room-space Y position of the touch.

"rawposX"
This is the raw window-space X position of the touch (equivalent to getting the mouse position using device_mouse_raw_x()).

"rawposY"
This is the raw window-space Y position of the touch (equivalent to getting the mouse position using device_mouse_raw_y()).

"guiposX"
This is the gui-space X position of the touch (equivalent to getting the mouse position using device_mouse_x_to_gui()).

"guiposY"
This is the gui-space Y position of the touch (equivalent to getting the mouse position using device_mouse_y_to_gui()).

"diffX"
This is the room-space X difference between the position of the current touch and the position of the last touch in this gesture.

"diffY"
This is the room-space Y difference between the position of the current touch and the position of the last touch in this gesture.

"rawdiffX"
This is the raw X difference between the position of the current touch and the position of the last touch in this gesture.

"rawdiffY"
This is the raw Y difference between the position of the current touch and the position of the last touch in this gesture.

"guidiffX"
This is the gui-space X difference between the position of the current touch and the position of the last touch in this gesture.

"guidiffY"
This is the gui-space Y difference between the position of the current touch and the position of the last touch in this gesture.

"isflick"
Only available in the Drag End event. This is set to 1 if the end of the drag is detected as a flick, meaning that you don't need a separate Flick Event if you're handling dragging anyway.


The returned values are designed to be as versatile as possible, permitting you to detect touches using the raw screen resolution, the GUI layer resolution, or the room/view resolution as the base values. We won't be using most of these in this tutorial, but we will use some of them - especially when it comes to dragging objects around and flicking them - and don't worry too much if you are unsure about using DS maps, as we'll explain things a bit more as we go along.
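
As a quick illustration of how these keys are read, a Tap event might pull a few values out of event_data like this (a sketch for illustration only, not code from the tutorial project):

// Sketch: read some of the event_data keys in a Tap event
var _id = event_data[? "gesture"]; // unique ID for this gesture
var _xx = event_data[? "posX"];    // room-space X position of the touch
var _yy = event_data[? "posY"];    // room-space Y position of the touch
show_debug_message("Tap " + string(_id) + " at " + string(_xx) + "," + string(_yy));
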


The Tap Event

The first Gesture Event we are going to look at is the Tap Event. This event will be triggered when the user touches or clicks and then releases all in one gesture. If you've looked over the project, or even run it, you'll have seen that we have a room with three "crate" instances in it that are physics enabled but don't actually do anything yet. We are going to change that by adding some gesture events to the different objects and have them react in different ways.

The first object we are going to edit is the object "obj_Crate_Parent", so open that now if you haven't already. You can see that we already have a few events defined in it to deal with collisions and to set the instance up, but there is nothing in it yet to permit the user to interact with it, which is what we are going to start adding now.

obj_Crate_Parent Events

The first thing we are going to do is open the Create Event of the object and add a new instance variable:

obj_Crate_Parent: Create Event

selected = false;


This variable will be true if the instance is selected and false if it's not.

We now need to open the Draw Event and add a little code to outline the sprite so the user knows that it has been selected. We already have a line in there to tell GameMaker Studio 2 to draw the instance (if you add any code to a draw event, GameMaker Studio 2 stops drawing the assigned sprite by default and leaves what is drawn entirely up to you), so add the following after that:

obj_Crate_Parent: Draw Event

if selected
    {
    draw_sprite_ext(spr_Select, 0, phy_position_x, phy_position_y, 1, 1, image_angle, c_white, 1);
    }


We have the variable set up and we can draw an outline around the selected crate, but what about actually detecting the touch/click to select or deselect? For that we'll use a Tap gesture event, so add that now (click the "Add Event" button, then select the "Gesture" category and "Tap"), then add the following code:

obj_Crate_Parent: Tap Event

selected = !selected;


The variable selected is a boolean value (which means it is either true or false), and as such we can use the "not" operator (the "!" symbol) to switch between these two values by negating them. So, if selected is true, not selected is false and if selected is false, not selected will be true. This is a really nice shorthand version of writing this:

if selected == true
    {
    selected = false;
    }
else
    {
    selected = true;
    }


You can run the project now, and if you click (or touch, if you are using a mobile device) any crate object, then you should see that the crate becomes "selected", and if you click/touch it again then you should see that it deselects:

Select Crates Animation


The Double Tap Event

Now that you've run the project and seen that you can touch/click an instance to select it, let's look at the next event in the Gesture category - The Double Tap Event. This event is only triggered if a "double tap" is detected, where a "double tap" is defined as two short touches/clicks and releases. We're going to edit the object "obj_Crate_Explode" for this one, so open that now.

obj_Crate_Explode Object Properties

Before we go any further, it's important to note that when you open this object, you'll see that the Parent window opens too. This is because the object "obj_Crate_Explode" is a child of the object "obj_Crate_Parent", meaning it will inherit events from the parent object. You can see this in action right now when you run the project: the object "obj_Crate_Explode" has no events defined for it, yet it still responds to a touch/click by being selected/deselected. This is because it "inherits" the events from its parent object automatically, so you don't have to write two sets of the same code to get the same effect.

Parenting is a powerful tool that permits you to create behaviours and events in one object and have them "carry over" to all the child objects, keeping the code tidy and easy to edit. Note, however, that you can override parent events by adding code into the same event of the child object. In our project, for example, if we gave the object "obj_Crate_Explode" a Tap event, it would no longer respond to the parent object's Tap event (although the child event can still run the parent's code by calling the function event_inherited()).
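
For example, if we did want a child object to add its own behaviour on top of the parent's Tap logic, the child's Tap event could call event_inherited() first. This is a hypothetical sketch, not something we add in this project, and the image_blend tinting is just for illustration:

obj_Crate_Explode: Tap Event (hypothetical sketch)

event_inherited();          // run the parent Tap event first (toggles "selected")
if selected
    {
    image_blend = c_red;    // then add child-specific behaviour on top
    }
else
    {
    image_blend = c_white;
    }
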

We're not going to override that event though and instead we are going to add a Double Tap event (this will only apply to the instances of this object, since it is not being added to the parent object). Add the event now from the Gestures event category and then add the following code (note that you should delete the default comments - if you have them enabled - before adding the following):

obj_Crate_Explode: Double Tap Event

/// @description Explode The Crate

if selected
    {
    var _xx = 64 + random(room_width - 128);
    instance_create_layer(_xx, 32, layer, object_index);
    effect_create_above(ef_explosion, phy_position_x, phy_position_y, 2, c_yellow);
    instance_destroy();
    }


In this event, as in the parent object's events, we first check to see if the instance has been selected; if it has, we create a copy of the instance at a random position along the top of the room, create an explosion effect, and then destroy the instance itself. It's worth noting that we have also added a JSDoc "description" tag to the event on the very first line. This is how you can give names or descriptions to any event in an object, making it easier to see at a glance what is happening from the Event Editor.

You can test the project again now and you should see that if you select the red crate instance then double tap it, it will explode and create another one. Try deselecting the crate and then double tapping it too (it shouldn't explode), and try double tapping on the other crates (they shouldn't do anything either).


The Drag Events (1)

Detecting a "tap" on an instance fires a single event when a touch/click is detected and then released. But what happens if the user does not release it? In that case, the Drag Events will be triggered. There are three Drag Events, and each one will be triggered at a specific moment:

  • Drag Start: This will be triggered when the user touches/clicks and then maintains the pressure. If this event is triggered then a Tap event will not be triggered.

  • Dragging: This will be triggered for every step in which the drag position changes by more than a minimum threshold (which is set to 0.2 inches). So, if the user touches/clicks and then moves around, this event will be triggered every time the position changes, updating the internal DS Map with the new position (amongst other details).

  • Drag End: This event will be triggered when the user releases the touch/click, but only if a Drag Start event has been triggered previously.

In the test project you need to open up the object "obj_Crate_Drag". This is another child object of "obj_Crate_Parent", and so inherits all the same events and can be selected/deselected, but we also want it to have some variables of its own that aren't just those of the parent. For that we need to add a Create Event to the instance with the following code:

obj_Crate_Drag: Create Event

/// @description Setup Drag Object Vars

event_inherited();
drag_offset_x = 0;
drag_offset_y = 0;
drag_x = phy_position_x;
drag_y = phy_position_y;
drag = false;


As before we add an event descriptor, but then we call the event_inherited() function so that the instance will inherit the parent Create Event (and so inherit the "selected" variable). We then create five new instance scope variables: two to hold the offset positions of a touch/click (we'll need these in the different drag events to make sure that the instance is positioned relative to the touch/mouse position and doesn't "jump" around the screen) as well as two to hold the current position and one to tell the instance when it's being dragged.

NOTE: We use the variables phy_position_x and phy_position_y instead of the regular x and y built in variables for position since the instance has physics enabled, but for non physics objects you'd just use x and y.

Now we can add a Drag Start event to detect the user holding down their finger/mouse on the instance. Add this event now, and give it the following code:

obj_Crate_Drag: Drag Start Event

/// @description Setup Drag Object Vars

if selected
    {
    drag = true;
    var _xx = event_data[?"posX"];
    var _yy = event_data[?"posY"];
    drag_x = phy_position_x;
    drag_y = phy_position_y;
    drag_offset_x = drag_x - _xx;
    drag_offset_y = drag_y - _yy;
    }


The Drag Start event sets the controller variable drag to true and then sets drag position variables to the current position in the room, and the drag offset variables are set to be the relative offset position for the touch/click on the instance using the built in event_data DS Map. We get the position of the detected touch/click in the room (by getting the values from the "posX" and "posY" map keys) and then subtract that from the current position of the instance to get the offset, which we'll use in the Dragging Event of the instance to move it around.

NOTE: We get the data from the event_data DS map using the map ? accessor, but you can also use the function ds_map_find_value().
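
The two forms are interchangeable, so the lines above that read the touch position could equally be written as:

var _xx = ds_map_find_value(event_data, "posX");
var _yy = ds_map_find_value(event_data, "posY");
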

Before we get to the Dragging Event though, we need to add a Step Event with this code:

obj_Crate_Drag: Step Event

/// @description Move The Instance

if drag
    {
    phy_position_x = drag_x;
    phy_position_y = drag_y;
    }


The above code will simply move the instance to the drag position when being dragged.


The Drag Events (2)

If we were to run the project just now and select the blue crate, nothing much would happen because we don't update the position as the user drags their finger/mouse across the room. To remedy this, we now need to add a Dragging Event to the instance. Do that now and add the following code:

obj_Crate_Drag: Dragging Event

/// @description Move The Instance

if selected
    {
    var _xx = event_data[?"posX"];
    var _yy = event_data[?"posY"];
    drag_x = _xx + drag_offset_x;
    drag_y = _yy + drag_offset_y;
    }


As before, we get the current touch/click position from the DS Map event_data, and then we apply this to the drag position variables along with the previously calculated offset values. These values are only updated when the touch moves more than 0.2 of an inch, and not every step of the game (the threshold distance can actually be changed using the GML gesture settings functions).
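
For example, a hedged sketch of changing that threshold, assuming the gesture settings function gesture_drag_distance() takes its value in inches as described in the GameMaker documentation:

// Sketch: lower the drag threshold so Dragging events fire more often.
// Assumes gesture_drag_distance() takes a value in inches (the default is 0.2).
gesture_drag_distance(0.1);
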

We have one final event to add to finish our dragging object code, and that's the Drag End Event. You should add that now and then give it the following code:

obj_Crate_Drag: Drag End Event

/// @description End Dragging

drag = false;


That's all we need in this event to tell the instance to stop moving to the drag x/y position. You can test the project now, and you should see that if you select the blue crate then touch/click and drag, it will follow the position of the finger/cursor around the room, and when you release it, the instance will fall to the floor.


The Flick Events

We have added tap events and drag events to our crate instances, but we still have to cover the Flick Event. This event is designed to detect when an instance has been "flicked", ie: the user has dragged and released their finger/cursor all in one movement to "flick" or "throw" the instance. Currently, when you drag the blue crate and then release it, it doesn't matter how fast you were dragging - it just falls straight to the floor, which isn't very satisfying. So, we'll use the flick event to push the instance in the direction of the "flick" movement...

We'll add the Flick Event into the parent object and permit all the instances to be flicked, so open up the object "obj_Crate_Parent" and add a Flick Event to it with the following code:

obj_Crate_Parent: Flick Event

/// @description Flick The Instance

flickVelX = event_data[?"diffX"];
flickVelY = event_data[?"diffY"];
phy_linear_velocity_x = flickVelX * 25;
phy_linear_velocity_y = flickVelY * 25;


Here, we get the "diffX/Y" values from the built-in event_data DS map. These values reflect the difference in position between the last Dragging Event and the release Flick Event, calculated by subtracting the previous x/y position from the current one. These values can then be used to set speed or other variables. In this case, because the instance is physics enabled, we use the phy_linear_velocity_x/y variables to set the instance moving in the correct direction on flick.

If you run the project now, you can "flick" any crate in the room and see it fly across the room, not just those that are selected.

It's worth noting, before we finish this tutorial, that the Drag End event will also detect a flick, meaning that you do not always have to add a Flick Event. The event_data DS map of the Drag End event has an extra key that is only present in this event: "isflick". You can check this in the Drag End event and deal with a flick if it returns true (it will return false if the release of the mouse/finger does not trigger a Flick Event). So, if we wanted to have the above code only affect the blue crate, we wouldn't use the Flick Event, but instead add the following into the Drag End Event after the existing code:

obj_Crate_Drag: Drag End Event

if event_data[? "isflick"] == true
    {
    flickVelX = event_data[?"diffX"];
    flickVelY = event_data[?"diffY"];
    phy_linear_velocity_x = flickVelX * 25;
    phy_linear_velocity_y = flickVelY * 25;
    }


The Pinch Events (1)

The following parts of this tutorial require a touch screen device for iOS, Android or UWP, as they cover the Pinch and Rotate events, which use two touches on the screen to function. First we'll look at the Pinch Events, which are designed to detect two moving touches on a screen: a "pinch" movement inwards or outwards around a central point. This is very useful in many situations - for letting the user make an item bigger, for expanding menus, or - and this is what we'll do here - for making a zoom in/out feature for the game area.

Before going any further, we should first look at the DS Map "event_data" that is generated by the Pinch events, as its contents will be different to those of the Tap, Drag and Flick events:

Key | Value Description
"gesture"
This is an ID value that is unique to the gesture that is in play. This allows you to link the different parts of multi-part gestures (such as drag start, dragging and drag end) together.

"touch1"
This is the index of the first touch that is being used as part of the pinch gesture. In general this will be 0, but if the user is touching the screen anywhere else when this event is triggered by another touch, then the value will be greater than 0.

"touch2"
This is the index of the second touch that is being used as part of the pinch gesture. In general this will be 1 more than the value for touch1, but may be some other value depending on the number of touches being detected elsewhere.

"posX1"
This is the room-space X position of the first touch.

"posY1"
This is the room-space Y position of the first touch.

"rawposX1"
This is the raw window-space X position of the first touch (equivalent to getting the mouse position using device_mouse_raw_x()).

"rawposY1"
This is the raw window-space Y position of the first touch (equivalent to getting the mouse position using device_mouse_raw_y()).

"guiposX1"
This is the gui-space X position of the first touch (equivalent to getting the mouse position using device_mouse_x_to_gui()).

"guiposY1"
This is the gui-space Y position of the first touch (equivalent to getting the mouse position using device_mouse_y_to_gui()).

"posX2"
This is the room-space X position of the second touch.

"posY2"
This is the room-space Y position of the second touch.

"rawposX2"
This is the raw window-space X position of the second touch.

"rawposY2"
This is the raw window-space Y position of the second touch.

"guiposX2"
This is the gui-space X position of the second touch.

"guiposY2"
This is the gui-space Y position of the second touch.

"midpointX"
The X position of the mid point between the two touches in room space.

"midpointY"
The Y position of the mid point between the two touches in room space.

"rawmidpointX"
This is the raw window-space X position of the mid point.

"rawmidpointY"
This is the raw window-space Y position of the mid point.

"guimidpointX"
This is the gui-space X position of the mid point.

"guimidpointY"
This is the gui-space Y position of the mid point.

"relativescale"
This is the difference in scale compared to the last event in this gesture (so for Pinch In events this will always be smaller than 1.0, whereas for Pinch Out events it will always be larger than 1.0).

"absolutescale"
This is the scale compared to where the fingers were when the gesture started (so if the distance between the fingers has halved then this will be 0.5 whereas if the distance has doubled it will be 2.0).


As you can see, we can retrieve the position of the touches as either a raw screen position, a room position or as a GUI layer position, meaning that these events can be used in just about any circumstances. We also get a set of values for the midpoint of the gesture. This is calculated as the point halfway between the two initial touches, and it's important to note that when checking for a gesture using the instance Pinch Events (not the global ones), it is the midpoint that is used and not the actual touch positions.
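
To make the relationship concrete, at the moment the gesture starts the room-space midpoint keys should correspond to the average of the two touch positions (a sketch for illustration only):

// Sketch: at the start of the gesture, the midpoint is the average of the two touches
var _mid_x = (event_data[? "posX1"] + event_data[? "posX2"]) * 0.5; // ~ event_data[? "midpointX"]
var _mid_y = (event_data[? "posY1"] + event_data[? "posY2"]) * 0.5; // ~ event_data[? "midpointY"]
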

Before we can go ahead and use the pinch events to create a camera zoom effect, we first need to enable views in the game room and set up some variables to hold values we'll need later. We also need to make a new object for this, so do that now and call it "obj_Camera_Control". We don't need to give this object a sprite, but we do need to give it a Create Event with the following code:

obj_Camera_Control: Create Event

/// @description Setup Vars And Camera

// Setup camera
view_enabled = true;
view_visible[0] = true;
view_camera[0] = camera_create_view(0, 0, room_width, room_height, 0, noone, 0, 0, 0, 0);


Here we enable the use of views, make viewport 0 visible, and then create a camera with a view the size of the room, assigning it to viewport 0. We then need to set up our variables:

obj_Camera_Control: Create Event

// Setup Vars
rotating = false;
pinching = false;
view_a = 0;


We have initialised a variable for pinching which will be true while any pinch gesture is being triggered and false otherwise, and we've also added in two other variables to control the camera view angle which we'll use later when we talk about the Rotate Event. For the sake of this tutorial we are going to add in some debug variables as well. These variables are not required when working on your own projects, but as you'll see they will help you visualise what is happening when we use the Pinch (and later the Rotate) Events:

obj_Camera_Control: Create Event

//Debug
touch_x[0] = 0;
touch_y[0] = 0;
touch_x[1] = 0;
touch_y[1] = 0;
midpoint_x = 0;
midpoint_y = 0;


We will also take a moment to add in a Clean Up event. This will be triggered on room end or if the instance is destroyed and is where you would normally delete any dynamic resources for an instance to prevent memory leaks. In this we need to add a single line of code to tell GameMaker Studio 2 to remove the camera we created from memory:

obj_Camera_Control: Clean Up Event

/// @description Remove Camera

camera_destroy(view_camera[0]);


As mentioned above, we have added variables to help us debug the events we are using, so let's add in a Draw Event before continuing. This event will use the debug variables to draw the touches on the screen as well as the center point for them (the midpoint for a pinch and the pivot point for the rotate). Add in a Draw Event now with the following:

obj_Camera_Control: Draw Event

/// @description Debug Draw

if pinching || rotating
    {
    var _scale = camera_get_view_width(view_camera[0]) / room_width;
    draw_circle_colour(touch_x[0], touch_y[0], 48 * _scale, c_yellow, c_yellow, false);
    draw_circle_colour(touch_x[1], touch_y[1], 48 * _scale, c_blue, c_blue, false);
    draw_circle_colour(midpoint_x, midpoint_y, 16 * _scale, c_green, c_green, false);
    }


Here we check the control variables and, if either of them is true, we draw circles on the screen to represent the positions of the fingers and the midpoint/pivot. Note that the circle radii are scaled by the current view width relative to the room width, so the circles keep the same apparent size when the view is zoomed. You can now open the Room Editor and drag an instance of our "obj_Camera_Control" object into the game room, and then run the game again. If you have done everything correctly you shouldn't see any difference at all... yet!


The Pinch Events (2)

We have set up the camera and we have initialised our variables so our Create Event should look like this:

Camera Controller Create Event

We can now add in our zoom feature using the Pinch Event, so start by adding a Global Pinch Start Event (we are using the global events now as we want to detect a pinch from anywhere in the room) and add into it the following code:

obj_Camera_Control: Global Pinch Start Event

/// @description Start Zoom and Set Vars

pinching = true;

// Debug
touch_x[0] = event_data[? "posX1"];
touch_y[0] = event_data[? "posY1"];
touch_x[1] = event_data[? "posX2"];
touch_y[1] = event_data[? "posY2"];
midpoint_x = event_data[? "midpointX"];
midpoint_y = event_data[? "midpointY"];


Here all we are really doing is setting our pinch controller variable to true so we know we're performing a pinch event, but for the tutorial we are also setting our debug values using information from the "event_data" DS map. We now need to add in a Global Pinch In Event, and give it the following code:

obj_Camera_Control: Global Pinch In Event

/// @description Zoom In

var _scale = event_data[? "relativescale"];
var _w = camera_get_view_width(view_camera[0]);
var _h = camera_get_view_height(view_camera[0]);
_w *= _scale;
_h = _w * (room_height / room_width);

var _x = (room_width / 2) - (_w / 2);
var _y = (room_height / 2) - (_h / 2);

camera_set_view_pos(view_camera[0], _x, _y);
camera_set_view_size(view_camera[0], _w, _h);

// Debug
touch_x[0] = event_data[? "posX1"];
touch_y[0] = event_data[? "posY1"];
touch_x[1] = event_data[? "posX2"];
touch_y[1] = event_data[? "posY2"];


Here we get the current camera view width and height, then scale them based on the relative scale of the pinch (ie: how much the pinch has changed since the last event). Note that the height is recalculated from the scaled width so the view always keeps the room's aspect ratio. We then use these scaled values to set the width, height and position of the view, keeping it centred on the middle of the room.
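
One thing to watch out for in your own projects is that repeated pinches can zoom the view far outside (or deep into) the room. A simple guard - a sketch with arbitrary limits, placed immediately after the _h calculation - would clamp the scaled width before applying it:

// Sketch: clamp the zoomed view width between 1/4 of the room and the full room
_w = clamp(_w, room_width / 4, room_width);
_h = _w * (room_height / room_width); // recalculate the height from the clamped width
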

For zooming out we can simply duplicate this event in the Event Editor, as the code required is exactly the same - only the returned value for "relativescale" will be different, and so affect how the view is displayed. To duplicate an event, right click on the Global Pinch In event and select Duplicate, then select the Global Pinch Out event. You should see that the event gets added, and if you select it, it will show the code we just added. You can change the @description if you want to:

obj_Camera_Control: Global Pinch Out Event

/// @description Zoom out


The last Pinch Event that we need to add is the Global Pinch End Event, and in that we place this code:

obj_Camera_Control: Global Pinch End Event

/// @description End Zoom

pinching = false;


If you run the game now you should be able to touch two fingers to the screen and move them together and apart to zoom in and out the camera view, and - thanks to our debug code - you should also see a blue and yellow circle where your fingers touch and a smaller green circle where the initial midpoint between them was.


The Rotate Events

We can now look at the Rotate Events. Since we have a camera object already set up, let's use that combined with the Rotate Events to have the user rotate the view around the center point. Before adding these events however, it should be noted that the DS map "event_data" will have some different values to the map returned by the Pinch Event. It will still have all the position keys related to the screen, gui and room for both touches, but instead of midpoint values and scale keys, it will contain pivot and angle keys, specifically:

KeyValue Description
"pivotX"
The X position of the rotation pivot point in room space.

"pivotY"
The Y position of the rotation pivot point in room space.

"rawpivotX"
This is the raw window-space X position of the rotational pivot point.

"rawpivotY"
This is the raw window-space Y position of the rotational pivot point.

"guipivotX"
This the gui-space X position of the rotational pivot point.

"guipivotY"
This the gui-space Y position of the rotational pivot point.

"relativeangle"
This is the difference in rotation compared to the last event in this gesture, measured in degrees.

"absoluteangle"
This is the difference in angle compared to where the fingers were when the gesture started, measured in degrees. So, for example, if the fingers have rotated a quarter-circle since the start of the gesture then this value will be 90° or -90°, depending on the direction of rotation.


That said, it's time to add in our Rotate Events, starting with the Global Rotate Start Event, so add that now and give it this code:

obj_Camera_Control: Global Rotate Start Event

/// @description Start Rotation and Set Vars

rotating = true;


As before, we just use this event to set a controller variable to true so that our instance knows it's rotating. We then add a Global Rotating Event with this:

obj_Camera_Control: Global Rotating Event

/// @description Set Camera Angle

var _relangle = event_data[?"relativeangle"];
var _a = camera_get_view_angle(view_camera[0]);
_a += _relangle;

camera_set_view_angle(view_camera[0], _a);
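
As an aside, this is where the view_a variable we initialised in the Create Event could be put to use: if the Global Rotate Start event stored the current camera angle in it (view_a = camera_get_view_angle(view_camera[0]);), the Global Rotating event could use the "absoluteangle" key instead, avoiding any drift from accumulating many small relative values. A sketch of that alternative (the sign of the angle may need flipping depending on the rotation direction you want):

obj_Camera_Control: Global Rotating Event (alternative sketch)

// Assumes view_a was set to camera_get_view_angle(view_camera[0]) in Global Rotate Start
var _absangle = event_data[? "absoluteangle"];
camera_set_view_angle(view_camera[0], view_a + _absangle);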


Finally, we will make use of the Global Rotate End Event, which will be triggered when one or both of the touches is released from the screen. In this case we are simply going to set the control variable to false:

obj_Camera_Control: Global Rotate End Event

/// @description End Rotation

rotating = false;


You can now test the project on your mobile device, and if you touch the screen with two fingers and pinch them in or out, or rotate them around, then the view should zoom in/out and rotate as well.


Summary

That brings us to the end of this tutorial. You should now have a good working knowledge of what the Gesture Event category is for and how it can be used, especially:

  • When the user taps (touches/clicks) on the screen or an instance it will trigger a Gesture Event - the exact event triggered will depend on the gesture used

  • If the tap is a quick touch/click and release, then it triggers a single Tap Event

  • If there are two quick taps, then it triggers a Double Tap Event

  • If there is a touch/click and hold, then a Drag Start Event will be triggered and if the user moves the finger/mouse then a Dragging Event will be triggered for each step of the movement

  • When the user releases their finger/cursor a Drag End Event will be triggered

  • If the difference between the last Dragging Event position and the Drag End Event position is sufficient, a Flick Event will be triggered

  • If two touches are detected and then accompanied by a movement, then a pinch or a rotate event will be generated

  • Pinch events are detected based on a linear movement in/out between the two touches

  • Rotate events are detected based on a rotational movement

  • In all cases the event will create the event_data DS map with information about the gesture


Note that while this tutorial hasn't covered Global Gesture Events, the concept is exactly the same, only with global events they will be triggered from anywhere in the room and not just when interacting with an instance. Why not try and modify this tutorial to detect a Double Tap and spawn a random crate in the room, or detect a Flick and make all the crate objects fly in the direction of the flick, using the Global Gesture Events, for example?
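
As a starting point for the first of those suggestions, a Global Double Tap event in a controller object might look something like this (a hedged sketch: the layer name "Instances" is an assumption, and you would adapt it, along with the object to spawn, to your own project):

Controller Object: Global Double Tap Event (sketch)

/// @description Spawn A Crate On Global Double Tap

var _xx = event_data[? "posX"];
var _yy = event_data[? "posY"];
// "Instances" is an assumed layer name - use whatever instance layer your room has
instance_create_layer(_xx, _yy, "Instances", obj_Crate_Parent);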