Event Delivery on iOS: Part 1

Brandon Alexander
Published in BPXL Craft
Sep 15, 2016


If your iOS application handles taps, swipes, pans, or any other external interaction, it is using events behind the scenes. The path these events take is a well-defined process, and we’re going to look at how it works. Understanding how this process works is handy when debugging some tricky issues revolving around text input or remote control events. You can even use this knowledge to have custom data flow through your application.

This is the first article in a series about event delivery. It will cover touch handling, how a simple touch is turned into an event that gets passed through your application, and how components are given a chance to handle that event.

Touch Handling

Touch events are the primary form of events handled by an iOS application. Many of the details of how touches are handled are hidden by the APIs we use on a daily basis. Still, understanding how these events are passed around an application can inform how you structure it. You can even use this infrastructure to pass custom events.

Hit Testing

The first thing to cover when talking about event delivery is how the system handles touch events and the path these events take through your application. The events start as soon as the user taps the screen. Before the appropriate component can handle a touch, the system needs to determine where the touch occurred and who gets the first crack at responding to the touch.

This is where hit testing comes into play.

The methods involved in the hit testing process are hitTest:withEvent: and pointInside:withEvent:. hitTest:withEvent: uses pointInside:withEvent: to determine whether the point being tested falls within the view’s bounds. If it does not, hitTest:withEvent: returns nil and that entire branch of the view hierarchy is skipped.

If the tested point is within the bounds, the view then calls pointInside:withEvent: on each subview, checking them front to back so the topmost view containing the point wins. For a subview that returns YES from pointInside:withEvent:, hitTest:withEvent: is called on it. The final result of the original hitTest:withEvent: call is either the result from one of those subviews or self, in the case where all of its subviews return nil.
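
In Swift these methods appear as hitTest(_:with:) and point(inside:with:). The real implementation is private, but a sketch of what the default behavior roughly looks like (including the checks that make hidden or non-interactive views untouchable) might be:

import UIKit

// A rough approximation of UIView's default hitTest:withEvent: behavior.
class HitTestView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        // Views that can't receive touches are pruned immediately.
        guard isUserInteractionEnabled, !isHidden, alpha > 0.01 else {
            return nil
        }
        // pointInside:withEvent: — is the point within our bounds?
        guard self.point(inside: point, with: event) else {
            return nil
        }
        // Subviews are checked front to back; the topmost hit wins.
        for subview in subviews.reversed() {
            let converted = subview.convert(point, from: self)
            if let hit = subview.hitTest(converted, with: event) {
                return hit
            }
        }
        // No subview claimed the point, so the hit-test view is self.
        return self
    }
}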

Take the following view hierarchy: View A is the root view and contains View B and View C, and View C contains View D and View E.

Let’s assume the user tapped in View E. The process starts at View A with hitTest:withEvent:. pointInside:withEvent: returns YES for View A, so View A calls pointInside:withEvent: on View B and View C. View B returns NO. View C returns YES, so hitTest:withEvent: is called on it. View C repeats the process with View D and View E. View D returns NO from pointInside:withEvent:. View E returns YES and, because it has no subviews, returns itself from hitTest:withEvent:.

Assuming View D is a subview of View C, what happens if the user taps the part of View D that extends beyond the bounds of View C? (This can happen when clipsToBounds is NO.) View A starts the process above. Both View B and View C return NO for pointInside:withEvent:, because the point lies outside their bounds, so View A ultimately receives the touch even though View D is visible at that point.
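
If you want View D to be tappable even where it hangs outside View C, a common remedy is to override hitTest:withEvent: on the parent so it consults its subviews regardless of its own bounds. A sketch, with OverflowTouchView as a hypothetical stand-in for View C:

import UIKit

// Hypothetical container standing in for View C: lets subviews that
// extend past its bounds (like View D) still receive touches.
class OverflowTouchView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        // First try the normal behavior.
        if let hit = super.hitTest(point, with: event) {
            return hit
        }
        guard isUserInteractionEnabled, !isHidden, alpha > 0.01 else {
            return nil
        }
        // The point was outside our bounds; ask subviews directly.
        for subview in subviews.reversed() {
            let converted = subview.convert(point, from: self)
            if let hit = subview.hitTest(converted, with: event) {
                return hit
            }
        }
        return nil
    }
}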

We now have the view from hitTest:withEvent:. This view is referred to as the “hit-test” view. It is now associated with the touch and will be given the first opportunity after any gestures (more on this later) to respond to touch events while the touch is active.

What happens if the hit-test view doesn’t implement the touch-handling methods for a given touch? It depends. If the view is managed by a view controller, the view controller is given a chance to respond. If that view controller doesn’t respond, the hit-test view’s superview is given a chance to respond. This process repeats all the way up the view hierarchy, following a path known as the “Responder Chain.”
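
As a concrete illustration, a view can observe a touch and still hand it up this path by calling super, since UIResponder’s default implementation forwards the event to the next responder. A minimal sketch:

import UIKit

class PassthroughLoggingView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("Touch began in \(type(of: self))")
        // UIResponder's default touchesBegan:withEvent: forwards the event
        // to the next responder, so calling super keeps the chain intact.
        super.touchesBegan(touches, with: event)
    }
}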

The Responder Chain

The Responder Chain is an implementation of the chain-of-responsibility design pattern. Each participant in the Responder Chain inherits from UIResponder. UIResponder contains the discrete methods for handling various types of events. Beyond touch events, UIResponder declares methods for handling input views, motion events, press events, and remote control events.
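
For instance, responding to a shake only requires overriding the relevant motion-event method. A minimal sketch, assuming a view controller that makes itself first responder so motion events reach it:

import UIKit

class ShakeViewController: UIViewController {
    // Motion events are delivered to the first responder.
    override var canBecomeFirstResponder: Bool { true }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        becomeFirstResponder()
    }

    override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
        if motion == .motionShake {
            print("Device was shaken")
        }
    }
}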

For many of these events, the firstResponder is important. The firstResponder is the object that is given the first opportunity to handle an event. If the first responder doesn’t handle the event, its nextResponder is given a chance to handle the event, and the process is repeated until nextResponder on the current object is nil. The last object in the chain is typically the application delegate.
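
You can see the chain for yourself by walking the next property (nextResponder in Objective-C) from any responder. A small diagnostic sketch, with a helper name of our own invention:

import UIKit

// Prints each responder from `responder` to the end of the chain.
// Typical output: a view, its view controller, UIWindow, UIApplication,
// and finally the application delegate.
func dumpResponderChain(from responder: UIResponder) {
    var current: UIResponder? = responder
    while let link = current {
        print(type(of: link))
        current = link.next
    }
}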

Gestures

Before gesture recognizers existed on iOS, the touch-handling process above was how an application detected and handled gestures. Once UIGestureRecognizer was introduced, the process changed slightly to let gesture recognizers observe touches and perform actions based on the gestures they detect. We’re not going to look at how to use gesture recognizers. Instead, we’ll look at how they fit into the touch event delivery system.

Gestures always get the first opportunity to handle a touch event: by default, the touch is delivered to the gesture recognizer first and then to the view. The main difference is that when the gesture’s state reaches “recognized,” the touches on the view are cancelled. You can control how these touch events are (or aren’t) delivered to the view using three properties on UIGestureRecognizer: cancelsTouchesInView, delaysTouchesBegan, and delaysTouchesEnded. Use these properties wisely, as they can create the illusion of an unresponsive view.
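
As an illustration, here is how those properties might be configured on a tap recognizer so the underlying view keeps receiving its touch events; the defaults are noted in comments, and handleTap(_:) is a name of our choosing:

import UIKit

class TapViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        // With cancelsTouchesInView set to false, the view keeps receiving
        // touchesMoved/touchesEnded even after the tap is recognized.
        tap.cancelsTouchesInView = false   // default is true
        tap.delaysTouchesBegan = false     // default is false
        tap.delaysTouchesEnded = true      // default is true
        view.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        print("Tap recognized")
    }
}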

Wrapping Up

In this first part, we laid the groundwork for how touch events travel from the window down to the view that was touched, and back up a similar path via the Responder Chain. We also covered how gesture recognizers fit into this touch system. In Part 2, we’ll talk about the other event types that are supported and where the Responder Chain fits in.

For more insights on design and development, subscribe to BPXL Craft and follow Black Pixel on Twitter.
