Human Factors in Choosing Technologies

I recently saw a thread where someone on a small team wanted to introduce an architecture pattern more capable than what most apps start out with, but received pushback from teammates and was looking for help countering their arguments.

The thread for the most part focused on the technical benefits of the proposed pattern, such as testability, separation of concerns, modularity, etc. These are all valid trade-offs to consider: we’ve done the same at Lyft when figuring out what architectures would suit us best. This also isn’t unique to architectures; it applies to any big technology that influences the overall direction of the codebase: SwiftUI vs. UIKit, RxSwift vs. Combine vs. async/await, etc.

But over time I’ve realized that even with a technically “perfect” solution (a real oxymoron!), there is an entirely different yet equally important factor to consider: who are the people using it—now and in the future? The success of a particular technology is highly dependent on the answer to that question, and it plays a huge role at different stages of the development of that technology.

In the thread I mentioned above there was very little focus on the human aspect of the proposal, so I wanted to list a number of things that I personally ask myself and our teams before considering moving forward:

What is the onboarding/ramp-up cost? While I generally think short-term pain is worth long-term gain, onboarding is often a continuous cost. New people join, you might have interns, non-mobile engineers wanting to make quick contributions, etc. If those people first need a lot of time to ramp up, it’s worth wondering if the benefits are worth it, or how to reduce that burden. For example, while we currently aren't using any SwiftUI at Lyft, we have a layer of syntactic sugar on top of UIKit that enables us to use SwiftUI-like syntax anyway. This makes it easier both for new people who join and already know SwiftUI, and for everybody to move over to SwiftUI if/when we're ready for that.

How easy is it to undo? Or: what is the cost of mistakes? If things don’t pan out the way we want them to, how easily could we switch to something else? The more difficult switching back is, the higher the commitment level and the more we need to be sure it’s worth it. This applies both to the internals of the framework and how the code that uses the framework is structured.

Is it easy to do the right thing? This one is straightforward: if it’s easy to do the right thing, people are more likely to do the right thing and the architecture is more likely to reach its potential. Conversely, if it’s easy to do the wrong thing, the benefits aren’t realized as much. Especially considering my previous point: if bad usage is hard to undo, it may be worth going back to the drawing board.

How much support is available? Popular technologies have a lot of online material available for support in the form of Stack Overflow questions and answers, blog posts, videos, open source code + discussions on GitHub, etc. A home-built solution means this knowledge only lives in-house which increases the bus factor. The same is true for very opinionated third-party libraries like RxSwift or The Composable Architecture. I’m a fan of both, but without fully understanding how they’re implemented you’re at the mercy of the developers and contributors of these libraries for years to come.

How much institutional knowledge does it require? Good architectures hide domain complexity from their consumers and absorb that complexity internally. To some extent that’s fine, but if the internals become so complicated that few people know how they work, there is again a high bus factor. It can absolutely be worth putting some complexity/boilerplate burden onto feature owners to avoid complex abstractions that are hard to change in the future, once they’re used everywhere and the original developers have left.

How much effort does it take to see 100% adoption? Depending on the size of the existing code base, it could take a long time to get 100% adoption. That can be OK if this is the codebase’s first serious architecture, but if it’s version 5 and some parts of the codebase still use versions 1 through 3, it’s probably worth removing those first and reducing lava layers. Even if the change from version 10 to 11 is small and easy, the fragmentation of the codebase inhibits developer productivity. The quicker the migration the better, and if the codebase can safely be migrated through automation, that’s the best-case outcome.

But the most important one of all: do people actually like the architecture? No one likes working in a codebase where everything is a hassle, the underlying concepts never seem to make sense, abstractions are leaky, and you seem to always have to do work for it instead of it working for you. Those codebases diminish the team’s motivation levels and will affect many of the other points from above.

On the flip side, if people like the proposed patterns, they will put in a lot more work to use them correctly, try harder to do the right thing, be more willing to help others, etc. If not, forcing patterns people don’t like could lead to developer unhappiness and attrition. We have more than a few examples of this at Lyft, where a slightly inferior technical solution is overall much more beneficial because the pattern is a bit simpler to use than the alternative.

Going back to why I started writing this in the first place: in my opinion the question “what counterarguments can I use” is not a great first question to ask when it comes to convincing people your solution is the best one out there. Understanding why people are resistant is key. Sure, some people just don’t like change, but papering over any of the concerns above with a technically superior solution is a recipe for a bunch of barely-adopted technologies in a codebase that’s often worse off than if nothing had been done in the first place.

Third-party libraries are no party at all

What better way to end the week than with a hot take?

In my 8 years at Lyft, product managers or engineers have often wanted to add third-party libraries to one of our apps. Sometimes it’s necessary to integrate with a specific vendor (like PayPal), sometimes it’s to avoid having to build something complicated, and sometimes it’s simply to not reinvent the wheel.

While these are generally reasonable considerations, the risks and associated costs of using a third-party library are often overlooked or misunderstood. In some cases the risk is worth it, but to determine that you first need to be able to define that risk accurately. To make that risk assessment more transparent and consistent, we defined a process that lays out what we look at to determine how much risk we incur by integrating a library and shipping it in one or more production apps.

Risks

Most larger organizations, including ours, have some form of code review as part of their development practices. For those teams, adding a third-party library is equivalent to adding a bunch of unreviewed code written by someone who doesn't work on the team, subverting the standards upheld during code review and shipping code of unknown quality. This introduces risk in how the app runs, long-term development of the app, and, for larger teams, overall business risk.

Runtime risks

Library code generally has the same level of access to system resources as the app's own code, but libraries don't necessarily apply the best practices the team has put in place for managing those resources. They have access to the disk, network, memory, CPU, etc. without any restrictions or limitations, so they can (over)write files to disk, hog memory or CPU with unoptimized code, cause deadlocks or main-thread delays, download (and upload!) tons of data, etc. Worse, they can cause crashes or even crash loops. Twice.

Many of these situations aren't discovered until the app is already available to customers, at which point fixing it requires creating a new build and going through the review process which is often time intensive and costly. The risk can be somewhat mitigated by invoking the library behind a feature flag, but that isn't a silver bullet either (see below).

Development risks

To quote a coworker: "every line of code is a liability", and this is even more true for code you didn't write yourself. Libraries can be slow to adopt new technologies or APIs, holding the codebase back, or too fast, forcing a deployment target that's higher than you want. When Apple and Google introduce new OS versions each year, they often require developers to update their code based on changes in their SDKs, and library developers have to follow suit. This requires coordinated efforts, alignment in priorities, and the ability to get the work done in a timely manner.

As the mobile platforms are ever-changing this becomes a continuous, ongoing risk, compounded by the problem that teams and organizations aren't static either. When a library that was integrated by a team that no longer exists needs to be updated, it takes a long time to figure out who should do so. It has proven extremely rare and extremely difficult to remove a library once it's there, so we treat it as a long-term maintenance cost.

Business risks

As I mentioned above, modern OSes make no distinction between app code and library code, so in addition to system resources they also have access to user information. As app developers we're responsible for using that information properly, and any libraries are part of that responsibility.

If the user grants location access to the Lyft app, any third-party library automatically gets access too. They could then upload that data to their own servers, competitors' servers, or who knows where else. This is even more problematic when a library needs a new permission we didn't already have.

Similarly, a system is only as secure as its weakest link, but if you include unreviewed, unknown code you have no idea how secure it really is. Your well-designed secure coding practices could all be undone by one misbehaving library. The same goes for any policies Apple and Google put in place, like "you are not allowed to fingerprint the user".

Mitigating the risk

When evaluating a library for production usage, we ask a few questions to understand the need for the library in the first place.

Can we build this functionality in-house?

In some cases we were able to simply copy/paste the parts of a library we really needed. In more complex scenarios, where a library talked to a custom backend, we reverse-engineered that API and built a mini-SDK ourselves (again, only the parts we needed). This is the preferred option 90% of the time, but it isn't always feasible when integrating with very specific vendors or requirements.

How many customers benefit from this library?

In one scenario, we were considering adding a very risky library (according to the criteria below) intended for a tiny subset of users while still exposing all of our users to the library. We ran the risk of something going wrong for all our customers in all our markets for a small group of customers we thought would benefit from it.

What transitive dependencies does this library have?

We'll want to evaluate the criteria below for all dependencies of the library as well.

What are the exit criteria?

If integration is successful, is there a path to moving it in-house? If it isn't successful, is there a path to removal?

Evaluation criteria

If at this point the team still wants to integrate the library, we ask them to “score” the library according to a standard set of criteria. The list below is not comprehensive but should give a good indication of the things we look at.

Blocking criteria

These criteria will prevent us from including the library altogether, either technically or by company policy, and need to be resolved before we can move forward:

Major concerns

We assign point values to all of these criteria (and a few others) and ask engineers to tally them up for the library they want to include. While low scores aren't hard-rejected by default, we often ask for more justification to move forward.

Final notes

While this process may seem very strict and the potential risk hypothetical in many cases, we have actual, real examples of every scenario I described in this blog post. Having the evaluations written down and publicly available also helps in conveying relative risk to people unfamiliar with how mobile platforms work and demonstrating we're not arbitrarily evaluating the risks.

Also, I don't want to claim every third-party library is inherently bad. We actually use quite a few at Lyft: RxSwift and RxJava, Bugsnag's SDK, Google Maps, Tensorflow, and a few smaller ones for very specific use cases. But all of these are either well-vetted, or we've decided the risk is worth the benefit while actually having a clear idea of what those risks and benefits really are.

Lastly, as a developer pro-tip: always create your own abstractions on top of the library's APIs and never call their APIs directly. This makes it much easier to swap (or remove) underlying libraries in the future, again mitigating some risk associated with long-term development.
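
For example (a minimal sketch with hypothetical names, not an actual Lyft integration), an analytics SDK could be hidden behind an internal protocol so that no call site ever mentions the vendor:

// The rest of the codebase depends only on this internal protocol.
protocol AnalyticsTracking {
    func track(event: String, properties: [String: String])
}

// The single adapter that imports the (hypothetical) vendor SDK.
// import VendorAnalytics
final class VendorAnalyticsTracker: AnalyticsTracking {
    func track(event: String, properties: [String: String]) {
        // Forward to the vendor SDK here, e.g.:
        // VendorAnalytics.logEvent(event, parameters: properties)
    }
}

// Call sites never mention the vendor, so swapping or removing the library
// later only touches the adapter above.
let analytics: AnalyticsTracking = VendorAnalyticsTracker()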

iOS Architecture at Lyft

June 30, 2014 was my first day at Lyft as the first iOS hire on the ~3 person team. The app was written in Objective-C, and the architecture was a 5000-line nested switch statement.

Since then, the team has grown to about 70 people and the codebase to 1.5M lines of code. This required some major changes to how we architect our code, and since it's been a while since we've given an update like this, now seems as good a time as any.

Requirements

The effort to overhaul and modernize the architecture began around mid-2017. We had started to reach the limits of the patterns we established in the 2015 rewrite of the app, and it was clear the codebase and the team would continue to grow, probably more rapidly than they had in the past.

The primary problems that the lack of a more mature architecture presented and that we wanted to solve were:

No single solution was going to solve all of this, but over the course of a few years we developed a number of processes and technical solutions to reduce these problems.

Modules

First, to provide better feature separation, we introduced modules. Every feature got its own module, with its own test suite, that could be developed in isolation from other modules. This forced us to think more about public APIs and hiding implementation details behind them. Compile times improved, and making changes required much less collaboration with other teams.

We also introduced an ownership model that ensured each module has at least one team that's responsible for that module's tech debt, documentation, etc.

Module types

After fully modularizing the app and ending up with 700 modules' worth of code, we took this a step further and introduced a number of module types that every module would follow.

Breaking modules down this way enabled us to implement dependency validators: we can validate that certain modules can't depend on others. For example, a logic module can't depend on a UI module, and a Service module can't import UIKit.
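
As a rough sketch (assumed, not our actual tooling), such a rule check can be as simple as walking the dependency list of every module and flagging forbidden combinations:

// Assumed sketch of a dependency rule check, not the real validator.
enum ModuleType { case ui, logic, service }

struct Module {
    let name: String
    let type: ModuleType
    let dependencies: [String]
}

// Example rule: logic modules may not depend on UI modules.
func violations(in modules: [Module]) -> [String] {
    let modulesByName = Dictionary(uniqueKeysWithValues: modules.map { ($0.name, $0) })
    return modules.flatMap { module in
        module.dependencies.compactMap { dependencyName -> String? in
            guard let dependency = modulesByName[dependencyName] else { return nil }
            if module.type == .logic && dependency.type == .ui {
                return "\(module.name) (logic) may not depend on \(dependency.name) (UI)"
            }
            return nil
        }
    }
}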

This module structure also prevents complicated circular dependencies, e.g. a Coupons module depending on Payments and vice versa. Instead, the Payments module can now import CouponsUI without needing to import the full Coupons feature. It's led to micromodules in some areas, but we've generally been able to provide good tooling to make this easier to deal with.

All in all we now have almost 2000 modules total for all Lyft apps.

Dependency Injection

Module types solved many of our dependency tree problems at the module level, but we also needed something more scalable than singletons at the code level.

For that we've built a lightweight dependency injection framework which we detailed in a SLUG talk. It resembles a service locator pattern, with a basic dictionary mapping protocols to instantiations:

let networkCommunicator: NetworkCommunicating =
    bind(NetworkCommunicating.self, to: { NetworkCommunicator() })

The implementation of bind() doesn't just immediately return a NetworkCommunicator: in a testing environment it requires the object to be mocked instead:

import Foundation

var productionInstantiators: [ObjectIdentifier: () -> Any] = [:]
var mockedInstantiators: [ObjectIdentifier: () -> Any] = [:]

func bind<T>(_ type: T.Type, to instantiator: () -> T) -> T {
    let identifier = ObjectIdentifier(T.self)

    if NSClassFromString("XCTestCase") == nil {
        // Production: use a registered instantiator if there is one,
        // otherwise fall back to the default provided at the call site.
        return (productionInstantiators[identifier]?() as? T) ?? instantiator()
    } else {
        // Tests: a mock must have been registered, or this crashes.
        return mockedInstantiators[identifier]!() as! T
    }
}

In tests, the mock is required or the test will crash:

final class NetworkingTests: XCTestCase {
    private var communicator = MockNetworkCommunicator()

    func testNetworkCommunications() {
        mock(NetworkCommunicating.self) { self.communicator }

        // ...
    }
}

This brings two benefits:

  1. It forces developers to mock objects in tests, avoiding production side effects like making network requests
  2. It provides a gradual adoption path, rather than requiring the entire app to be updated at once to some more advanced system

Although this framework has some of the same problems as other Service Locator implementations, it works well enough for us and the limitations are generally acceptable.
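
The mock() helper used in the test above isn't shown here; as an assumed sketch (not the actual implementation), it would just register an instantiator in the lookup table that bind() consults while tests are running:

// Assumed sketch: register a mock instantiator for a protocol so that
// bind() returns it instead of the production object during tests.
func mock<T>(_ type: T.Type, _ instantiator: @escaping () -> T) {
    mockedInstantiators[ObjectIdentifier(T.self)] = instantiator
}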

Flows

Flows, inspired by Square's Workflow, are the backbone of all Lyft apps. Flows define the navigation rules around a number of related screens the user can navigate to. The term flow was already common in everyday communication ("after finishing the in-ride flow we present the user with the payments flow"), so the pattern mapped nicely to terminology people already used.

Flows rely on state-driven routers that can either show a screen or route to other routers that are driven by different state. This makes them easy to compose, which supports the goal of feature isolation.

At the core of flows lies the Routable protocol:

protocol Routable {
    var viewController: UIViewController { get }
}

It just has to be able to produce a view controller. The (simplified) router part of a flow is implemented like this:

final class Router<State> {
    private var routes: [(condition: (State) -> Bool, routable: Routable?)] = []

    func addRoute(_ condition: @escaping (State) -> Bool, _ routable: Routable?) {
        self.routes.append((condition, routable))
    }

    func route(for state: State) -> Routable? {
        return self.routes.first { $0.condition(state) }?.routable
    }
}

In other words: it takes a bunch of rules where if the condition is true (accepting the flow's state as input) it provides a Routable. Each flow defines its own possible routes and matches those to a Routable:

struct OnboardingState {
    let phoneNumber: String?
    let verificationCode: String?
    let email: String?
}

final class OnboardingFlow {
    private let router = Router<OnboardingState>()
    private let state = OnboardingState(phoneNumber: nil, verificationCode: nil, email: nil)

    init() {
        self.router.addRoute({ $0.phoneNumber == nil }, EnterPhoneNumberViewController())
        self.router.addRoute({ $0.verificationCode == nil }, VerifyPhoneViewController())
        self.router.addRoute({ $0.email == nil }, EnterEmailViewController())

        // If all login details are provided, return `nil` to indicate this flow has
        // no (other) Routable to provide and should be exited
        self.router.addRoute({ _ in true }, nil)
    }

    func currentRoutable() -> Routable? {
        return self.router.route(for: self.state)
    }
}

We then compose flows by adding Routable conformance to each flow, having it provide a view controller that adds its current Routable's view controller as a child:

extension OnboardingFlow: Routable {
    var viewController: UIViewController {
        let parent = UIViewController()
        if let child = self.currentRoutable()?.viewController {
            parent.addChild(child)
            parent.view.addSubview(child.view)
            child.didMove(toParent: parent)
        }
        return parent
    }
}

Now a flow can also route to another flow by adding an entry to its router:

self.router.addRoute({ $0.needsOnboarding }, OnboardingFlow())

This pattern could let you build entire trees of flows:

Simplified flow diagram

When we first conceptualized flows we imagined having a tree of about 20 flows total; today we have more than 80. Flows have become the "unit of development" of our apps: developers no longer need to care about the full application or a single module, but can build an ad-hoc app with just the flow they're working on.

Plugins

Although flows simplify state management and navigation, the logic of the individual screens within a flow could still be very intertwined. To mitigate that problem, we've introduced plugins. Plugins allow for attaching functionality to a flow without the flow even knowing that the plugin exists.

For example, to add more screens to the OnboardingFlow from above, we can expose a method on it that would call into its router:

extension OnboardingFlow {
    public func addRoutingPlugin(
        routable: Routable?,
        _ condition: @escaping (OnboardingState) -> Bool)
    {
        self.router.addRoute(condition, routable)
    }
}

Since this method is public, any plugin module that imports the flow can add a new screen. The flow doesn't know anything about the plugin, so the entire dependency tree is inverted with plugins. Instead of a flow depending on all the functionality of all of its plugins, it provides a simple interface that lets plugins extend that functionality in isolation by having them depend on the flow.
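
As a hedged sketch (the plugin and screen here are hypothetical, not real Lyft features), a separate plugin module could attach an extra onboarding screen like this:

// Hypothetical plugin module: it depends on OnboardingFlow, but the flow
// knows nothing about it. ReferralCodeViewController is assumed to conform
// to Routable like the other screens.
func installReferralPlugin(in flow: OnboardingFlow) {
    flow.addRoutingPlugin(
        routable: ReferralCodeViewController(),
        { state in state.email != nil })
}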

Simplified plugin setup

Since all Lyft apps operate on a tree of flows, the overall dependency graph changes from a tree shape to a "bubble" shape:

Bubble dependency graph

This setup provides feature isolation at the compiler level which makes it much harder to accidentally intertwine features. Each plugin also has its own feature flag, making it very easy to disable a feature if necessary.

In addition to routing plugins, we also provide interfaces to add additional views to any view controller, deep link plugins to deep link to any arbitrary part of the app, list plugins to build lists with custom content, and a few others that are very specific to Lyft's use cases.

Unidirectional Data Flow

More recently we introduced a redux-like unidirectional data flow (UDF) for screens and views within flows. Flows were optimized for state management across a collection of screens; UDF brings the same benefits to the individual screens themselves.

A typical redux implementation has state flowing into the UI and actions that modify state coming out of the UI. Influenced by The Composable Architecture, our implementation of redux actions also includes executing side effects to interact with the environment (network, disk, notifications, etc.).
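
Our actual Store type isn't shown in this post, but a heavily simplified sketch of the general shape (assumed names, and without the side-effect handling mentioned above) could look something like this:

// Assumed, simplified sketch of a redux-like store: state flows out to the
// UI via observers, actions flow back in and are reduced into new state.
final class MiniStore<State, Action> {
    private(set) var state: State
    private let reduce: (inout State, Action) -> Void
    private var observers: [(State) -> Void] = []

    init(initialState: State, reduce: @escaping (inout State, Action) -> Void) {
        self.state = initialState
        self.reduce = reduce
    }

    func send(_ action: Action) {
        self.reduce(&self.state, action)
        self.observers.forEach { $0(self.state) }
    }

    func observe(_ observer: @escaping (State) -> Void) {
        self.observers.append(observer)
        observer(self.state)
    }
}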

Declarative UI

In 2018, we began building out our Design System. At the time, it was a layer on top of UIKit, often with a slightly modernized API, that would provide UI elements with common defaults like fonts, colors, icons, dimensions, etc.

When Apple introduced SwiftUI in mid-2019, it required a deployment target of iOS 13. At the time, we still supported iOS 10 and even today we still support iOS 12 so we still can't use it.

However, we did write an internal library called DeclarativeUI, which provides declarative APIs similar to what SwiftUI brings, but built on top of the Design System we had already created. Even better, we've built binding conveniences into both DeclarativeUI and our UDF Store types to make them work together seamlessly:

import DeclarativeUI
import Unidirectional

final class QuestionView: DeclarativeUI.View {
    private let viewStore: Store<QuestionState>

    init(store: Store<QuestionState>) {
        self.viewStore = store
    }

    var body: DeclarativeUI.View {
        return VStackView(spacing: .three) {
            HeaderView(store: self.viewStore)
            Label(text: viewStore.bind(\.header))
                .textStyle(.titleF1)
                .textAlignment(.center)
                .lineLimit(nil)
                .accessibility(postScreenChanged: viewStore.bind(\.header))
            VStackView(viewStore.bind(\.choices), spacing: .two) { choice in
                TwoChoiceButton(choice: choice).onEvent(
                    .touchUpInside,
                    action: viewStore.send(.choiceSelected(index: choice.index)))
            }
            .hidden(viewStore.bind(\.choices.isEmpty))

            if viewStore.currentState.model.usesButtonToIncrementQuestion {
                NextQuestionButton(store: self.viewStore)
                    .hidden(viewStore.bind(\.choices.isEmpty))
            }
        }
    }
}

Putting it all together

All these technologies combined make for a completely different developer experience now than five years ago. Doing the right thing is easy, doing the wrong thing is difficult. Features are isolated from each other, and even feature components are separated from each other in different modules.

Testing has never been easier: unit tests for modules with pure business logic, snapshot tests for UI modules, and for integration tests it takes little effort to spin up a standalone app with just the flow you're interested in.

State is easy to track with debug conveniences built into the architecture, building UI is more enjoyable than it was with plain UIKit, and adding a feature from one app into another is often just a matter of attaching the plugin to a second flow, without having to untangle it from all the other features on that screen.

It's amazing to look back at where the codebase started some 6 years ago, and where it is now. Who knows where it will be in another 6 years!

Note: If you're interested in hearing more, I also talked about many of these technologies on the Lyft Mobile Podcast!

Re-binding self: the debugger's break(ing) point

Update 07-29-2019: The bug described below is fixed in Xcode 11 so this blog post has become irrelevant. I'm leaving it up for historical purposes.

For the Objective-C veterans in the audience, the strong-self-weak-self dance is a practice mastered early on and one that is used very frequently. There are a lot of different incantations, but the most basic one goes something like this:

__weak typeof(self) weakSelf = self;
dispatch_async(dispatch_get_main_queue(), ^{
    [weakSelf doSomething];
});

Then, if you needed a strong reference to self again inside the block, you'd change it to this:

__weak typeof(self) weakSelf = self;
dispatch_async(dispatch_get_main_queue(), ^{
    __strong typeof(self) strongSelf = weakSelf;
    [strongSelf.someOtherObject doSomethingWith:strongSelf];
});

Fortunately, this got much easier on day 1 of Swift with the [weak self] capture list:

DispatchQueue.main.async { [weak self] in
    if let strongSelf = self {
        strongSelf.someOtherObject.doSomething(with: strongSelf)
    }
}

self is now weak inside the closure, making it an optional. Unwrapping it into strongSelf makes it a non-optional while still avoiding a retain cycle. It doesn't feel very Swifty, but it's not terrible.

More recently, it's become known that Swift supports re-binding self if you wrap it in backticks. That makes for an arguably much nicer syntax:

DispatchQueue.main.async { [weak self] in
    guard let `self` = self else { return }
    self.someOtherObject.doSomething(with: self)
}

This was long considered, and confirmed to be, a hack that worked due to a bug in the compiler, but since it worked and there weren't plans to remove it, people (including us at Lyft) started treating it as a feature.

However, there is one big caveat: the debugger is entirely hosed for anything you do in that closure. Ever seen an error like this in your Xcode console?

error: warning: <EXPR>:12:9: warning: initialization of variable '$__lldb_error_result' was never used; consider replacing with assignment to '_' or removing it
    var $__lldb_error_result = __lldb_tmp_error
        ~~~~^~~~~~~~~~~~~~~~~~~~

That's because self was re-bound. This is easy to reproduce: create a new Xcode project and add the following snippet to viewDidLoad():

DispatchQueue.main.async { [weak self] in
    guard let `self` = self else { return }

    let description = self.description
    print(description) // set a breakpoint here
}

When the breakpoint hits, execute (lldb) po description and you'll see the error from above. Note that you're not even using self - merely re-binding it makes the debugger entirely useless inside that scope.

People with way more knowledge of LLDB than I do can explain this in more detail (and have), but the gist is that the debugger doesn't like self's type changing. At the beginning of the closure scope, the debugging context assumes that self's type is Optional, but it is then re-bound to a non-optional, which the debugger doesn't know how to handle. It's actually pretty surprising the compiler supports changing a variable's type at all.

Because of this problem, at Lyft we have decided to eliminate this pattern entirely in our codebases, and instead re-bind self to a variable named this.
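
In practice that looks like this:

DispatchQueue.main.async { [weak self] in
    guard let this = self else { return }
    this.someOtherObject.doSomething(with: this)
}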

If you do continue to use this pattern, note that in a discussion on the Swift forums many people agreed that re-binding self should be supported by the language without the need for backticks. The pull request implementing that was merged shortly after, and with the release of Swift 4.2 in the fall you'll be able to use guard let self = self else { return } (at the cost of losing debugging capabilities!).

Using Interface Builder at Lyft

Last week people realized that Xcode 8.3 by default uses storyboards in new projects, without a checkbox to turn this off. This of course sparked the Interface Builder vs. programmatic UI discussion again, so I wanted to give some insight into our experience using Interface Builder to build the Lyft app. This is not intended as hard "you should also use Interface Builder" advice, but rather to show that IB can work at a larger scale.

First, some stats about the Lyft app:

With the rewrite of our app we moved to using IB for about 95% of our UI.

The #1 complaint about using Interface Builder for a project with more than 1 developer is that it's impossible to resolve merge conflicts. We never have this problem. Everybody on the team can attest that they have never run into major conflicts they couldn't reasonably resolve.

With that concern out of the way, what about some of the other common criticisms Interface Builder regularly gets?

Improving the workflow

Out of the box, IB has a number of shortcomings that can make working with it more painful than it needs to be. For example, referencing IB objects from code can still only be done with string identifiers. There is also no easy way to embed custom views (designed in IB) in other custom views.

Over time we have improved the workflow for our developers to mitigate some of these shortcomings, either by writing some tools or by writing a little bit of code that can be used project-wide.

storyboarder script

To solve the issue of stringly-typed view controller identifiers, we wrote a script that, just before compiling the app, generates a struct with static properties that exposes all view controllers from the app in a strongly-typed manner. This means that now we can instantiate a view controller in code like this:

let viewController = Onboarding.SignUp.instantiate()

Not only is viewController now guaranteed to be there at runtime (if something is wrong in the setup of IB the code won't even compile), but it's also recognized as a SignUpViewController and not a generic UIViewController.
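
The generated code looks roughly like this (an assumed sketch; the actual output of our script differs in the details):

// Assumed sketch of generated output: one type per storyboard, one nested
// type per scene, with the identifier baked in by the generator.
import UIKit

enum Onboarding {
    static let storyboard = UIStoryboard(name: "Onboarding", bundle: nil)

    enum SignUp {
        static func instantiate() -> SignUpViewController {
            // Safe because the generator derived the identifier and the
            // class from the storyboard itself.
            return Onboarding.storyboard.instantiateViewController(
                withIdentifier: "SignUp") as! SignUpViewController
        }
    }
}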

Strongly-typed segues

All our view controllers have a base view controller named ViewController. This base controller implements prepare(for:sender:) like this:

open override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    guard let identifier = segue.identifier else {
        return
    }

    let segueName = identifier.firstLetterUpperString()
    let selector = Selector("prepareFor\(segueName):sender:")
    if self.responds(to: selector) {
        self.perform(selector, with: segue.destination, with: sender)
    }
}

This means that a view controller that has a segue to TermsOfServiceViewController can now do this:

@objc
private func prepareForTermsOfService(_ viewController: TermsOfServiceViewController, sender: Any?) {
    viewController.onTermsAccepted = { [weak self] in self?.proceed() }
}

We no longer have to implement prepareForSegue and then switch on the segue's identifier or destination controller, but we can implement a separate method for every segue from this view controller instead which makes the code much more readable.

NibView

We wrote a NibView class to make it more convenient to embed custom views in other views from IB. We marked this class with @IBDesignable so that it knows to render itself in IB. All we have to do is drag out a regular UIView from the object library and change its class. If there is a XIB with the same name as the class, NibView will automatically instantiate it and render it in the canvas at design time and on screen at runtime.

Every standalone view we design in IB (which effectively means every view in our app) inherits from NibView so we can have an "unlimited" number of nested views show up and see the final result.
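
A minimal sketch of the core idea (assumed; not our actual implementation) might look like this:

import UIKit

// Assumed sketch: load a XIB named after the concrete subclass and embed
// its root view, both at runtime and in the IB canvas (via @IBDesignable).
@IBDesignable
open class NibView: UIView {
    public override init(frame: CGRect) {
        super.init(frame: frame)
        self.loadContentsFromNib()
    }

    public required init?(coder: NSCoder) {
        super.init(coder: coder)
        self.loadContentsFromNib()
    }

    private func loadContentsFromNib() {
        let name = String(describing: type(of: self))
        let nib = UINib(nibName: name, bundle: Bundle(for: type(of: self)))
        guard let content = nib.instantiate(withOwner: self, options: nil).first as? UIView else {
            return
        }

        content.frame = self.bounds
        content.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        self.addSubview(content)
    }
}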

Basic @IBDesignables

Since a lot of our views have corner radii and borders, we have created this UIView extension:

public extension UIView {
    @IBInspectable public var cornerRadius: CGFloat {
        get { return self.layer.cornerRadius }
        set { self.layer.cornerRadius = newValue }
    }

    @IBInspectable public var borderWidth: CGFloat {
        get { return self.layer.borderWidth }
        set { self.layer.borderWidth = newValue }
    }

    @IBInspectable public var borderColor: UIColor {
        get { return UIColor(cgColor: self.layer.borderColor!) }
        set { self.layer.borderColor = newValue.cgColor }
    }
}

This lets us easily set these properties on any view (including the ones from UIKit) from Interface Builder.

Linter

We wrote a linter to make sure views are not misplaced, have accessibility labels, trait variations are disabled (since we only officially support portrait mode on iPhone), etc.

ibunfuck

A bug impacting developers that use Interface Builder on both Retina and non-Retina screens (which at Lyft is every developer) has caused us enough grief to write ibunfuck - a tool to remove unwanted changes from IB files.

Color palette

We created a custom color palette with the commonly used colors in our app so it's easy to select these colors when building a new UI. The color names in the palette follow the same names designers use when they give us new designs, so it's easy to refer to and use without having to copy RGB or hex values.

Our approach

In addition to these tools and project-level improvements, we have a number of "rules" around our use of IB to keep things sane:

Of course, even with these improvements everything is not peaches and cream. There are definitely still problems. New versions of Xcode often change the XML representation, which leads to noisy diffs. Some properties simply can't be set in IB, meaning we're forced to break our "do everything in IB" rule. Interface Builder has bugs we can't always work around.

However, with our improved infrastructure and the points from above, we are happy with how IB works for us. We don't have to write tons of Auto Layout code (which would be incredibly painful due to the nature of our UIs), we get a visual representation of how a view looks without having to run the app after every minor change, and maybe one day we can get our designers to make changes to our UI without developers' help.