
The Tuple Pattern

Posted by Jonathan Lehr

In Swift, a tuple is a parenthesized list of two or more elements of any type. For example, (0.5, "foo") is a tuple where the first element is a Double, and the second is a String. One of the key benefits tuples provide is that they make it possible to write a method or function that returns more than one value.

A common use case for this is enumerating a dictionary in a for-in loop. The loop first obtains an iterator object from the dictionary, and then calls the iterator's next() method each time through the loop to obtain the current key and value as a tuple. Here's an example:

let dogInfo: [String: Any] = ["name": "Fido", "breed": "Lab", "age": 5]

for (key, value) in dogInfo {
    print("\(key): \(value)")
}
// Prints
// name: Fido
// breed: Lab
// age: 5

Understanding the Tuple Pattern

Tuples can be used in several different contexts, one being as an expression value. So, for example, you can use a tuple as the initial value of a constant:

let temperature = (72, "Fahrenheit")
print(temperature)   // Prints "(72, "Fahrenheit")"

You can then access individual elements by position:

print(temperature.0) // Prints "72"
print(temperature.1) // Prints "Fahrenheit"

You can also use a tuple in a type declaration:

var temperature: (Double, String)

Intuitively then, you might think that a tuple is a type, but it's not --- it's a pattern. So, what's a pattern? Well, here's how pattern is defined in The Swift Programming Language (3.1 Edition):

A pattern represents the structure of a single value or a composite value. For example, the structure of a tuple (1, 2) is a comma-separated list of two elements. Because patterns represent the structure of a value rather than any one particular value, you can match them with a variety of values. For instance, the pattern (x, y) matches the tuple (1, 2) and any other two-element tuple. In addition to matching a pattern with a value, you can extract part or all of a composite value and bind each part to a constant or variable name.

Okay, that's a mouthful, so perhaps walking through some examples can add a bit of clarity. To begin with, a tuple doesn't define an object with properties or behaviors. Instead, it describes the internal structure of a single, compound value. For example, the following line binds a variable, item, to a tuple of data types:

var item: (Double, Int)

So item's data type is an ordered list of two types. This means that when we assign a value to item, it must be a grouping of two individual values, the first a Double, and the second an Int:

item = (19.99, 2)

Binding Individual Elements

You can use the tuple pattern to define variables and constants. The following line defines two constants, x and y, whose types are both inferred to be Int:

let (x, y) = (10, 20)

which is conceptually similar to:

let x = 10, y = 20

However, in the former case, the tuple pattern has the effect of decomposing the individual values of the tuple on the right, before binding them to the constants on the left.

Note that the same pattern, (x, y), can be used to define constants that differ not only by value, but also by type. In the following definition, x is a Double, and y is a String.

let (x, y) = (0.5, "foo")
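When you only care about some of a tuple's elements, the wildcard pattern can stand in for the ones you want to ignore. Here's a minimal sketch (not from the original post):

```swift
// Bind the first element; ignore the second via the wildcard pattern
let (price, _) = (19.99, 2)
print(price)
// Prints "19.99"
```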

Accessing Tuple Elements

A tuple's elements can be referenced by position:

let item  = (19.99, 2)

print("price: \(item.0), quantity: \(item.1)")

// Prints "price: 19.99, quantity: 2"

In addition, you can label a tuple's elements using the same syntax you'd use to define a function's parameter names (tuples are intentionally similar to parameter lists), and then access individual values by name:

let item = (price: 19.99, quantity: 2)

print("price: \(item.price), quantity: \(item.quantity)")

// Prints "price: 19.99, quantity: 2"

Again, these identifiers aren't analogous to object properties or keys in a dictionary; they simply provide the compiler with labels for individual components of a compound value.
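Labeled elements really shine in function return types, since tuples are how Swift functions return multiple values. The following sketch (the function name and logic are illustrative, not from the post) returns two values as a labeled tuple:

```swift
// Returns the smallest and largest elements as a labeled tuple,
// or nil if the array is empty.
func minMax(_ values: [Int]) -> (min: Int, max: Int)? {
    guard let first = values.first else { return nil }
    var (lo, hi) = (first, first)
    for value in values.dropFirst() {
        if value < lo { lo = value }
        if value > hi { hi = value }
    }
    return (min: lo, max: hi)
}

if let bounds = minMax([8, -6, 2, 109, 3]) {
    print("min: \(bounds.min), max: \(bounds.max)")
}
// Prints "min: -6, max: 109"
```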

Another way to access the elements of a tuple of values is to bind them to individual variables or constants. For example, suppose we're working with a class that has a computed property that returns a tuple:

// Defines a computed property with two named elements,
// 'price' and 'quantity', that returns (Double, Int).
//
var defaultItem: (price: Double, quantity: Int) {
    return (19.99, 2)
}

Then in the body of an instance method, we could use the tuple pattern to define a pair of let constants, and bind the values of the individual elements of the tuple as follows:

// Defines individual let constants, 'amount' and 'number'.
// Compiler infers their types from the type of 'defaultItem'.
//
let (amount, number) = defaultItem

print("price: \(amount), quantity: \(number)")
// Prints "price: 19.99, quantity: 2"
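The tuple pattern also appears on the left-hand side of assignments, which enables a classic idiom (a standard Swift trick, not specific to this post): swapping two values without a temporary variable:

```swift
var first = 1
var second = 2

// Decomposes the tuple on the right, assigning into both variables at once
(first, second) = (second, first)

print("first: \(first), second: \(second)")
// Prints "first: 2, second: 1"
```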

The Enumeration Case and Value Binding Patterns

The Swift case keyword makes it possible to combine two additional Swift patterns: enumeration case and value-binding. Here's what the documentation says about them:

Enumeration Case Pattern

An enumeration case pattern matches a case of an existing enumeration type. Enumeration case patterns appear in switch statement case labels and in the case conditions of if, while, guard, and for-in statements.

Value-Binding Pattern

A value-binding pattern binds matched values to variable or constant names. Value-binding patterns that bind a matched value to the name of a constant begin with the let keyword; those that bind to the name of a variable begin with the var keyword.

The code in the following example loops over an array of tuples, using an if case construct to test whether the quantity of the current item is 2, and if so, binding the tuple's price (first element) to amount.

// An array whose elements are tuples representing price and quantity
let items = [(12.99, 2), (14.95, 3), (19.99, 2)]
for item in items {
    // Binds price value to 'amount'
    // Enters the 'if' statement's body if quantity matches the pattern '2'
    if case (let amount, 2) = item {
        print(amount)
    }
}
// Prints 
// "12.99"
// "19.99"

// Note that the let keyword can be moved outside the tuple
for item in items {
    if case let (amount, 2) = item {
        print(amount)
    }
}

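The same case pattern works in guard statements too. A small sketch (the function is illustrative, not from the original post):

```swift
// Returns the price only when the item's quantity matches the pattern '2'
func priceIfPair(_ item: (Double, Int)) -> Double? {
    guard case let (amount, 2) = item else { return nil }
    return amount
}

if let amount = priceIfPair((12.99, 2)) {
    print(amount)
}
// Prints "12.99"
```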
You may be more familiar with the use of the case keyword in switch statements. The following example is similar to the preceding one in that it shows combined use of the enumeration case and value-binding patterns, but this time in the context of a switch statement:

let discount1 = 10.0
let discount2 = 14.0

let items = [("Shirt", 44.99), ("Shoes", 89.99), ("Jeans", 64.99)]

for item in items {
    // Applies a $10 discount for shirts and a $14 discount for shoes by
    // pattern matching on the string value of the first element
    switch item {
    case let ("Shirt", p):  print("Shirt: $\(p - discount1)")
    case let ("Shoes", p):  print("Shoes: $\(p - discount2)")
    case let (itemName, p): print("\(itemName): $\(p)")
    }
}
// Prints
// Shirt: $34.99
// Shoes: $75.99
// Jeans: $64.99
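Patterns in case labels can also carry a where clause for finer-grained matching. In this sketch (the free-shipping threshold is an invented example), the bound price value is tested as part of the match:

```swift
let saleItems = [("Shirt", 44.99), ("Shoes", 89.99), ("Jeans", 64.99)]

for saleItem in saleItems {
    switch saleItem {
    // Matches only when the bound price exceeds the threshold
    case let (name, price) where price > 80:
        print("\(name): $\(price) (free shipping)")
    case let (name, price):
        print("\(name): $\(price)")
    }
}
// Prints
// Shirt: $44.99
// Shoes: $89.99 (free shipping)
// Jeans: $64.99
```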

Back to the Future

Let's revisit the example from the beginning of this post, which led off by defining a dictionary as follows:

let dogInfo: [String: Any] = ["name": "Fido", "breed": "Lab", "age": 5]

The example then proceeded to enumerate the dictionary's keys and values with a for-in loop:

for (key, value) in dogInfo {
    print("\(key): \(value)")
}

From what we've already learned about the enumeration case pattern, we can now correctly infer that there's an implicit case-let after the keyword for:

// Valid Swift, but `case let` will be inferred by the compiler if omitted.
for case let (key, value) in dogInfo {
    print("\(key): \(value)")
}

The earlier example also noted that our for-in loop obtains an iterator (an instance of DictionaryIterator) from the dictionary, and calls the iterator's next() method each time through the loop to obtain a tuple of the current key and value. Here's the declaration of the next() method from the Swift Library documentation:

mutating func next() -> (key: Key, value: Value)?

As you can see, next() returns an optional, generically typed tuple with named elements. (Note: we'll explore optional values and the Optional type in detail in a future blog post.) So we now know that the for-in loop could also have been written like so (though generally, the earlier style is preferable):

for item in dogInfo {
    print("\(item.key): \(item.value)")
}
// Prints
// name: Fido
// breed: Lab
// age: 5

Here we simply capture the current tuple in item, and then access item's elements by name in the argument to the print() function.
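Arrays offer a similar convenience: the enumerated() method yields (offset, element) tuples, which decompose with the same pattern (a quick sketch for illustration):

```swift
let breeds = ["Lab", "Beagle", "Poodle"]

// Each iteration binds the tuple's elements to 'index' and 'breed'
for (index, breed) in breeds.enumerated() {
    print("\(index): \(breed)")
}
// Prints
// 0: Lab
// 1: Beagle
// 2: Poodle
```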

Conclusion

Some of the concepts embodied in Swift may seem unintuitive to those of us coming from primarily object-oriented language backgrounds. A solid understanding of Swift patterns can help a lot with comprehension when reading non-trivial code, and can be indispensable when figuring out how to use various features of the language to implement details of the apps we're writing.


Modeling JSON Mappings – Part 1

Posted by Jonathan Lehr

iOS apps commonly store and retrieve JSON data via REST APIs. Consequently, many development teams initially spend some time formulating an approach for decoding model objects from JSON, and (usually) vice versa. And due diligence requires sifting through a substantial number of frameworks, both in Objective-C and Swift, that provide varying degrees of support for mapping values between these two different representations. Unfortunately, even teams that adopt the best of these frameworks still tend to experience some headaches dealing with the resulting mappings.

Back to the Future

I've been a fan of object-relational mapping (ORM) frameworks since cutting my teeth on NeXT's Enterprise Objects Framework (EOF) in the mid-90s. ORMs are designed to deal with a host of issues that arise when mapping values between relational database tables and object models. Because JSON represents relationships more or less the same way model objects do --- hierarchically --- mapping JSON to model objects is inherently much simpler. Still, there are several capabilities ORMs and JSON mapping frameworks have in common:

Mandatory Capabilities

  • Map a given JSON dictionary to a specific class
    • Construct a model object on decode
    • Construct a dictionary on encode
  • Map JSON data values to object properties
    • Associate JSON element names with object property names
    • Allow specification of value transformations, and automatically apply them during encode/decode
    • Populate model object properties with JSON values on decode
    • Populate dictionary with model object property values on encode
  • Model to-one, and to-many relationships
    • Store type information for related objects
    • Construct child objects and arrays of child objects on decode
    • Construct child dictionaries and arrays of child dictionaries on encode

Nice to Haves

  • Flattened attributes
  • Inverse relationships

But the headaches developers run into using JSON mapping frameworks aren't caused by a lack of these kinds of capabilities, but rather by the fact that the mappings have to be baked right into the code of each of the model classes. As a consequence, the data model is scattered across classes, making it harder to visualize, and harder to maintain.

By contrast, one of the killer features of EOF and its successor, Core Data, is the externalization of mapping metadata. Core Data mappings are coalesced into a single XML document that the framework works with at runtime. This has several advantages:

  • Core Data directly supports model versioning, allowing earlier versions of a given data model to be accessed at runtime, making it easier for apps to handle API version differences.
  • External tools can leverage the metadata to, for example, generate base classes (via Xcode's built-in class generation facilities, as well as third-party tools such as mogenerator)
  • An entire data model can be version controlled as a single unit, making differences between versions more apparent.
  • The model can be presented and edited in a visual tool such as Xcode's Model Editor

Given the potential advantages, the team here at About Objects wondered whether it would be possible to a) use a Core Data model to store JSON mapping metadata (pretty much a no-brainer), and b) use Core Data models in projects that don't use Core Data for storage. Okay, we actually did more than 'wonder'; we wrote a framework, Modelmatic. Luckily, it turned out the answers were 'yes' and 'yes'.

The Modelmatic Framework

Modelmatic allows you to specify custom mappings between key-value pairs in JSON dictionaries, and corresponding properties of your model objects. For example, suppose you're working with JSON similar to the following (from the Modelmatic example app):

{
  "version" : "2",
  "batchSize" : "10",
  "authors" : [
    {
      "firstName" : "William",
      "lastName" : "Shakespeare",
      "born" : "1564-04-01",
      "author_id" : "101",
      "imageURL" : "https:\/\/www.foo.com?id=xyz123",
      "books" : [
        {
          "tags" : "drama,fantasy",
          "title" : "The Tempest",
          "year" : "2013",
          "book_id" : "3001"
        },
        ...

Step 1: Defining the Model

To use Modelmatic, you start by modeling your data using Xcode's Core Data Model Editor. Don't worry, you're not going to need to use other aspects of Core Data -- just the data model, and just a subset of its capabilities.

Step 2: Creating Swift Classes

If your model is complex, and/or changes frequently, consider using mogenerator to generate model classes (and update them as needed) from the metadata you specified in the model editor. Otherwise, it's simplest to just create the classes you need from scratch. Here's an example:

import Foundation
import Modelmatic

@objc (MDLAuthor)
class Author: ModelObject
{
    // Name of the Core Data entity
    static let entityName = "Author"

    // Mapped to 'author_id' in the corresponding attribute's User Info dictionary
    var authorId: NSNumber!
    var firstName: String?
    var lastName: String?
    var dateOfBirth: NSDate?
    var imageURL: NSURL?

    // Modeled relationship to 'Book' entity
    var books: [Book]?
}

Key points:

  • import Modelmatic.
  • Subclass ModelObject.
  • Use @objc() to avoid potential namespacing issues.
  • Define a static let constant named entityName to specify the name of the associated entity in the Core Data model file.
  • authorId is mapped to author_id in the model (see the attribute definition's User Info dictionary).
  • Modelmatic automatically maps all the other properties, including the nested books property.

Customizing Mappings

Modelmatic automatically matches names of properties you specify as attributes or relationships in your Core Data model to corresponding keys in the JSON dictionary. For example, given an attribute named firstName, Modelmatic will try to use firstName as a key in the JSON dictionary, and map it to a firstName property in Author.

However, the framework also allows you to specify custom mappings as needed. For instance, the Author class has the following property:

var authorId: NSNumber!

A custom mapping is provided in the model file, binding the authorId attribute to the JSON key path author_id.

To add a custom mapping, select an attribute or relationship in the model editor, and add an entry to its User Info dictionary. The key should be jsonKeyPath, and the value should be the key or key path (dot-separated property path) used in the JSON dictionary. During encoding and decoding, Modelmatic will automatically map between your object's property, as defined by its attribute or relationship name, and the custom key path you specified to access JSON values.
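For instance, the authorId mapping boils down to a single User Info entry, sketched here in key/value form (the nested variant is a hypothetical illustration, not taken from the example app):

```
jsonKeyPath = author_id           (simple key)
jsonKeyPath = metadata.author_id  (hypothetical dot-separated key path)
```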

Defining Relationships

Core Data allows you to define to-one and to-many relationships between entities. Modelmatic will automatically create and populate nested objects for which you've defined relationships. For instance, the Modelmatic example app defines a to-many relationship from the Author entity to the Book entity. To create an Author instance along with its nested array of books, you simply initialize an Author with a JSON dictionary as follows:

let author = Author(dictionary: dict, entity: entity)

For example, given the following JSON, the previous call would create and populate an instance of Author containing an array of two Book objects, with their author properties set to point back to the Author instance:

{
  "author_id" : "106",
  "firstName" : "Mark",
  "lastName" : "Twain",
  "books" : [
    {
      "book_id" : "3501",
      "title" : "A Connecticut Yankee in King Arthur's Court",
      "year" : "2014"
    },
    {
      "book_id" : "3502",
      "title" : "The Prince and the Pauper",
      "year" : "2015"
    }
  ]
}

Property Types

Modelmatic uses methods defined in the NSKeyValueCoding (KVC) protocol to set model object property values. KVC can set properties of any Objective-C type, but has limited ability to deal with pure Swift types, particularly struct and enum types. However, bridged Standard Library types such as String, Array, and Dictionary, as well as scalar types such as Int, Double, and Bool, are handled automatically by KVC, with one notable exception: Swift scalars wrapped in Optionals. For example, KVC would be unable to set the following property:

var rating: Int?

If your ModelObject subclass uses a Swift type that KVC can't directly handle, you can provide a computed property of the same name, prefixed with kvc_, to provide your own custom handling. For example, to make the rating property work with Modelmatic, add the following:

var kvc_rating: Int {
    get { return rating ?? 0 }
    set { rating = Optional(newValue) }
}

If Modelmatic is unable to set a property directly (in this case the rating property), it will automatically call the kvc_ prefixed variant (kvc_rating, in this example).

Specifying Value Transformations

In your Core Data model file, you can specify a property type as Transformable. If you do so, you can then provide the name of a custom transformer. For example, the Author class in the Modelmatic example app has a transformable property, dateOfBirth, of type NSDate. Modelmatic automatically uses an instance of the specified NSValueTransformer subclass to transform the value when accessing the property.

Here's the code of the Example app's DateTransformer class in its entirety:

import Foundation

@objc (MDLDateTransformer)
class DateTransformer: NSValueTransformer
{
    static let transformerName = "Date"

    override class func transformedValueClass() -> AnyClass { return NSString.self }
    override class func allowsReverseTransformation() -> Bool { return true }

    override func transformedValue(value: AnyObject?) -> AnyObject? {
        guard let date = value as? NSDate else { return nil }
        return serializedDateFormatter.stringFromDate(date)
    }

    override func reverseTransformedValue(value: AnyObject?) -> AnyObject? {
        guard let stringVal = value as? String else { return nil }
        return serializedDateFormatter.dateFromString(stringVal)
    }
}

private let serializedDateFormatter: NSDateFormatter = {
    let formatter = NSDateFormatter()
    formatter.dateFormat = "yyyy-MM-dd"
    return formatter
}()

The date transformer is registered by the following line of code in the Example app's AuthorObjectStore class:

NSValueTransformer.setValueTransformer(DateTransformer(), forName: String(DateTransformer.transformerName))

Step 3: Loading the Model

Somewhere in your app (you only need to do this once during the app's lifecycle), do something like the following to load the Core Data model file into memory:

let modelName = "Authors"

guard let modelURL = NSBundle(forClass: self.dynamicType).URLForResource(modelName, withExtension: "momd"),
    model = NSManagedObjectModel(contentsOfURL: modelURL) else {
        print("Unable to load model \(modelName)")
        return
}

You'll most likely want to store the reference to the model in a class property.

Step 4: Encoding and Decoding Model Objects

Once you've obtained JSON data, you can deserialize it as follows (Note that deserializeJson wraps a call to NSJSONSerialization):

guard let data = data, dict = try? data.deserializeJson() else { 
    return
}

To construct an instance of your model class, simply provide the dictionary of deserialized values, along with the entity description:

let author = Author(dictionary: dict, entity: entity)

This will construct and populate an instance of Author, as well as any nested objects for which you defined relationships in the model (and for which the JSON contains data). You then simply work with your model objects. Whenever you want to serialize an object or group of objects, simply do as follows:

// Encode the author
let authorDict = author.dictionaryRepresentation

// Serialize data
if let data = try? authorDict.serializeAsJson(pretty: true) {
    // Do something with the data...
}

Modelmatic provides methods to make it easier to programmatically set objects for properties that model to-one or to-many relationships. While it's easy enough to remove objects (simply set to-one properties to nil, or use array methods to remove objects from arrays), setting or adding objects to these properties can be slightly more involved. That's because Modelmatic automatically sets property values for any inverse relationships you define in your model, so that child objects will have references to their parents.

While inverse relationships aren't required, they're often convenient. Just be sure to use the weak lifetime qualifier for references to parent objects.

Even if you're not currently using inverse relationships, it's a good idea to use the convenience methods provided by ModelObject for modifying relationship values. That way, if you change your mind later, you won't need to change your code to add support for setting parent references.

To-Many Relationships

ModelObject provides two methods for modifying to-many relationships, as shown in the following examples:

// Adding an object to a to-many relationship
let author = Author(dictionary: authorDict, entity: authorEntity)
let book = Book(dictionary: bookDict, entity: bookEntity)
do {
    // Adds a book to the author's 'books' array, and sets the book's 'author' property
    try author.add(modelObject: book, forKey: "books")
}
catch MappingError.unknownRelationship(let name) {
    print("Unknown relationship \(name)")
}

// Adding an array of objects to a to-many relationship
let books = [Book(dictionary: bookDict2, entity: bookEntity),
             Book(dictionary: bookDict3, entity: bookEntity)]
do {
    // Adds two books to the author's 'books' array, setting each book's 'author' property
    try author.add(modelObject: books, forKey: "books")
}
catch MappingError.unknownRelationship(let name) {
    print("Unknown relationship \(name)")
}

To-One Relationships

An additional method is provided for setting the value of a to-one relationship, as shown here:

// Set the value of a to-one relationship
let book = Book(dictionary: bookDict1, entity: bookEntity)
let pricing = Pricing(dictionary: ["retailPrice": expectedPrice], entity: pricingEntity)
do {
    // Sets the book's 'pricing' property, and sets the pricing's 'book' property
    try book.set(modelObject: pricing, forKey: "pricing")
}
catch MappingError.unknownRelationship(let name) {
    print("Unknown relationship \(name)")
}

Next Installment

In Modeling JSON Mappings -- Part 2, we'll take a look under the hood to see how the Modelmatic framework leverages the data model to automate encoding and decoding.


Hamburgers Belong on the Grill, Not on Your iPhone

Posted by Anthony Mattingly

How many projects have you worked on where the client wants to throw in every feature and action they can think of? It often seems they want their app to have everything, plus the kitchen sink. And once they've specified some huge set of features, how do they want them organized? All too often, it's via the infamous hamburger menu.

This is, unfortunately, a common pitfall for iPhone apps. First, an iPhone app isn't a responsive website. Responsive web designers love to use hamburger menus. However, web designers face different constraints on how they must organize their content, and the nature of that content is generally different from that of a mobile app.

An iPhone app is a tool. Every action and task should be so easy that users don't have to think about how to perform them. That way users can just focus on the tasks they're currently trying to carry out.

Second, iPhones are not Android phones. Some folks prefer Android, others love iOS. While both are successful platforms, I personally lean towards the Apple side. I find the iOS platform very efficient and effortless to use. Android may have a lot of bells and whistles, and give you the freedom to do things that iOS doesn't, but more isn't always better. Usability goes a long way, and often trumps other considerations. Too much is a manifestation of complexity. Apple does a tremendous job of reducing the too much to help keep the focus on the essence.

On the left, a standard cable TV remote. On the right, an AppleTV remote.

For example, your standard cable TV remote has a zillion capabilities, yet how many buttons do you actually find yourself using? Now look at an AppleTV remote: it delivers all of its available features via six visible buttons and a trackpad, while a typical cable TV remote sports nearly ten times as many controls. Long story short, throwing in lots of functionality and grouping all of it in one place is not a good solution.

Is It Just Me, Or Do You Smell Hamburgers?

Normally, I love the smell of hamburgers, but not when it comes to iPhone apps. Hamburger menus are notorious for being overloaded and unintuitive. Too often, a hamburger menu serves as a catch-all for uncategorized requirements that aren't tied to an app's core purpose. In fact, the use of hamburger menus can become a crutch -- a way to avoid carefully thinking through an app's information architecture, and skip the hard work of designing a solution. It's the creative equivalent of a shoulder shrug.

Example of how certain apps (we won't name names) fill hamburger menus with extensive functionality. When closed, all of that functionality is hidden from the user.

By encouraging an unlimited number of options to be thrown in, hamburger menus tend to result in user interfaces that require more thought and attention from the user. In this scenario, users have to read and scroll through all of the options in the menu to find a given action, and then choose one that best describes the task they want to perform. In addition, when the menu is closed, all of the app's features are hidden, leaving users without any visual indication of the app's range of capabilities.

Okay, okay. We've all read the blogs that say how much hamburger menus suck, but not many of them talk about alternatives.

The Alternative

First, to keep a mobile app as simple as it needs to be, the organization of features and tasks must be thoroughly analyzed, prioritized and mapped out. Limiting an app's features strictly to those required to provide a coherent and meaningful user experience is essential to achieving that Apple-like simplicity. Fewer options yields faster and easier decisions for users.

To begin, classify and categorize app features and tasks into meaningful groups to provide a context in which they're more understandable. Just remember to keep the number of these groups small. You don't want your app to suffer from TV Remote Syndrome!

On the left is an example of hamburger menu information architecture and how it defines the features as peers -- that is, all on the same level. In comparison, reducing and reorganizing the information architecture, as shown on the right, puts features into context with fewer groupings to help users quickly identify tasks while still comprehending app capability. Having fewer items to process allows users to make decisions faster.

A tab bar is often a better solution than a hamburger menu for a couple of reasons. One, using a tab bar forces you to keep your main navigation to a minimum, since the iPhone displays a maximum of five tabs. Two, it ensures that the essence of the app can be seen immediately, providing calls to action that users can access globally for efficient navigation.

Because a tab bar's items aren't hidden away in a drawer, they allow the most useful tasks and features to be located in an optimal manner, without sacrificing a great deal of UI real estate. More generally, standard iOS framework components embody UI paradigms that provide a consistent and familiar user experience. Using standard iOS components such as tab bars nearly always saves significant development time and cost over other, more custom solutions. In my experience, unnecessary customization can more than double development costs while yielding a sub-par user experience.

In general, it's best to exhaust the possibilities afforded by designing around components from Apple's iOS frameworks before resorting to custom solutions that reference other platforms. A good resource for understanding the iOS platform frameworks is the iOS Human Interface Guidelines (HIG). Among other things, the HIG provides great insights into where, when, and how to use standard UI components.

So next time, before adding the kitchen sink, take a step back and define what users will actually use while on the go with their devices. In our busy lives, most of us just don't have the time to sit and read a user manual or dig through all of the features of apps that suffer from TV Remote Syndrome. Try streamlining your app's design by using native iOS navigational components -- and leave the hamburger for the grill.


I'll Take My RESTful Services Well-Done

Posted by LeRoy Mattingly

Last decade was SOA services. This decade is REST services. These days it seems just about everyone is doing REST --- but are they doing it well? From all the evidence, it seems most enterprise IT organizations are struggling with the transition from SOA to REST. And it turns out that the mobile platform is usually at the epicenter of that struggle.

Mobile App Development Readiness Review

We often perform assessments for enterprise IT organizations to help them identify areas of risk related to their mobile development practices. As part of what we call a Mobile App Development Readiness Review, we conduct a four-week assessment covering everything from business and mobile strategies, through architecture and design, execution, testing, delivery, and implementation.

Whenever I conduct one of these reviews, one of the first things I look for is the health of the service layer. After all, most mobile apps can't do much without a good backend service layer.

Taking an Inventory of the API

The question I usually start with is, "Can you provide me an inventory of your service APIs?" You might be surprised that most companies can't. What they can produce is a jumble of APIs spread all over the place, documented poorly or not at all, and typically littered with orphaned or single-use services. The spaghetti has moved from the code to the service layer, and as a result, opportunities for reusing shared services are often missed --- in many cases, internal consumers aren't even aware that a given API exists. The best way to avoid these kinds of problems is to build a discipline around holistic management of the entire collection of enterprise services.

Most large enterprises currently have a mix of legacy SOA services and newer, RESTful services. I generally view the ratio of this mix as an indicator of how much progress an organization has made in modernizing its service layer.

Drilling Into the Details

Once the API inventory is complete, I begin digging into the details of selected portions of the REST service layer, looking at design, reusability, documentation, consistency, usability, and maturity level. The following are criteria we use to evaluate an organization's REST services:

RESTful Maturity

RESTful maturity was first described in a presentation by Leonard Richardson, and has become known as the Richardson Maturity Model, made famous by Martin Fowler's article.

Maturity Level 0 --- Not RESTful
  • Single URI and a single HTTP verb (GET or POST)
  • All operations included in the payload
  • HTTP used for tunneling RPC-like service calls (note: this describes XML-RPC and SOAP)

Maturity Level 1 --- Not RESTful
  • Multiple URIs, but still a single HTTP verb (GET or POST)
  • Resources used to break down a large service endpoint
  • CRUD verbs included as part of the URL --- one URI per method
  • Still using HTTP for tunneling calls to resources

Maturity Level 2 --- RESTful
  • Multiple URIs and multiple HTTP verbs (GET, POST, PUT, PATCH, DELETE) used with correct semantics
  • Resources used to break down a large service endpoint

Maturity Level 3 --- RESTful
  • HATEOAS makes the web service discoverable
  • Self-documenting API
  • Independent evolution
  • Decoupled implementation


  • Enterprises should be targeting at least Level 2 or 3 on Richardson's Maturity Model. Anything less scores as immature and represents an opportunity for improvement.

  • RESTful APIs should be logical (not based on implementation details). All services should be resource-based (as opposed to RPC-like) and based on domain models that reflect the natural business partitions of the enterprise.

  • RESTful service designs should always start with logical business domain models. The nouns in the model serve as the basis for naming the resources in the REST API. That doesn't completely rule out service calls with verb-form names, but those are typically the exception rather than the rule.
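To make the maturity levels concrete, here's a minimal sketch --- in Python, with entirely hypothetical resource names --- contrasting Level 0/1 RPC-style tunneling with Level 2's pairing of resource-style URIs and HTTP verb semantics:

```python
# Level 0/1 style: the verb is baked into the URL, and everything
# typically tunnels over POST. (Endpoint names are hypothetical.)
rpc_style = [
    "POST /api/getCustomer",
    "POST /api/createCustomer",
    "POST /api/deleteCustomer",
]

# Level 2 style: one logical resource, with semantics carried by the
# HTTP method rather than by verbs embedded in the URL.
def route(method: str, path: str) -> str:
    """Map (verb, URI) pairs onto CRUD operations for a 'customers' resource."""
    routes = {
        ("GET",    "/customers"):    "list customers",
        ("POST",   "/customers"):    "create customer",
        ("GET",    "/customers/42"): "fetch customer 42",
        ("PUT",    "/customers/42"): "replace customer 42",
        ("PATCH",  "/customers/42"): "update customer 42",
        ("DELETE", "/customers/42"): "delete customer 42",
    }
    return routes.get((method, path), "404 Not Found")

print(route("GET", "/customers/42"))     # prints "fetch customer 42"
print(route("DELETE", "/customers/42"))  # prints "delete customer 42"
print(route("POST", "/getCustomer"))     # prints "404 Not Found"
```

Note how the noun ("customers") comes straight from the domain model, while the verbs come for free from HTTP --- exactly the division of labor the criteria above call for.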

Domain Modeling

Defining an enterprise-wide domain model is a perilous task. But defining domain models that map to the natural business partitions in an enterprise is both reasonable and attainable. These domain models provide the blueprint for RESTful service resource APIs. Services can be built to provide these resources on both an as-needed and client-driven basis. Services can evolve to map to these blueprints.

  • RESTful services should be loosely coupled. Service APIs should never expose implementation-specific details or explicitly name architectural components. Tight coupling of clients to service APIs limits the ability to make architectural changes. All APIs should remain logical abstractions over their implementation details.
  • RESTful services should be reusable by multiple clients, now and in the future. Service APIs that are capable of being used by only a single client are barely useful services --- in fact, they're really nothing more than glue code. Narrow-focused APIs miss the opportunity to vend business data and logic in a way that can meet future needs. An over-reliance on single-use services can turn an architecture into a spaghetti-like mess.
  • RESTful services should be documented in a consistent way to make it easier for consumers to understand and reuse the services.
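As a small illustration of the loose-coupling point, here's a Python sketch (all field and record names hypothetical) of a service boundary that vends a logical representation of a resource rather than leaking its storage schema:

```python
# Sketch: keep implementation details out of the API representation.
# The internal record exposes storage concerns (table keys, shard IDs,
# cryptic codes); the API vends only logical domain attributes.
# All names here are hypothetical.

internal_record = {
    "CUST_TBL_PK": 90210,   # database primary key
    "shard_id": 7,          # storage topology detail
    "cust_nm": "Acme Corp",
    "cust_tier_cd": "G",    # cryptic internal tier code
}

TIER_NAMES = {"G": "gold", "S": "silver"}

def to_api_representation(record: dict) -> dict:
    """Translate an internal record into an implementation-neutral resource."""
    return {
        "id": record["CUST_TBL_PK"],   # exposed only as an opaque identifier
        "name": record["cust_nm"],
        "tier": TIER_NAMES[record["cust_tier_cd"]],
    }

print(to_api_representation(internal_record))
# prints {'id': 90210, 'name': 'Acme Corp', 'tier': 'gold'}
```

Because clients only ever see the logical representation, the team is free to reshard, rename columns, or swap out the data store entirely without breaking a single consumer.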

When conducting a full Mobile App Development Readiness Review, we perform a similar analysis across many areas of the organization. We then furnish the client with a risk scorecard to establish a baseline, and provide specific mitigations for areas identified as high risk.

The Benefits of Well-Done RESTful Services

I've seen organizations that follow the above guidelines reap tremendous benefits, some of which are as follows:

  1. A domain-based RESTful service layer is easier to evolve than one based on SOAP. A well-done domain model represents business concepts that have evolved over the life of the system. (Flexibility to evolve should remain one of the top architectural priorities of the RESTful service layer. Do everything possible to ensure that the service layer can change over time without breaking existing clients.)
  2. A well-done RESTful service layer is easy to use and reuse, especially if it adds business value. (Do everything possible to make it easy for clients to consume the service.)
  3. A well-done, domain-based RESTful service layer will reduce architectural sprawl --- in other words, it will organize the spaghetti. The result should be more like manicotti --- wrapped up in nice little bundles that naturally align to each other and are independently consumable.
  4. A well-done service layer also makes it possible for the business, rather than the technology organization, to define which data to vend (both internally and externally) --- provided the business organization works closely with IT in defining the logical domain model.

Reverse Engineering a Domain Model

If you find yourself reverse engineering a legacy database, you're already in trouble. One thing to pay particularly close attention to is ensuring that implementation details don't leak into the logical domain model. Business domain modeling is hard to do well, and usually requires the skills of a trained analyst who is also adept at working with the business.

A Great Example

The FHIR (Fast Healthcare Interoperability Resources) API is a relatively new API for collecting and exchanging patient record information. It's a great starting point if you're looking for a good example of a well-done, domain-based, RESTful service API. The FHIR Resource Index is particularly useful as a demonstration of how a domain model can be used to clearly identify a set of related resources.

| Comments

NFL Sunday Ticket Kicks Off with Chromecast

Posted by Eric Caminiti

Congratulations to the About Objects team at DIRECTV for their well-received implementation of the NFL Sunday Ticket Chromecast app. The app is currently featured by Google, and was highlighted at Google's September 2015 launch of the new Chromecast as a premier example of a powerful, second screen Chromecast experience.

Integrating Chromecast

Given my strong affiliation with Apple's platforms, people are sometimes surprised to learn that I've been responsible for leading our Google Chromecast strategic partnership for the past few years. Did I go over to the dark side? Maybe. Have I ridden a rainbow-colored bicycle? Possibly.

Actually though, we've been working with a number of companies in the digital media space, helping them capitalize on the cord-cutting trend. At its core, Chromecast allows users to stream content on everything from mobile devices to large-screen TVs. It’s sort of somewhere in between Apple’s AirPlay and a full-blown Apple TV app (we develop those too!). The big difference is that with Chromecast your app controls the whole experience and becomes your remote control, allowing for a much more immersive, second screen experience.

Key Challenges

Chromecast implementations can present a number of potential land mines. Problems often crop up in dealing with DRM, adaptive bit rates, CORS headers, environment setup, error handling, networking issues, UI synchronization, and automatic reconnect, as well as determining when and when not to use custom channels.

What’s most interesting to our developers is the challenge of architecting an elegant Chromecast solution in the context of an existing mobile app. Integrating Chromecast (especially with iOS apps) tends to be an afterthought, and none of the apps we've dealt with were designed with that sort of integration in mind. It tends to be such an outlier that it can stress the application architecture in dramatic ways, often uncovering significant weaknesses.

Getting It Right The First Time

Obviously, it's important to spend some time upfront designing a solution that's the best fit for your current app architecture. Most of the Chromecast implementations I've seen tend to follow a basic pattern of intercepting calls to the video player and redirecting them to the Chromecast receiver.

The key is to avoid taking the easy path of tightly coupling your Chromecast implementation with existing code. That path means adding otherwise unnecessary conditional logic throughout the codebase, making it more fragile and dramatically increasing the likelihood of introducing regression bugs.

Thinking through these issues affords an excellent opportunity to dust off your Gang of Four design patterns. I'll soon be posting examples of how you can take advantage of patterns such as Decorator, Proxy, and Receptionist to simplify your implementation. Coming up with a strategic approach that encapsulates most, or all, of the Chromecast-specific API calls can greatly reduce overall development time and improve the long-term maintainability of your integration.
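As a rough sketch of the Decorator approach --- shown here in Python rather than Swift, with entirely hypothetical class and method names --- a decorator can wrap the existing player and route playback to the cast session, so no Chromecast conditionals leak into the rest of the app:

```python
# Decorator sketch: CastingPlayer presents the same interface as the
# existing player, but redirects playback to the cast session whenever
# one is connected. The rest of the app talks to "a player" and never
# branches on casting state. All names here are hypothetical.

class LocalPlayer:
    def play(self, url: str) -> str:
        return f"playing {url} on device"

class CastSession:
    def __init__(self, connected: bool):
        self.connected = connected

    def load_media(self, url: str) -> str:
        return f"casting {url} to TV"

class CastingPlayer:
    """Decorator: wraps a player, intercepting play() calls."""
    def __init__(self, player: LocalPlayer, session: CastSession):
        self._player = player
        self._session = session

    def play(self, url: str) -> str:
        if self._session.connected:
            return self._session.load_media(url)
        return self._player.play(url)

player = CastingPlayer(LocalPlayer(), CastSession(connected=True))
print(player.play("movie.m3u8"))  # prints "casting movie.m3u8 to TV"
```

The payoff is that the casting decision lives in exactly one place; the calling code is identical whether or not a Chromecast is in the picture.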

One way or another, it’s important to get your app's Chromecast integration right. Remember, many Chromecast consumers use the device as their primary way to watch TV, and they've gotten used to Chromecast working consistently across all their favorite apps (Netflix, YouTube, HBO Now, etc.).

To that end, Google provides detailed user interface guidelines to help ensure a consistent experience across all Chromecast-enabled applications. The lack of a polished user experience can infuriate those cord-cutting millennials out there --- and they can be pretty harsh in social media and in app store reviews when critiquing apps that don't meet expectations.

| Comments

Managing Technical Debt

Posted by Eric Caminiti

Among the services we offer is an assessment that can span everything from mobile and cloud strategy, to enterprise architecture, process and methodology, design, testing strategy, software tools, and even an in-depth analysis of existing codebases. After conducting a number of these assessments, I noticed a trend. Many of the IT directors who had until recently been hailed as heroes for delivering low-cost mobile apps seem increasingly alarmed about their future. Why? Their projects incurred significant technical debt as a consequence of cost-cutting measures. (See also: Martin Fowler's wiki article on technical debt.)

In short, technical debt is the accrued balance of the shortcuts taken and compromises made to get a project out the door on time and on (or under) budget.

An app whose development team cut one too many corners may initially seem to work just fine, but as the app evolves, the codebase can quickly degrade into an impenetrable mass of spaghetti. Unfortunately, development teams rarely consider (much less track) the amount of technical debt added from one revision to the next.

Signs and Symptoms

One of the symptoms we typically see when a project is over-leveraged (i.e., has accrued too much debt), is a growing rift between Product (the business) and IT. The mobile development team may be perceived as losing velocity over time, struggling to meet deadlines, delivering increasingly buggy code, and being seemingly incapable of implementing a growing subset of new feature types.

The latter issue tends to be particularly frustrating to Product. It's naturally hard to reconcile development's claims that certain features are 'impossible' to implement with the presence of the selfsame bright, shiny, organic, grass-fed, free-range features in the latest versions of competitors' apps.

When developers claim that new features are impossible to implement as specified, and plead for requirements changes and simplifications, often what they're really saying is, “Uh, we can’t get there from here --- the current state of our application's design and architecture (or lack thereof) and codebase simply won’t support it."

Fingers get pointed. Things get said. Chairs get thrown.

The Young and the Reckless

While it can be argued that there's always some degree of technical debt in a given codebase, a clear distinction can be drawn between prudent technical debt and reckless technical debt.

For example, reckless technical debt might be incurred by a junior developer writing poor code simply through lack of experience. (This could also be characterized as unintentional technical debt.) Prudent technical debt, on the other hand, would be incurred if an experienced developer decided to go with a temporary, quick-and-dirty implementation to meet time-to-market constraints for a new feature, while planning to incorporate a better, more permanent solution later on.

In general, inexperienced developers tend to introduce a great deal more technical debt --- particularly of the reckless variety --- than do their more seasoned colleagues. And sadly, they're rarely aware of any technical debt they may have introduced.

Ultimately, you get what you pay for. The initial cost savings achieved through cheaper developer rates will often be offset to a great degree by increased technical debt. Unfortunately, that degree can be very large, and is hard --- if not impossible --- to measure. Veteran developers, on the other hand, have bloodied their foreheads enough to know what a reasonable level of debt is, which items to address in the short term, and which to defer to future releases.

The Real Culprits

We often see codebases with unhealthy levels of reckless technical debt. Mostly, we see this in code that was outsourced to low-cost providers. When new features need to be added, they're simply bolted on --- often with the coding equivalent of duct tape --- and pushed out the door. As they rotate inexperienced, generalist developers through projects, any consistency in the codebase quickly evaporates, code and component-level reuse goes from negligible to non-existent, and application architecture and design don't evolve with the codebase.

Surprisingly though, we sometimes also see similar levels of reckless technical debt in codebases developed by in-house teams. The sources are many, including flawed architecture decisions; overly simplistic or sloppy design; failure to leverage available platform resources, patterns, and best practices; failure to keep the codebase current --- but I'll leave the details for another post.

Paying the Piper

Then comes the inevitable shakeup --- a new hire, a merger, or a reorganization --- and new management starts drawing conclusions about all this dysfunction in the organization. Hopefully you, the savvy IT director, were already promoted and left this mess to someone else! But often, you're the one who ends up in the crosshairs. What you failed to manage effectively is the inverse relationship between cost and risk. As a result, the project accrued enough technical debt to serve as a significant drag on performance.

At a certain point, the 'interest cost' (the extra effort required to fix bugs and implement new features in a pathological codebase) became crippling, while simultaneously the debt grew too large to repay. (In the worst-case scenario, paying the debt might entail throwing away essentially the entire codebase and starting over from scratch.) We know it's not all your fault --- you were in the trenches making these decisions in real time, and doing your best given scope, time, and money (it's a tough racket!). But the new management will tend to focus on the risk first (as they were not around to reap any of the benefits of your low-cost strategy).

Here are several things an IT director can do to avoid ending up in this situation:

  1. Negotiate with the product team on a regular basis to ensure that scope is manageable in the first place so that your team doesn't feel pressured to cut corners again and again.
  2. Make sure project risks are fully captured in writing and communicated to stakeholders on a regular (ideally weekly) basis.
  3. A small team of experts will run rings around a larger but more junior team. Ensure that your budget supports at least seeding --- if not fully populating --- your team with experts. The work should get done much faster, and as a result, the net cost may actually be less, in spite of the higher hourly rates. But more importantly, the project won't incur hidden costs in the form of technical debt that can quickly spiral out of control.