
Hosting the January 2018 MacDevGroup Meetup

Posted by About Objects


About Objects was pleased to host the Mac Developers Group at our HQ in Reston, VA. Our VP of Training, Jonathan Lehr, was on point to lead discussion on the most useful features and enhancements introduced in Swift 4.

Jonathan started the evening by taking a look at the overhauled String type, improvements to collection types (particularly Dictionary) and the introduction of one-sided ranges. He also spent some time looking at key-value coding enhancements, and did a deep dive on arguably the best new feature of them all: the introduction of the Encodable and Decodable protocols and related types that make it easy to, for example, encode and decode JSON.

Everyone enjoyed the active discussion and walked away with useful information to consider for ongoing development projects.

Check out the slides from the meetup here: https://speakerdeck.com/jonathanlehr/swift-4-highlights


The Swift Enumeration Case Pattern

Posted by Jonathan Lehr

When I first started working with enumerations and switch statements in Swift, I found them immediately easy to use. I also appreciated the usefulness of being able to add properties and methods to an enumeration. Very nice!

However, there were a number of subtleties that eluded me initially and took some time to fully appreciate. So after laying out some basics, I’d like to share a number of powerful capabilities I discovered along the way that might not seem obvious early on, or that might at first seem confusing.

The Basics

Defining a Trivial Enum

Initially, Swift enumerations seem similar to what most of us have experienced in other languages.

enum Size {
    case small
    case medium
    case large
    case extraLarge
}

The previous declaration can also be written more compactly:

enum Size {
    case small, medium, large, extraLarge
}

You can also specify a raw-value type for a Swift enumeration, allowing each case to be mapped to a raw value that can be a string, character, or numeric type:

enum Size: String {
    case small = "S"
    case medium = "M"
    case large = "L"
    case extraLarge = "XL"
}

To create an instance of an enum case, you simply refer to it. In the following example, shirtSize is an instance of Size.

let shirtSize = Size.extraLarge

Note that for enumerations of type String, Swift will automatically use case labels as raw values unless you provide explicit raw values yourself, as, for example, in the preceding declaration of Size. The code below first prints a case’s string value, followed by its raw value.

print("Size \(shirtSize)")
// prints "Size extraLarge"

print("Size \(shirtSize.rawValue)")
// prints "Size XL"
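Going the other direction, Swift also synthesizes a failable initializer, init(rawValue:), for raw-value enums; it returns nil when no case matches. A quick sketch (repeating the Size declaration so the example stands alone):

```swift
enum Size: String {
    case small = "S", medium = "M", large = "L", extraLarge = "XL"
}

// The initializer returns an optional: a match yields the case,
// an unknown raw value yields nil.
let size = Size(rawValue: "XL")    // .extraLarge
let unknown = Size(rawValue: "XS") // nil
```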

Switch Statements

An easy and obvious way to use an enum instance is in a switch statement:

switch shirtSize {
case .medium: print("Shirt is size M")
default: print("Shirt is not size M")
}

The Pattern

Since you’re probably already familiar with switch statements from other languages, their ability to match enumeration cases may seem unsurprising. However, one subtlety is that enumeration case is generalized as a pattern in Swift.

According to The Swift Programming Language (Swift 4): Patterns, “An enumeration case pattern matches a case of an existing enumeration type. Enumeration case patterns appear in switch statement case labels and in the case conditions of if, while, guard, and for-in statements.”

In other words, you can combine the keyword case with various logic constructs, such as if statements. Precisely how to do that, though, may not be immediately obvious. We’ll come back to this point shortly in the upcoming section on Case Conditions.

The Swift language guide goes on to say: “If the enumeration case you’re trying to match has any associated values, the corresponding enumeration case pattern must specify a tuple pattern that contains one element for each associated value.” [emphasis added]

Associated Values

So what does it mean for an enumeration case to have an associated value? Consider the following enum declaration:

enum Garment {
    case tie
    case shirt
}

You could then use the Garment enumeration to distinguish between shirts and ties. But suppose you also wanted to distinguish between small shirts and large shirts? To do so, you could append a tuple declaration to the declaration of the shirt case:

enum Garment {
    case tie
    case shirt(size: String)
    //        ^^^^^^^^^^^^^^
}

Now, initializing an instance of Garment.shirt would look almost the same as initializing a struct:

let myShirt = Garment.shirt(size: "M")

Additional cases could define tuples for their own associated values, if needed:

enum Garment {
    case tie
    case shirt(size: String)
    case pants(waist: Int, inseam: Int)
}

let myPants = Garment.pants(waist: 32, inseam: 34)

You could then write a switch statement to perform pattern matching on myPants:

switch myPants {
case .tie: print("tie")
case .shirt: print("shirt")
case .pants: print("pants")
}
// prints "pants"

You could also specify associated values to further refine the patterns you wish to match:

let items = [
    Garment.tie,
    Garment.shirt(size: "S"),
    Garment.shirt(size: "M"),
    Garment.shirt(size: "L"),
    Garment.pants(waist: 29, inseam: 32),
    Garment.pants(waist: 35, inseam: 34)
]

for item in items {
    switch item {
    case .tie: print("tie")
    case .shirt("M"): print("\(item) may fit")
    case .shirt("L"): print("\(item) may fit")
    case .shirt: print("\(item) won't fit")
    case .pants(34, 34): print("\(item) may fit")
    case .pants(35, 34): print("\(item) may fit")
    case .pants: print("\(item) won't fit")
    }
}
// tie
// shirt("S") won't fit
// shirt("M") may fit
// shirt("L") may fit
// pants(waist: 29, inseam: 32) won't fit
// pants(waist: 35, inseam: 34) may fit

Perhaps more importantly, you could then use the value-binding pattern to unwrap associated values with let:

let myPants = Garment.pants(waist: 32, inseam: 34)

switch myPants {
case .tie: print("tie")
case .shirt(let s): print("shirt, size: \(s)")
case .pants(let w, let i): print("pants, size \(w) X \(i)")
}
// pants, size 32 X 34

(For more on value-binding and the value-binding pattern, see my earlier blog post on The Tuple Pattern.)

Note that Swift allows you to streamline case statements by ‘factoring out’ the let keywords used in value binding, letting you rewrite the above switch statement more concisely:

switch myPants {
case .tie: print("tie")
case let .shirt(s): print("shirt, size: \(s)")
case let .pants(w, i): print("pants, size \(w) X \(i)")
}
// pants, size 32 X 34

You could also add where clauses to further refine pattern matches:

switch myPants {
case let .pants(w, i) where w == 32: print("inseam: \(i)")
default: print("No match")
}
// inseam: 34

Other Pattern-Matching Capabilities

To be clear, the enumeration case pattern can be combined with a number of other kinds of patterns. While not an exhaustive list, here are a few examples:


Although from Swift’s perspective, pattern matching based on values of strings, characters, and numbers is just base behavior, the fact that it works with strings feels like a special case — and an incredibly useful one at that.

struct Song {
    var title: String
    var artist: String
}

let aria = Song(title: "Donna non vidi mai", artist: "Luciano Pavarotti")

switch aria.artist {
case "Luciano Pavarotti": print(aria)
default: print("No match")
}
// Song(title: "Donna non vidi mai", artist: "Luciano Pavarotti")


In addition to matching values of numeric types such as Int or Double, enumeration case patterns can match numeric intervals:

let numbers = [-1, 3, 9, 42]
for number in numbers {
    switch number {
    case ..<3: print("less than 3")
    case 3: print("3")
    case 4...9: print("4 through 9")
    default: print("greater than 9")
    }
}
// less than 3
// 3
// 4 through 9
// greater than 9


Enumeration cases can perform pattern matching on tuples:

let dogs = [(name: "Rover", breed: "Lab", age: 2),
            (name: "Spot", breed: "Beagle", age: 2),
            (name: "Pugsly", breed: "Pug", age: 9),
            (name: "Biff", breed: "Pug", age: 5)]

for dog in dogs {
    switch dog {
    case (_, "Lab", ...3): print("matched a young Lab named \(dog.name)")
    case (_, "Pug", 8...): print("matched an older Pug named \(dog.name)")
    default: print("no match for \(dog.age) year old \(dog.breed)")
    }
}
// matched a young Lab named Rover
// no match for 2 year old Beagle
// matched an older Pug named Pugsly
// no match for 5 year old Pug

For more details on tuples, including pattern matching, see my blog post on The Tuple Pattern.

Type Casting

Enumeration cases also work with the type-casting patterns, using either is, which simply checks a value’s type, or as, which attempts to cast a value to the specified type:

struct Dog {
    var name: String
}

let items: [Any] = [Dog(name: "Rover"), 42, 99, "Hello", (0, 0)]

for item in items {
    switch item {
    case is Dog: print("Nice doggie")
    case 42 as Int: print("integer 42")
    case let i as Int: print("integer \(i)")
    case let s as String: print("string with value: \(s)")
    default: print("something else")
    }
}
// Nice doggie
// integer 42
// integer 99
// string with value: Hello
// something else

Case Conditions

In addition to pattern matching in switch statements, Swift allows you to use the case keyword to specify pattern matches in conditionals such as if and guard statements, as well as for-in and while loop logic. While Swift makes that easy to do, the syntax sometimes confuses people.

Case Conditions in Branches

In a switch, the keyword case is followed by the pattern you’re interested in matching, and then a colon. For example:

let someColor = "Red"

switch someColor {
case "Red":
    // do something here
    break
// ...
default:
    break
}

However, when you do pattern matching in a conditional, the case keyword introduces what looks like an assignment (though it’s not): the pattern, an equals sign, and the value to match against.

if case "Red" = someColor {
    // do something here
}

Now the example above may seem silly, because clearly you could simply have compared the strings directly, which would feel more natural syntactically:

if someColor == "Red" {
    // do something here
}

But suppose you’re interested in comparing enumeration values rather than strings:

let someGarment = Garment.shirt(size: "XL")

The enum type doesn’t implement Equatable, so you can’t directly compare values with the == operator. You could of course overload == for your enum type, but then you’d need to do that every time you declare a new enumeration if you wanted to use that approach consistently. What a hassle!
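To illustrate the hassle, here’s a sketch of what such a hand-rolled overload might look like (the Garment declaration is repeated so the example stands alone):

```swift
enum Garment {
    case tie
    case shirt(size: String)
    case pants(waist: Int, inseam: Int)
}

// A hand-rolled equality operator. Note the boilerplate you'd have to
// repeat, case by case, for every enum you declare.
func == (lhs: Garment, rhs: Garment) -> Bool {
    switch (lhs, rhs) {
    case (.tie, .tie):
        return true
    case let (.shirt(a), .shirt(b)):
        return a == b
    case let (.pants(w1, i1), .pants(w2, i2)):
        return w1 == w2 && i1 == i2
    default:
        return false
    }
}
```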

Instead, you can pattern match with an enumeration case:

let items = [
    Garment.shirt(size: "M"),
    Garment.shirt(size: "L"),
    Garment.shirt(size: "XL"),
    Garment.pants(waist: 32, inseam: 34)
]

for item in items {
    if case .shirt = item { print(item) }
}
// shirt: M
// shirt: L
// shirt: XL

Here we’re printing only those items that match the .shirt enumeration case. We could, of course, be more specific:

for item in items {
    if case .shirt("XL") = item { print(item) }
}
// shirt: XL

As with switch statements, you can use let to bind associated values:

for item in items {
    if case let .shirt(size) = item, size.contains("L") {
        print("shirt: \(size)")
    }
}
// shirt: L
// shirt: XL

Case Conditions in Loops

Note that you can use pattern matching with case directly in loop logic. For example, you could roughly approximate the two previous examples with the following, more streamlined code:

for case .shirt("XL") in items {
    print("shirt, size XL")
}
// shirt, size XL

for case let .shirt(size) in items where size.contains("L") {
    print("shirt, size \(size)")
}
// shirt, size L
// shirt, size XL

The great thing is that the latter examples are not only shorter, but arguably more expressive than their former counterparts. Here’s another example:

let items: [Garment] = [
    .shirt(size: "L"),
    .pants(waist: 35, inseam: 31),
    .pants(waist: 35, inseam: 34),
    .pants(waist: 35, inseam: 35)
]

for case let .pants(w, i) in items where w == 35 && 34...36 ~= i {
    print("pants, \(w) X \(i)")
}
// pants, 35 X 34
// pants, 35 X 35

Here the enumeration case matches .pants instances, filtering out shirts and ties. The where clause matches only pants with waist size 35 and inseam between 34 and 36, using the pattern-matching operator, ~=, to compare the inseam value to an integer range.
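As an aside, nothing stops you from using ~= directly in ordinary code; a quick sketch:

```swift
let inseam = 35

// Range ~= value performs the same containment test that a case
// pattern uses under the hood.
if 34...36 ~= inseam {
    print("inseam \(inseam) is in range")
}
// prints "inseam 35 is in range"
```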


Swift’s enumeration case pattern is surprisingly flexible, allowing you to use it creatively in combination with various other patterns, such as the tuple pattern and the value-binding pattern. Many of us haven’t experienced that kind of flexibility in the languages we regularly use. It can take some time to become aware of all the capabilities, and to remember to use them in the heat of battle.

But once you start getting in the habit, enumeration case and other Swift patterns will allow you to write more concise and expressive control logic. And that in turn should lead to code that is both easier to write, and — more importantly — easier to read.


Love Thy Mobile Platform

Posted by LeRoy Mattingly

Illustration by Anthony Mattingly

Mobile teams should have a passion for their platform — not just a passion for technology in general.

In today’s incredibly fast-paced world of mobile innovation, we need development teams that LOVE THEIR MOBILE PLATFORM. If you want to build killer mobile apps — that is, apps your users actually enjoy using — then this is paramount.

Or you can keep on looking for that silver bullet that promises it will cost less, that’s ‘write it once, run everywhere’, and that provides the same look and feel on every platform. The silver bullet approach will almost guarantee a vanilla app, or at least one that belongs on the ‘island of misfit toys’ on your user’s mobile device. Why spend all that money building an app that no one wants to use?

Resist the temptation. Instead, build and align mobile teams with passion for their platform — not just a passion for technology in general. If some of your developers walk around with iPhones, Apple Watches, and MacBook Pros, they probably belong on the iOS team. And if one of them sports an Apple sticker or an Apple shirt you might have a candidate for the iOS team lead.

Likewise if some of your developers walk around with Android devices, Android watches, and *nix laptops, they probably belong on the Android team. And if one of them has an Android Robot sticker or has ever played around with Google Glass you might have a candidate for the Android team lead.

If some of your developers walk around with Microsoft or Blackberry mobile devices, there’s little hope of them leaving the ‘dark side’. Just tell them to keep on playing it safe, and continue to enjoy working on desktop apps. Maybe some day the public will willingly adopt those mobile platforms because they actually like them.

Developers who have no mobile device preference at all belong on the web development team. Long live HTML5! Let the browser wars continue!

Observations from the Trenches

In my ten years of experience working with numerous enterprises that were building mobile apps, I’ve noticed the following real-life patterns about mobile teams:

Members who fail to embrace the platform:

  • Recreate foreign design patterns that are expensive to maintain — the ‘not invented here’ syndrome.
  • Can be ‘Debbie Downers’, adversely affecting the energy and excitement of your team.
  • Don’t understand or even care about the iOS platform’s Human Interface Guidelines (HIG) or the Android platform’s Material Design.
  • Don’t promote the mobile platform they work with.
  • Invest just enough effort to finish and move on to the next app in their career.

Members who evangelize the platform:

  • Leverage platform design patterns well, resulting in less code and easier to evolve solutions — they ‘use the force’.
  • Promote excitement about their platform, instilling confidence in their team.
  • Are ‘one’ with the HIG/Material Design, ensuring their apps have a natural fit and finish that feels good to their users.
  • Are always promoting the mobile platform they work with.
  • Can’t get enough of the new stuff from Apple or Google, and can’t wait for Google I/O or WWDC.

Love Conquers All

If you want to build killer mobile apps for your enterprise, I recommend the following:

  1. Align with your vendor. Vendor lock-in for rapidly evolving technology is actually a good thing! Think of Apple and Google as your strategic partners, helping you to easily integrate the innovations you’ve come to expect from them every year. They provide a path forward that will be clear the moment it is made available to you.
  2. Don’t try to outsmart your vendor’s frameworks and build your own. You will create technical debt that you’ll wish you didn’t have. Even if you think you are smarter than Apple or Google, limit your innovation, and stay in the lanes.
  3. If you have to build your own frameworks or use open source frameworks, then be prepared to rip them out when Apple or Google deliver similar solutions. They usually do.
  4. Relish change and adapt fast. Build systems that are easy to keep current. The shelf-life of mobile apps is short (three to five years) because of changing business requirements, and because users demand the latest innovations from Apple and Google.
  5. Build small, effective teams who love their respective platforms. They will keep you close to the vendor, and ultimately bring your enterprise more value than you can ever imagine. Passionate teams build the best apps.

The bitterness of poor quality remains long after the sweetness of low price is forgotten.
— Benjamin Franklin

Encoding and Decoding in Swift 4

Posted by Jonathan Lehr

New Capabilities Work With Native Swift Types

Structs, enums, and classes are all now able to take advantage of customizable, automatic encoding and decoding functionality in Swift 4.

Swift 4 adds new protocols, Encodable and Decodable (and for convenience, a typealias, Codable, defined as Encodable & Decodable) that define the behavior necessary for objects to encode and decode themselves (see SE-0167: Swift Encoders). This is a very welcome addition. Previously, support for these capabilities was provided only through the Foundation framework's NSCoding protocol, and was therefore limited in Swift codebases to subclasses of NSObject.

Swift 4 also adds classes that provide encoder and decoder implementations for JSON and property lists (JSONEncoder, JSONDecoder, PropertyListEncoder, and PropertyListDecoder). Codable support has been added to Foundation types such as Date, Data, and URL, and is directly supported by Swift standard library types such as String, Int, and Double, making coding easy to support for custom types. In many cases, Codable conformance requires nothing more than adding the protocol name to a custom type's protocol list.

Types that conform to the Codable protocol can encode/decode themselves automatically. For example, the following Swift struct

struct Person: Codable {
    var name: String
    var age: Int
}

requires no additional code in order to encode itself. Instead, you simply instantiate an encoder and pass the instance to its encode(_:) method.

// Given an instance of Person...
let fred = Person(name: "Fred", age: 30)

// Instantiate an encoder
let encoder = JSONEncoder()

// Pass the Person instance to the encoder's encode(_:) method
let data = try! encoder.encode(fred)

And that's it! Because all of Person's properties are Codable, nothing further was needed. The result is the following JSON:

{
  "name" : "Fred",
  "age" : 30
}
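Incidentally, JSONEncoder's output is compact by default; the indented form shown here requires opting in via the encoder's outputFormatting property. A minimal sketch:

```swift
import Foundation

struct Person: Codable {
    var name: String
    var age: Int
}

let encoder = JSONEncoder()
// Request human-readable, indented output instead of the
// default single-line JSON.
encoder.outputFormatting = .prettyPrinted

let data = try! encoder.encode(Person(name: "Fred", age: 30))
print(String(data: data, encoding: .utf8)!)
```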

Decoding from JSON or plist data is equally simple:

// Instantiate a decoder
let decoder = JSONDecoder()

// Pass the instance type and JSON data to the decoder's decode(_:from:) method.
let fredsClone = try! decoder.decode(Person.self, from: data)

The previous line constructs a new Person instance by calling init(from:), an initializer defined by the Decodable protocol whose implementation the compiler synthesizes automatically for conforming types.

Working with Nested Objects

Since Codable properties are handled automatically, you can nest objects of any custom type that conforms to Codable without writing any additional code. For example, suppose you want to modify Person to include a property of type Dog, and that one of Dog's properties is an enum named Breed, as shown below:

struct Dog: Codable {
    var name: String
    var breed: Breed

    // Note that the Breed enum adopts Codable, which is
    // automatically supported for enums that have raw values.
    enum Breed: String, Codable {
        case collie = "Collie"
        case beagle = "Beagle"
        case greatDane = "Great Dane"
    }
}

struct Person: Codable {
    var name: String
    var age: Int
    var dog: Dog
}

Because both Dog and Breed conform to Codable, you wouldn't need any additional code to encode and decode the entire graph of objects. The only difference would be how you initialize the objects:

// Given an instance of Person with a nested Dog...
let fred = Person(name: "Fred", age: 30, dog: Dog(name: "Spot", breed: .beagle))

The call to the encoder would be the same as in the previous example:

// Encode the object graph
let data = try! encoder.encode(fred)

The resulting JSON would be as follows:

{
  "name" : "Fred",
  "age" : 30,
  "dog" : {
    "name" : "Spot",
    "breed" : "Beagle"
  }
}

Decoding the JSON would reproduce the original object graph.

// Decode the object graph.
let fredsClone = try! decoder.decode(Person.self, from: data)

// Person(name: "Fred", age: 30, dog: Dog(name: "Spot", breed: Dog.Breed.beagle))

Since Swift collection types also conform to Codable, properties that contain collections can also be encoded and decoded automatically. The following example declares a pure (i.e., non-objc) Swift class with a property, people, of type Array<Person>.

class Group: Codable {
    var title: String
    var people: [Person]

    init(title: String, people: [Person]) {
        self.title = title
        self.people = people
    }
}

// Initializing a Group (fred is the instance from the previous example)
let sue = Person(name: "Sue", age: 27, dog: Dog(name: "Lassie", breed: .collie))
let group = Group(title: "Dog Lovers", people: [fred, sue])

// Encoding the group
let groupData = try! encoder.encode(group)

The resulting JSON would be:

{
  "title" : "Dog Lovers",
  "people" : [
    {
      "name" : "Fred",
      "age" : 30,
      "dog" : {
        "name" : "Spot",
        "breed" : "Beagle"
      }
    },
    {
      "name" : "Sue",
      "age" : 27,
      "dog" : {
        "name" : "Lassie",
        "breed" : "Collie"
      }
    }
  ]
}

Decoding would also be accomplished in the same way as the previous example:

// Decoding the group
let clonedGroup = try! decoder.decode(Group.self, from: groupData)

// Group(title: "Dog Lovers", people: [
//   Person(name: "Fred", age: 30, dog: Dog(name: "Spot", breed: Dog.Breed.beagle)),
//   Person(name: "Sue", age: 27, dog: Dog(name: "Lassie", breed: Dog.Breed.collie))])

Working with Dates and URLs

Date and URL conform to Codable in Swift 4, so encoding and decoding properties of those types is equally simple. For example, the following struct can be encoded the same way as the struct in the previous example:

struct BlogPost: Codable {
    let title: String
    let date: Date
    let baseUrl: URL = URL(string: "https://media.aboutobjects.com/blog")!
}

let blog = BlogPost(title: "Swift 4 Coding", date: Date())
let blogData = try! encoder.encode(blog)

The resulting JSON would be:

{
  "title" : "Swift 4 Coding",
  "date" : 520269022.67031199,
  "baseUrl" : "https://media.aboutobjects.com/blog"
}

If the format of the date value above is different from what your REST API provides, you can simply set the dateEncodingStrategy property of the encoder to switch to a different format, for example ISO 8601:

encoder.dateEncodingStrategy = .iso8601

let blogData = try! encoder.encode(blog)

The JSON would now be:

{
  "title" : "Swift 4 Coding",
  "date" : "2017-06-27T15:18:26Z",
  "baseUrl" : "https://media.aboutobjects.com/blog"
}

To decode an ISO 8601 date, set the decoder's date decoding strategy:

decoder.dateDecodingStrategy = .iso8601
let clonedBlog = try! decoder.decode(BlogPost.self, from: blogData)

// BlogPost(title: "Swift 4 Coding", 
//          date: 2017-06-27 15:22:04 +0000,
//          baseUrl: https://media.aboutobjects.com/blog)

Using Custom Coding Keys

The coding examples we've covered so far have assumed that the keys in the JSON or plist data exactly match the names of corresponding properties. But what if they don't? For example, suppose a REST API provides the following JSON to describe a book:

{
  "title": "War of the Worlds",
  "author": "H. G. Wells",
  "publication_year": 2012,
  "number_of_pages": 240
}

Rather than use awkward property names in your Swift types, you can map any or all of the JSON or plist element names to your preferred property names. To do so, add a nested enum named CodingKeys with a raw-value type of String, conforming to the CodingKey protocol:

struct Book: Codable {
    var title: String
    var author: String
    var year: Int
    var pageCount: Int

    // Provide explicit string values for property names that don't match JSON keys.
    enum CodingKeys: String, CodingKey {
        case title
        case author
        case year = "publication_year"
        case pageCount = "number_of_pages"
    }
}

Encoding and decoding will then automatically work with the provided mappings. The following example decodes a Book from a literal string of JSON text:

// Swift 4 multiline string literal:
let bookJsonText = """
{
  "title": "War of the Worlds",
  "author": "H. G. Wells",
  "publication_year": 2012,
  "number_of_pages": 240
}
"""
let bookData = bookJsonText.data(using: .utf8)!
let book = try! decoder.decode(Book.self, from: bookData)

// Book(title: "War of the Worlds", author: "H. G. Wells", year: 2012, pageCount: 240)

Manual (Custom) Encoding and Decoding

But what if the JSON you want to decode contains some attributes you don't need in your object model, or if your model objects have some additional properties you don't want to encode? Or perhaps you want to structure your object graph in a way that doesn't exactly match how the JSON or plist data is structured. To accommodate such situations, you can provide custom implementations of the protocol methods: encode(to encoder:) to control how your type encodes its values, and init(from decoder:) to control how it decodes its values.

Suppose, for example, a REST API provided JSON similar to the following:

let eBookJsonText = """
{
  "title" : "The Old Man and the Sea",
  "author" : "Ernest Hemingway",
  "fileSize" : 1048576,
  "releaseDate" : "1952-09-01T00:00:00Z",
  "averageUserRating" : 4.5,
  "userRatingCount" : 2416
}
"""

Let's assume you don't need the fileSize value in your model object, and that you'd like to model the last two elements as properties of a nested Rating object. In other words, you want your model types to look like this:

struct Rating {
    var average: Double
    var count: Int
}

struct EBook {
    var title: String
    var author: String
    var releaseDate: Date
    var rating: Rating

    enum CodingKeys: String, CodingKey {
        case title
        case author
        case releaseDate
        case averageUserRating
        case userRatingCount
    }
}

Simply adopting Codable isn't sufficient because of the mismatched data structure. To accommodate those differences, you simply provide custom implementations of the protocol. For example, to decode EBook objects from JSON, write a custom implementation of the Decodable protocol.

The implementation below first asks the decoder for a KeyedDecodingContainer object, keyed by the EBook type's coding keys. It then calls the decoding container's decode(_:forKey:) method to decode individual values as needed.

extension EBook: Decodable {
    init(from decoder: Decoder) throws {
        let values = try decoder.container(keyedBy: CodingKeys.self)
        title = try values.decode(String.self, forKey: .title)
        author = try values.decode(String.self, forKey: .author)
        releaseDate = try values.decode(Date.self, forKey: .releaseDate)
        let average = try values.decode(Double.self, forKey: CodingKeys.averageUserRating)
        let count = try values.decode(Int.self, forKey: CodingKeys.userRatingCount)
        rating = Rating(average: average, count: count)
    }
}

Note that implementations of init(from:) are free to simply ignore any elements they're not interested in. They can also use decoded values in arbitrary ways. In the above example, the values of the averageUserRating and userRatingCount elements are used to initialize a Rating instance, which is then used to populate the rating property.

Similarly, you can make the EBook type encodable by implementing Encodable, as shown below:

extension EBook: Encodable {
    func encode(to encoder: Encoder) throws {
        var values = encoder.container(keyedBy: CodingKeys.self)
        try values.encode(title, forKey: .title)
        try values.encode(author, forKey: .author)
        try values.encode(releaseDate, forKey: .releaseDate)
        try values.encode(rating.average, forKey: CodingKeys.averageUserRating)
        try values.encode(rating.count, forKey: CodingKeys.userRatingCount)
    }
}

There are additional capabilities for enhanced customization. For example, you can populate an encoder's or decoder's userInfo dictionary with custom metadata that your implementation can use as it sees fit. There's also special handling provided for class hierarchies.
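To give a flavor of the userInfo mechanism, here's a minimal sketch; the key name below is made up for illustration:

```swift
import Foundation

// A hypothetical key for passing an API version to coding code.
let versionKey = CodingUserInfoKey(rawValue: "com.example.apiVersion")!

let encoder = JSONEncoder()
// Stash arbitrary metadata in the encoder's userInfo dictionary.
encoder.userInfo[versionKey] = 2

// A custom encode(to:) implementation could then inspect
// encoder.userInfo[versionKey] and adjust its output accordingly.
```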

Once you've had a chance to get familiar with Swift 4's new coding features, I think you'll find them remarkably easy to use. They should help you streamline, or avoid writing in the first place, a lot of otherwise tedious code.


When Faulty Software Turns Deadly

Posted by Rob Ludwick

Last year my wife and I returned from a week-long conference to our home in Boise and then I immediately returned to Kansas City — practically hopping from one plane to the next — living the dream of a software contractor.

My wife called me the next day and told me that the fridge had stopped working. The buttons and display were inoperative and the fridge was warm. Literally, she had to 'reboot' it by sliding it out to get to the outlet, unplugging it and plugging it back in.

And magically the fridge started working again.

As it turns out, the refrigerator had failed to cool anything for the previous week while we were gone. The cause was neither mechanical nor electrical. No, the problem was with software.

So as I listened to my wife telling me all about the horrors growing in the crisper drawer while I sat safe and sound in a hotel room 2,000 miles away, two thoughts came to mind. The first was how lucky I was not to be there to clean up that unholy mess. Because if you’ve ever had this happen to you, I don’t have to tell you. You know.

And the second was, when did we decide that a software failure was okay for a refrigerator? I know it may be hard to believe, but there was a time when software did not exist in a fridge. Refrigeration was purely mechanical at one point. A motor, a compressor, and a temperature sensitive spring — that’s it.

But over time, things got more complicated. We added electronics, and microprocessors, and software, under the belief that software somehow magically adds value to the consumer’s experience.

And it turns out that it does when the software works. But when it doesn’t work, the value added is the large furry black mold growing in the jar of mayonnaise. Of all the ways a fridge could die, I never thought it would be software-related.

It’s one thing when it’s just a fridge. It’s an entirely different story when people are injured or killed.


The Therac–25 was a radiation therapy machine built in 1982, a successor to the Therac–20.

In the Therac–25, the engineers decided everything should be managed by computer, a decision with two major flaws. First, they replaced the hardware interlocking safety measures with software interlocks. And second, they reused the software from the Therac–20, the previous model.

Malfunction 54 meant there was either an overdose, or underdose, and the software couldn’t figure out which.

As it turned out, the software interlocks could fail because of race conditions, exposing bugs in the old Therac–20 software that the hardware interlocks would have prevented. The turntable could get into an unknown state, and the electron beam could fire in x-ray mode without the x-ray target in place, giving patients massive doses of radiation.

In one case, a patient was diagnosed with skin cancer on the side of his face. The technician entered a prescription into the Therac–25 for a low dose of electron radiation and enabled the beam. The patient screamed as the machine made a loud sound that was heard over the intercom. Further, the Therac–25 showed an error message, 'Malfunction 54,' and then stopped.

The tech ran into the room, asked what had happened, and the patient said that he saw a bright flash of light and his face felt like it was on fire. When the tech reported the error message to the manufacturer, the manufacturer said that Malfunction 54 meant there was either an overdose, or underdose, and the software couldn’t figure out which.

It took some effort by the technician to reproduce the error condition, but if the prescription information was entered rapidly enough, the Malfunction 54 error condition could be reproduced on demand. When this was reported to the manufacturer, the manufacturer could eventually reproduce it, and measured the center of the beam to be 25,000 rads — about 2 orders of magnitude higher than the prescription.

The patient died about 3 weeks after the treatment. Autopsy records showed the patient to have had significant radiation damage to the right front temporal lobe and to the brain stem.

Overall, four people died and two more were seriously injured in six incidents from 1986 to 1987.

March 2002, Fort Drum, NY

In 2002, I worked for a large corporation named Raytheon, and personally, I found it a fascinating place to work at the time. It was just six months after the 9/11 attacks, and the U.S. was on a war footing. It was an anxious time, and the vast majority of U.S. citizens were supportive of the president’s use of military force.

In Fort Drum that fateful day, the field artillery unit was training for war.

At that time, especially right after 9/11, there was a clear mission. Everyone knew what it was, and everyone was marching to the same beat. On the morning of September 11th, we all knew war was coming, and the program I was working on, AFATDS, would be used for the first time in a major combat scenario.

AFATDS was developed to do two things: increase the effectiveness of the artillery force, and prevent friendly fire. Friendly fire is one of the worst aspects of war. It’s one thing to have the enemy shooting at you; it’s quite another to have both sides shooting at you.

In Fort Drum that fateful day, the field artillery unit was training for war, and the commanding officer was being demanding of his soldiers. He wanted a round fired immediately. The AFATDS operator chose a target on the AFATDS system. A screen came up with some details about where the target was. The commanding officer was getting impatient, and he wanted his artillery to fire now. The operator clicked on a window, confirming the details about the target, and the target was sent to the artillery gun.

A minute or so later, the round landed on a mess tent. One soldier was killed immediately. Another died of his injuries several weeks later. An additional 13 were injured.

In 2003, the Army cleared Raytheon and the AFATDS program of fault, and so I continued working there, blissfully ignorant of the true underlying details. It took until 2008 for the Fort Wayne Journal Gazette to publish the details of what actually happened.

As it turned out, the target altitude in the artillery system's window was set to 0 meters, which was the default if an elevation wasn’t provided with the target. But the altitude of the target was in fact 200 meters. For a ballistic trajectory, this meant the point of impact could be off by more than 1,000 meters.

AFATDS had opened a form for the operator, but instead of requiring the operator to input the elevation, the altitude field was pre-filled with 0. But 0 is a valid altitude, so if the operator clicked the OK button, the software accepted the value.

It might have felt better had it been the first time this issue occurred — but it wasn’t. In 1998, four years earlier, the same thing happened at Fort Bragg, except they caught the problem before the rounds were fired.

Further, within a week of the accident, the bug was fixed.

After the dust raised by the newspaper article settled, I asked a test engineer about the design decision to use 0 for the altitude, and he said, “When the operator sees that zero, that tells him to enter an altitude.” I was stunned, because that was completely the wrong UI experience for the operator. Put simply, if the program needs something from the user, it needs to ask for it.
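That lesson can be made concrete in code. The sketch below (hypothetical names, not actual AFATDS code) models a not-yet-entered elevation as an Optional, so the software has to ask for a value instead of silently defaulting to a valid-looking 0:

```swift
// Hypothetical sketch: model "no elevation entered" explicitly as nil,
// so a mission can't proceed until a real value has been supplied.
struct TargetInput {
    var elevationMeters: Int?   // nil means the operator hasn't entered a value
}

enum FireMissionError: Error {
    case missingElevation
}

// Refuses to proceed until an elevation has been explicitly supplied;
// there is no silent default.
func validatedElevation(for input: TargetInput) throws -> Int {
    guard let elevation = input.elevationMeters else {
        throw FireMissionError.missingElevation
    }
    return elevation
}
```

With this shape, a pre-filled 0 simply can't happen: the only way to get an elevation out is to have put one in.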

Lessons Learned

First, we need to realize when a software failure creates an unsafe condition. And in those cases, software development should shift from a trial and error style of development to using strong mathematical proofs of correctness — formal verification. Unfortunately, such systems are complex and expensive to implement, and they can’t be implemented overnight. But they do provide hard proof of a program’s correctness.

For every benefit that software gives us, there is also usually a failure case.

Second, we need to understand that the human interface is a layer of communication between the user and the programmer. If the communication is not clear, then the confusion caused by the interface can lead to an unsafe state. While these kinds of issues may be easier to fix, they require careful testing to make sure the people that use them can use them accurately, especially when under stress.

And for everything, we need to understand that software is not a magic bullet. For every benefit that software gives us, there is also usually a failure case.

Lastly, and unfortunately, with software we more often learn from our failures than from our successes. And that will continue to be true for the foreseeable future.

| Comments

The Tuple Pattern

Posted by Jonathan Lehr

In Swift, a tuple is a parenthesized list of two or more elements of any type. For example, (0.5, "foo") is a tuple where the first element is a Double, and the second is a String. One of the key benefits tuples provide is that they make it possible to write a method or function that returns more than one value.

A common use case for this is enumerating a dictionary in a for-in loop. The loop first obtains an iterator object from the dictionary, and then calls the iterator's next() method each time through the loop to obtain the current key and value as a tuple. Here's an example:

let dogInfo: [String: Any] = ["name": "Fido", "breed": "Lab", "age": 5]

for (key, value) in dogInfo {
    print("\(key): \(value)")
}
// Prints
// name: Fido
// breed: Lab
// age: 5

Understanding the Tuple Pattern

Tuples can be used in several different contexts, one being an expression value. So, for example, you can use a tuple as an initializer:

let temperature = (72, "Fahrenheit")
print(temperature)   // Prints "(72, "Fahrenheit")"

You can then access individual elements by position:

print(temperature.0) // Prints "72"
print(temperature.1) // Prints "Fahrenheit"

You can also use a tuple in a type declaration:

var temperature: (Double, String)

Intuitively then, you might think that tuple is a type, but it's not --- it's a pattern. So, what's a pattern? Well, here's how pattern is defined in The Swift Programming Language (3.1 Edition):

A pattern represents the structure of a single value or a composite value. For example, the structure of a tuple (1, 2) is a comma-separated list of two elements. Because patterns represent the structure of a value rather than any one particular value, you can match them with a variety of values. For instance, the pattern (x, y) matches the tuple (1, 2) and any other two-element tuple. In addition to matching a pattern with a value, you can extract part or all of a composite value and bind each part to a constant or variable name.

Okay, that's a mouthful, so perhaps walking through some examples can add a bit of clarity. To begin with, a tuple doesn't define an object with properties or behaviors. Instead, a tuple describes the internal structure of a single, compound value. For example, the following line binds a variable, item, to a tuple of data types:

var item: (Double, Int)

So item's data type is an ordered list of two types. This means that when we assign a value to item, it must be a grouping of two individual values, the first a Double, and the second an Int:

item  = (19.99, 2)

Binding Individual Elements

You can use the tuple pattern to define variables and constants. The following line defines two constants, x and y, whose types are both inferred to be Int:

let (x, y) = (10, 20)

which is conceptually similar to:

let x = 10, y = 20

However, in the former case, the tuple pattern has the effect of decomposing the individual values of the tuple on the right, before binding them to the constants on the left.

Note that the same pattern (x, y), can be used to define constants that differ not only by value, but also by type. In the following definition, x is a Double, and y is a String.

let (x, y) = (0.5, "foo")

Accessing Tuple Elements

A tuple's elements can be referenced by position:

let item  = (19.99, 2)

print("price: \(item.0), quantity: \(item.1)")

// Prints "price: 19.99, quantity: 2"

In addition, you can label a tuple's elements using the same syntax you'd use to define a function's parameter names (tuples are intentionally similar to parameter lists), and then access individual values by name:

let item = (price: 19.99, quantity: 2)

print("price: \(item.price), quantity: \(item.quantity)")

// Prints "price: 19.99, quantity: 2"

Again, these identifiers aren't analogous to object properties or keys in a dictionary; they simply provide the compiler with labels for individual components of a compound value.
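A short sketch (assuming nothing beyond the standard library) illustrates that the labels are purely compile-time:

```swift
let labeledItem = (price: 19.99, quantity: 2)

// The underlying type is still (Double, Int), so the labeled tuple can be
// assigned to an unlabeled variable of the same element types.
let unlabeledItem: (Double, Int) = labeledItem

// Positional access continues to work alongside the labels.
let samePrice = labeledItem.price == unlabeledItem.0   // true
```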

Another way to access the elements of a tuple of values is to bind them to individual variables or constants. For example, suppose we're working with a class that has a computed property that returns a tuple:

// Defines a computed property with two named elements,
// 'price' and 'quantity', that returns (Double, Int).
var defaultItem: (price: Double, quantity: Int) {
    return (19.99, 2)
}

Then in the body of an instance method, we could use the tuple pattern to define a pair of let constants, and bind the values of the individual elements of the tuple as follows:

// Defines individual let constants, 'amount' and 'number'.
// Compiler infers their types from the type of 'defaultItem`.
let (amount, number) = defaultItem

print("price: \(amount), quantity: \(number)")
// Prints "price: 19.99, quantity: 2"

The Enumeration Case and Value Binding Patterns

The Swift case keyword makes it possible to combine two additional Swift patterns: enumeration case and value-binding. Here's what the documentation says about them:

Enumeration Case Pattern

An enumeration case pattern matches a case of an existing enumeration type. Enumeration case patterns appear in switch statement case labels and in the case conditions of if, while, guard, and for-in statements.

Value-Binding Pattern

A value-binding pattern binds matched values to variable or constant names. Value-binding patterns that bind a matched value to the name of a constant begin with the let keyword; those that bind to the name of variable begin with the var keyword.

The code in the following example loops over an array of tuples, using an if case construct to test whether the quantity of the current item is 2, and if so, binding the tuple's price (first element) to amount.

// An array whose elements are tuples representing price and quantity
let items = [(12.99, 2), (14.95, 3), (19.99, 2)]
for item in items {
    // Binds price value to 'amount'
    // Enters the 'if' statement's body if quantity matches the pattern '2'
    if case (let amount, 2) = item {
        print(amount)
    }
}
// Prints
// 12.99
// 19.99

// Note that the let keyword can be moved outside the tuple
for item in items {
    if case let (amount, 2) = item {
        print(amount)
    }
}

You may be more familiar with the use of the case keyword in switch statements. The following example is similar to the preceding one in that it shows combined use of the enumeration case and value-binding patterns, but this time in the context of a switch statement:

let discount1 = 10.0
let discount2 = 14.0

let items = [("Shirt", 44.99), ("Shoes", 89.99), ("Jeans", 64.99)]

for item in items {
    // Applies a $10 discount for shirts and a $14 discount for shoes by
    // pattern matching on the string value of the first element
    switch item {
    case let ("Shirt", p):  print("Shirt: $\(p - discount1)")
    case let ("Shoes", p):  print("Shoes: $\(p - discount2)")
    case let (itemName, p): print("\(itemName): $\(p)")
    }
}
// Prints
// Shirt: $34.99
// Shoes: $75.99
// Jeans: $64.99

Back to the Future

Let's revisit the example from the beginning of this post, which led off by defining a dictionary as follows:

let dogInfo: [String: Any] = ["name": "Fido", "breed": "Lab", "age": 5]

The example then proceeded to enumerate the dictionary's keys and values with a for-in loop:

for (key, value) in dogInfo {
    print("\(key): \(value)")
}

From what we've already learned about the enumeration case pattern, we can now correctly infer that there's an implicit case-let after the keyword for:

// Valid Swift, but `case let` will be inferred by the compiler if omitted.
for case let (key, value) in dogInfo {
    print("\(key): \(value)")
}

The earlier example also noted that our for-in loop obtains an iterator (an instance of DictionaryIterator) from the dictionary, and calls the iterator's next() method each time through the loop to obtain a tuple of the current key and value. Here's the declaration of the next() method from the Swift Library documentation:

mutating func next() -> (key: Key, value: Value)?

As you can see, next() returns an optional, generically typed tuple with named elements. (Note: we'll explore optional values and the Optional type in detail in a future blog post.) So we now know that the for-in loop could also have been written like so (though generally, the earlier style is preferable):

for item in dogInfo {
    print("\(item.key): \(item.value)")
}
// Prints
// name: Fido
// breed: Lab
// age: 5

Here we simply capture the current tuple in item, and then access item's elements by name in the argument to the print() function.
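To make those mechanics fully explicit, here's a sketch that drives the iterator by hand, doing what the for-in loop does for us:

```swift
let dogInfo: [String: Any] = ["name": "Fido", "breed": "Lab", "age": 5]

// Obtain the dictionary's iterator, then call next() until it returns nil,
// destructuring each (key, value) tuple with an optional binding.
var iterator = dogInfo.makeIterator()
while let (key, value) = iterator.next() {
    print("\(key): \(value)")
}
// Prints the same key-value pairs as the for-in loop (in unspecified order)
```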


Some of the concepts embodied in Swift may seem unintuitive to those of us coming from primarily object-oriented language backgrounds. A solid understanding of Swift patterns can help a lot with comprehension when reading non-trivial code, and can be indispensable when struggling to figure out how to make use of various features of the language to implement details of the apps we're writing.

| Comments

Modeling JSON Mappings – Part 1

Posted by Jonathan Lehr

iOS apps commonly store and retrieve JSON data via REST APIs. Consequently, many development teams initially spend some time formulating an approach for decoding model objects from JSON, and (usually) vice versa. And due diligence requires sifting through a substantial number of frameworks, both in Objective-C and Swift, that provide varying degrees of support for mapping values between these two different representations. Unfortunately, even teams that adopt the best of these frameworks still tend to experience some headaches dealing with the resulting mappings.

Back to the Future

I've been a fan of object-relational mapping (ORM) frameworks since cutting my teeth on NeXT's Enterprise Objects Framework (EOF) in the mid-90s. ORMs are designed to deal with a host of issues that arise when mapping values between relational database tables and object models. Because JSON represents relationships more or less the same way model objects do --- hierarchically --- mapping JSON to model objects is inherently much simpler. Still, there are several capabilities ORMs and JSON mapping frameworks have in common:

Mandatory Capabilities

  • Map a given JSON dictionary to a specific class
    • Construct a model object on decode
    • Construct a dictionary on encode
  • Map JSON data values to object properties
    • Associate JSON element names with object property names
    • Allow specification of value transformations, and automatically apply them during encode/decode
    • Populate model object properties with JSON values on decode
    • Populate dictionary with model object property values on encode
  • Model to-one, and to-many relationships
    • Store type information for related objects
    • Construct child objects and arrays of child objects on decode
    • Construct child dictionaries and arrays of child dictionaries on encode

Nice to Haves

  • Flattened attributes
  • Inverse relationships

But the headaches developers run into with JSON mapping frameworks aren't caused by a lack of these capabilities; they arise because the mappings have to be baked right into the code of each model class. As a consequence, the data model is scattered across classes, making it harder to visualize and harder to maintain.
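To see why that's painful, here's a hypothetical hand-rolled model class of the sort described above (illustrative only; the names and conversions are assumptions, not code from any particular framework):

```swift
// Hypothetical hand-rolled mappings: the JSON key names and value
// conversions are baked directly into the model class.
class HandMappedAuthor {
    var authorId: Int = 0
    var firstName: String?
    var lastName: String?

    init(json: [String: Any]) {
        // Each property's JSON key is hard-coded at the point of decoding...
        authorId = Int(json["author_id"] as? String ?? "") ?? 0
        firstName = json["firstName"] as? String
        lastName = json["lastName"] as? String
    }

    // ...and duplicated again for encoding.
    var dictionaryRepresentation: [String: Any] {
        return ["author_id": String(authorId),
                "firstName": firstName ?? "",
                "lastName": lastName ?? ""]
    }
}
```

Every class in the data model repeats this pattern, which is exactly the scattering problem that externalized mappings are meant to solve.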

One of the killer features of EOF and its successor, Core Data, is the externalization of mapping metadata. Core Data mappings are coalesced into a single XML document that the framework works with at runtime. This has several advantages:

  • Core Data directly supports model versioning, allowing earlier versions of a given data model to be accessed at runtime, making it easier for apps to handle API version differences.
  • External tools can leverage the metadata to, for example, generate base classes (via Xcode's built-in class generation facilities, as well as third-party tools such as mogenerator)
  • An entire data model can be version controlled as a single unit, making differences between versions more apparent.
  • The model can be presented and edited in a visual tool such as Xcode's Model Editor

Given the potential advantages, the team here at About Objects wondered if it would be possible to a) use a Core Data model to store JSON mapping metadata (pretty much a no-brainer), and b) use Core Data models in projects that don't use Core Data for storage. Okay, we actually did more than 'wonder'; we wrote a framework, Modelmatic. Luckily, it turned out the answers were 'yes' and 'yes'.

The Modelmatic Framework

Modelmatic allows you to specify custom mappings between key-value pairs in JSON dictionaries, and corresponding properties of your model objects. For example, suppose you're working with JSON similar to the following (from the Modelmatic example app):

{
    "version" : "2",
    "batchSize" : "10",
    "authors" : [
        {
            "firstName" : "William",
            "lastName" : "Shakespeare",
            "born" : "1564-04-01",
            "author_id" : "101",
            "imageURL" : "https:\/\/www.foo.com?id=xyz123",
            "books" : [
                {
                    "tags" : "drama,fantasy",
                    "title" : "The Tempest",
                    "year" : "2013",
                    "book_id" : "3001"
                }
            ]
        }
    ]
}

Step 1: Defining the Model

To use Modelmatic, you start by modeling your data using Xcode's Core Data Model Editor. Don't worry, you're not going to need to use other aspects of Core Data -- just the data model, and just a subset of its capabilities.

Step 2: Create Swift Classes

If your model is complex, and/or changes frequently, consider using mogenerator to generate model classes (and update them as needed) from the metadata you specified in the model editor. Otherwise, it's simplest to just create the classes you need from scratch. Here's an example:

import Foundation
import Modelmatic

@objc (MDLAuthor)
class Author: ModelObject {
    // Name of the Core Data entity
    static let entityName = "Author"

    // Mapped to 'author_id' in the corresponding attribute's User Info dictionary
    var authorId: NSNumber!
    var firstName: String?
    var lastName: String?
    var dateOfBirth: NSDate?
    var imageURL: NSURL?

    // Modeled relationship to 'Book' entity
    var books: [Book]?
}

Key points:

  • import Modelmatic.
  • Subclass ModelObject.
  • Use @objc() to avoid potential namespacing issues.
  • Define a static let constant named entityName to specify the name of the associated entity in the Core Data model file.
  • authorId is mapped to author_id in the model (see the attribute definition's User Info dictionary).
  • Modelmatic automatically maps all the other properties, including the nested books property.

Customizing Mappings

Modelmatic automatically matches names of properties you specify as attributes or relationships in your Core Data model to corresponding keys in the JSON dictionary. For example, given an attribute named firstName, Modelmatic will try to use firstName as a key in the JSON dictionary, and map it to a firstName property in Author.

However, the framework also allows you to specify custom mappings as needed. For instance, the Author class has the following property:

var authorId: NSNumber!

A custom mapping is provided in the model file, binding the authorId attribute to the JSON key path author_id.

To add a custom mapping, select an attribute or relationship in the model editor, and add an entry to its User Info dictionary. The key should be jsonKeyPath, and the value should be the key or key path (dot-separated property path) used in the JSON dictionary. During encoding and decoding, Modelmatic will automatically map between your object's property, as defined by its attribute or relationship name, and the custom key path you specified to access JSON values.

Defining Relationships

Core Data allows you to define to-one and to-many relationships between entities. Modelmatic will automatically create and populate nested objects for which you've defined relationships. For instance, the Modelmatic example app defines a to-many relationship from the Author entity to the Book entity. To create an Author instance along with its nested array of books, you simply initialize an Author with a JSON dictionary as follows:

let author = Author(dictionary: dict, entity: entity)

For example, given the following JSON, the previous call would create and populate an instance of Author containing an array of two Book objects, with their author properties set to point back to the Author instance:

{
    "author_id" : "106",
    "firstName" : "Mark",
    "lastName" : "Twain",
    "books" : [
        {
            "book_id" : "3501",
            "title" : "A Connecticut Yankee in King Arthur's Court",
            "year" : "2014"
        },
        {
            "book_id" : "3502",
            "title" : "The Prince and the Pauper",
            "year" : "2015"
        }
    ]
}

Property Types

Modelmatic uses methods defined in the NSKeyValueCoding (KVC) protocol to set model object property values. KVC can set properties of any Objective-C type, but has limited ability to deal with pure Swift types, particularly struct and enum types. However, bridged Standard Library types such as String, Array, and Dictionary, as well as scalar types such as Int, Double, Bool, etc., are handled automatically by KVC, with one notable exception: Swift scalars wrapped in Optionals. For example, KVC would be unable to set the following property:

var rating: Int?

If your ModelObject subclass uses a Swift type that KVC can't handle directly, you can provide a computed property of the same name, prefixed with kvc_, to supply your own custom handling. For example, to make the rating property work with Modelmatic, add the following:

var kvc_rating: Int {
    get { return rating ?? 0 }
    set { rating = Optional(newValue) }
}

If Modelmatic is unable to set a property directly (in this case the rating property), it will automatically call the kvc_ prefixed variant (kvc_rating, in this example).

Specifying Value Transformations

In your Core Data model file, you can specify a property type as Transformable. If you do so, you can then provide the name of a custom transformer. For example, the Author class in the Modelmatic example app has a transformable property, dateOfBirth, of type NSDate. Modelmatic automatically uses an instance of the specified NSValueTransformer subclass to transform the value when accessing the property.

Here's the code of the Example app's DateTransformer class in its entirety:

import Foundation

@objc (MDLDateTransformer)
class DateTransformer: NSValueTransformer {
    static let transformerName = "Date"

    override class func transformedValueClass() -> AnyClass { return NSString.self }
    override class func allowsReverseTransformation() -> Bool { return true }

    override func transformedValue(value: AnyObject?) -> AnyObject? {
        guard let date = value as? NSDate else { return nil }
        return serializedDateFormatter.stringFromDate(date)
    }

    override func reverseTransformedValue(value: AnyObject?) -> AnyObject? {
        guard let stringVal = value as? String else { return nil }
        return serializedDateFormatter.dateFromString(stringVal)
    }
}

private let serializedDateFormatter: NSDateFormatter = {
    let formatter = NSDateFormatter()
    formatter.dateFormat = "yyyy-MM-dd"
    return formatter
}()

The date transformer is registered by the following line of code in the Example app's AuthorObjectStore class:

NSValueTransformer.setValueTransformer(DateTransformer(), forName: String(DateTransformer.transformerName))

Step 3: Loading the Model

Somewhere in your app (you only need to do this once during the app's lifecycle), do something like the following to load the Core Data model file into memory:

let modelName = "Authors"

guard let modelURL = NSBundle(forClass: self.dynamicType).URLForResource(modelName, withExtension: "momd"),
    model = NSManagedObjectModel(contentsOfURL: modelURL) else {
        print("Unable to load model \(modelName)")
        return
}
You'll most likely want to store the reference to the model in a class property.

Step 4: Encoding and Decoding Model Objects

Once you've obtained JSON data, you can deserialize it as follows (Note that deserializeJson wraps a call to NSJSONSerialization):

guard let data = data, dict = try? data.deserializeJson() else { return }

To construct an instance of your model class, simply provide the dictionary of deserialized values, along with the entity description:

let author = Author(dictionary: dict, entity: entity)

This will construct and populate an instance of Author, as well as any nested objects for which you defined relationships in the model (and for which the JSON contains data). You then simply work with your model objects. Whenever you want to serialize an object or group of objects, simply do as follows:

// Encode the author
let authorDict = author.dictionaryRepresentation

// Serialize data
if let data = try? authorDict.serializeAsJson(pretty: true) {
    // Do something with the data...
}

Modelmatic provides methods to make it easier to programmatically set objects for properties that model to-one or to-many relationships. While it's easy enough to remove objects (simply set to-one properties to nil, or use array methods to remove objects from arrays), setting or adding objects to these properties can be slightly more involved. That's because Modelmatic automatically sets property values for any inverse relationships you define in your model, so that child objects will have references to their parents.

While inverse relationships aren't required, they're often convenient. Just be sure to use the weak lifetime qualifier for references to parent objects.

Even if you're not currently using inverse relationships, it's a good idea to use the convenience methods provided by ModelObject for modifying relationship values. That way, if you change your mind later, you won't need to change your code to add support for setting parent references.
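The weak back-reference advice can be sketched like so (simplified stand-in classes for illustration, not actual Modelmatic types):

```swift
// A minimal sketch: the parent holds its children strongly, while each
// child refers back to its parent weakly, avoiding a retain cycle.
class ParentAuthor {
    var books: [ChildBook] = []
}

class ChildBook {
    weak var author: ParentAuthor?   // weak reference back to the parent
}

let parent = ParentAuthor()
let child = ChildBook()
parent.books.append(child)
child.author = parent
```

Because the child's reference is weak, deallocating the parent breaks the cycle automatically.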

To-Many Relationships

ModelObject provides two methods for modifying to-many relationships, as shown in the following examples:

// Adding an object to a to-many relationship
let author = Author(dictionary: authorDict, entity: authorEntity)
let book = Book(dictionary: bookDict, entity: bookEntity)
do {
    // Adds a book to the author's 'books' array, and sets the book's 'author' property
    try author.add(modelObject: book, forKey: "books")
} catch MappingError.unknownRelationship(let name) {
    print("Unknown relationship \(name)")
}
// Adding an array of objects to a to-many relationship
let books = [Book(dictionary: bookDict2, entity: bookEntity),
             Book(dictionary: bookDict3, entity: bookEntity)]
do {
    // Adds two books to the author's 'books' array, setting each book's 'author' property
    try author.add(modelObject: books, forKey: "books")
} catch MappingError.unknownRelationship(let name) {
    print("Unknown relationship \(name)")
}

To-One Relationships

An additional method is provided for setting the value of a to-one relationship, as shown here:

// Set the value of a to-one relationship
let book = Book(dictionary: bookDict1, entity: bookEntity)
let pricing = Pricing(dictionary: ["retailPrice": expectedPrice], entity: pricingEntity)
do {
    // Sets the book's 'pricing' property, and sets the pricing's 'book' property
    try book.set(modelObject: pricing, forKey: "pricing")
} catch MappingError.unknownRelationship(let name) {
    print("Unknown relationship \(name)")
}

Next Installment

In Modeling JSON Mappings -- Part 2, we'll take a look under the hood to see how the Modelmatic framework leverages the data model to automate encoding and decoding.

| Comments

Hamburgers Belong on the Grill, Not on Your iPhone

Posted by Anthony Mattingly

How many projects have you worked on where the client wants to throw in every feature and action they can think of? It often seems they want their app to have everything, plus the kitchen sink. And once they've specified some huge set of features, how do they want them organized? All too often, it's via the infamous hamburger menu.

This is, unfortunately, a common pitfall for iPhone apps. First, an iPhone app isn't a responsive website. Responsive web designers love to use hamburger menus. However, web designers have different constraints on how they must organize their content, and the nature of the content is generally different than that of a mobile app.

An iPhone app is a tool. Every action and task should be so easy that users don't have to think about how to perform them. That way users can just focus on the tasks they're currently trying to carry out.

Second, iPhones are not Android phones. Some folks prefer Android, others love iOS. While both are successful platforms, I personally lean towards the Apple side. I find the iOS platform very efficient and effortless to use. Android may have a lot of bells and whistles, and give you the freedom to do things that iOS doesn't, but more isn't always better. Usability goes a long way, and often trumps other considerations. "Too much" is a manifestation of complexity. Apple does a tremendous job of paring away the "too much" to keep the focus on the essence.

On the left, a standard cable TV remote. On the right, an AppleTV remote.


For example, your standard cable TV remote has a zillion capabilities, yet how many buttons do you actually find yourself using? Now look at an AppleTV remote. It delivers all its available features via six visible buttons and a trackpad. Compare that to a typical cable TV remote, sporting nearly ten times as many controls. Long story short, throwing in lots of functionality and grouping all of it in one place is not a good solution.

Is It Just Me, Or Do You Smell Hamburgers?

Normally, I love the smell of hamburgers, but not when it comes to iPhone apps. Hamburger menus are notorious for being overloaded and unintuitive. Too often, a hamburger menu serves as a catch-all for uncategorized requirements that aren't tied to an app's core purpose. In fact, the use of hamburger menus can become a crutch -- a way to avoid carefully thinking through an app's information architecture, and skip the hard work of designing a solution. It's the creative equivalent of a shoulder shrug.

Example of how certain apps (we won't name names) fill hamburger menus with extensive functionality. When closed, all of that functionality is hidden from the user.


By encouraging an unlimited number of options to be thrown in, hamburger menus tend to result in user interfaces that require more thought and attention from the user. In this scenario, users have to read and scroll through all of the options in the menu to find a given action, and then choose one that best describes the task they want to perform. In addition, when the menu is closed, all of the app's features are hidden, leaving users without any visual indication of the app's range of capabilities.

Okay, okay. We've all read the blogs that say how much hamburger menus suck, but not many of them talk about alternatives.

The Alternative

First, to keep a mobile app as simple as it needs to be, the organization of features and tasks must be thoroughly analyzed, prioritized and mapped out. Limiting an app's features strictly to those required to provide a coherent and meaningful user experience is essential to achieving that Apple-like simplicity. Fewer options yields faster and easier decisions for users.

To begin, classify and categorize app features and tasks into meaningful groups to provide a context in which they're more understandable. Just remember to keep the number of these groups small. You don't want your app to suffer from TV Remote Syndrome!

On the left is an example of hamburger menu information architecture and how it defines the features as peers -- that is, all on the same level. In comparison, reducing and reorganizing the information architecture, as shown on the right, puts features into context with fewer groupings to help users quickly identify tasks while still comprehending app capability. Having fewer items to process allows users to make decisions faster.


A tab bar is often a better solution than a hamburger menu for a couple of reasons. First, using a tab bar forces you to keep your main navigation to a minimum, as the iPhone displays a maximum of five tabs. Second, it ensures that the essence of the app can be seen immediately, providing calls to action that users can access globally for efficient navigation.

Because a tab bar's items aren't hidden away in a drawer, they allow the most useful tasks and features to be located in an optimal manner, without sacrificing a great deal of UI real estate. More generally, standard iOS framework components embody UI paradigms that provide a consistent and familiar user experience. Using standard iOS components such as tab bars nearly always saves significant development time and cost over other, more custom solutions. In my experience, unnecessary customization can more than double development costs while yielding a sub-par user experience.
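The idea of keeping top-level navigation to a handful of meaningful groups can be sketched in Swift. This is a minimal illustration, and the section names are hypothetical; the point is that iPhone tab bars show at most five tabs, so the model enforces that limit up front.

```swift
import Foundation

// Hypothetical top-level sections for an app's tab bar.
enum AppSection: String, CaseIterable {
    case home, browse, search, activity, profile

    // Title shown on the corresponding tab.
    var title: String { rawValue.capitalized }
}

// A tab bar displays at most five tabs on iPhone before folding
// the overflow into a "More" tab, so keep the group count small.
assert(AppSection.allCases.count <= 5)
```

Designing the model this way makes the five-tab constraint part of the information architecture rather than an afterthought discovered at the UI layer.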

In general, it's best to exhaust the possibilities afforded by designing around components from Apple’s iOS frameworks before resorting to custom solutions that reference other platforms. A good resource for help understanding the iOS platform frameworks is the iOS Human Interface Guidelines (HIG). Among other things, the HIG provides great insights into where, when and how to use standard UI components.

So next time, before adding the kitchen sink, take a step back and define what users will actually use while on the go with their devices. In our busy lives, most of us just don't have the time to sit and read a user manual or dig through all of the features of apps that suffer from TV Remote Syndrome. Try streamlining your app's design by using native iOS navigational components -- and leave the hamburger for the grill.

| Comments

I'll Take My RESTful Services Well-Done

Posted by LeRoy Mattingly

Last decade was SOA services. This decade is REST services. These days it seems just about everyone is doing REST --- but are they doing it well? From all the evidence, it seems most enterprise IT organizations are struggling with the transition from SOA to REST. And it turns out that the mobile platform is usually at the epicenter of that struggle.

Mobile App Development Readiness Review

We often perform assessments for enterprise IT organizations to help them identify areas of risk related to their mobile development practices. As part of what we call a Mobile App Development Readiness Review, we conduct a four-week assessment covering everything from business and mobile strategies, through architecture and design, execution, testing, delivery, and implementation.

Whenever I conduct one of these reviews, one of the first things I look for is the health of the service layer. After all, most mobile apps can't do much without a good backend service layer.

Taking an Inventory of the API

The question I usually start with is, "Can you provide me an inventory of your service APIs?" You might be surprised that most companies can't do this. They can produce a bunch of APIs that are spread all over the place, documented poorly or not at all, and that typically include numerous orphans or single-use services. The spaghetti has moved from the code to the service layer, and as a result, opportunities for reusing shared services are often missed --- in many cases, internal consumers are not even aware of the existence of the API. The best way to avoid these kinds of problems is to build a discipline around the holistic management of the entire collection of enterprise services.

Most large enterprises currently have a mix of legacy SOA services and newer, RESTful services. I generally view the ratio of this mix as an indicator of how much progress an organization has made in modernizing their service layer.

Drilling Into the Details

Once the API inventory is complete, I begin digging into the details of selected portions of the REST service layer, looking at design, reusability, documentation, consistency, usability, and maturity level. The following are criteria we use to evaluate an organization's REST services:

RESTful Maturity

RESTful maturity was first described in a presentation by Leonard Richardson, and has become known as the Richardson Maturity Model, made famous by Martin Fowler's article.

Maturity Level 0 --- Not RESTful
  • Single URI and a single HTTP verb (GET or POST)
  • All operations included in the payload
  • HTTP used as a tunnel for RPC-like calls (NOTE: this describes XML-RPC and SOAP)

Maturity Level 1 --- Not RESTful
  • Multiple URIs, but still a single HTTP verb (GET or POST)
  • Resources used to break down a large service endpoint
  • CRUD verbs included as part of the URL
  • One URI per method
  • Still using HTTP as a tunnel to call resources

Maturity Level 2 --- RESTful
  • Multiple URIs and multiple HTTP verbs (GET, POST, PUT, PATCH, DELETE) used with correct semantics
  • Resources used to break down a large service endpoint

Maturity Level 3 --- RESTful
  • HATEOAS used to make the web service discoverable
  • Self-documenting API
  • Independent evolution
  • Decoupled implementation

  • Enterprises should be targeting at least Level 2 or 3 on Richardson's Maturity Model. Anything less scores as immature and represents an opportunity for improvement.

  • RESTful APIs should be logical (not based on implementation details). All services should be resource-based (as opposed to RPC-like) and based on domain models that reflect the natural business partitions at an enterprise.

  • RESTful service designs should always start with logical business domain models. The nouns in the model serve as the basis for naming the resources in the REST API. That doesn't completely rule out service calls with verb-form names, but those are typically the exception rather than the rule.
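To make the Level 2 criteria above concrete, here's a sketch of what resource-based URLs paired with HTTP verb semantics look like from a client's perspective in Swift. The base URL and the `orders` resource are hypothetical.

```swift
import Foundation

// Hypothetical Level 2 API: nouns live in the URL, verbs in the HTTP method.
let baseURL = URL(string: "https://api.example.com/v1")!

// Build a request for a given HTTP method and resource path.
func request(_ method: String, _ path: String) -> URLRequest {
    var request = URLRequest(url: baseURL.appendingPathComponent(path))
    request.httpMethod = method
    return request
}

let listOrders  = request("GET",    "orders")     // fetch the collection
let createOrder = request("POST",   "orders")     // create a new resource
let updateOrder = request("PUT",    "orders/42")  // replace one resource
let cancelOrder = request("DELETE", "orders/42")  // remove one resource

// Contrast with Level 1, where the verb leaks into the URL:
//   POST /getOrders, POST /createOrder, POST /deleteOrder
```

Note how the same two URIs support the full set of CRUD operations, with the semantics carried by the standard HTTP verbs rather than by verb-form endpoint names.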

Domain Modeling

Defining an enterprise-wide domain model is a perilous task. But defining domain models that map to the natural business partitions in an enterprise is both reasonable and attainable. These domain models provide the blueprint for RESTful service resource APIs. Services can be built to provide these resources on both an as-needed and client-driven basis. Services can evolve to map to these blueprints.

  • RESTful services should be loosely coupled. Service APIs should never expose implementation-specific details or explicitly name architectural components. Tight coupling of clients to service APIs limits the ability to make architectural changes. All APIs should remain logical abstractions over their implementation details.
  • RESTful services should be reusable by multiple clients, now and in the future. Service APIs that are capable of being used by only a single client are barely useful services --- in fact, they're really nothing more than glue code. Narrow-focused APIs miss the opportunity to vend business data and logic in a way that can meet future needs. An over-reliance on single-use services can turn an architecture into a spaghetti-like mess.
  • RESTful services should be documented in a consistent way to make it easier for consumers to understand and reuse the services.

When conducting a full Mobile App Development Readiness Review, we perform a similar analysis across many areas of the organization. We then furnish the client with a risk scorecard to establish a baseline, and provide specific mitigations for areas identified as high risk.

The Benefits of Well-Done RESTful Services

I've seen organizations that follow the above guidelines reap tremendous benefits, some of which are as follows:

  1. A domain-based RESTful service layer is easier to evolve than one based on SOAP. A well-done domain model represents business concepts that have evolved over the life of the system. (Flexibility to evolve should remain one of the top architectural priorities of the RESTful service layer. Do everything possible to ensure that the service layer can change over time without breaking existing clients.)
  2. A well-done RESTful service layer is easy to use and reuse, especially if it adds business value. (Do everything possible to make it easy for clients to consume the service.)
  3. A well-done, domain-based RESTful service layer will reduce architectural sprawl --- in other words, it will organize the spaghetti. The result should be more like manicotti --- wrapped up in nice little bundles that naturally align to each other and are independently consumable.
  4. A well-done service layer also makes it possible for the business, rather than the technology organization, to define which data to vend (both internally and externally) --- provided the business organization works closely with IT in defining the logical domain model.

Reverse Engineering a Domain Model

If you find yourself reverse engineering a legacy database, you're already in trouble. One thing to pay particularly close attention to is ensuring that implementation details don't leak into the logical domain model. Business domain modeling is hard to do well, and usually requires the skills of a trained analyst who is also adept at working with the business.

A Great Example

The FHIR (Fast Healthcare Interoperability Resources) API is a relatively new API for collecting and exchanging patient record information. It's a great starting point if you're looking for a good example of a well-done, domain-based, RESTful service API. The FHIR Resource Index is particularly useful as a demonstration of how a domain model can be used to clearly identify a set of related resources.
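As a sketch of the kind of resource URLs the FHIR conventions produce, here's how a client might address a FHIR `Patient` resource in Swift. The server URL below is hypothetical; the read and search URL shapes follow FHIR's documented REST conventions.

```swift
import Foundation

// FHIR exposes domain nouns (Patient, Observation, Encounter, ...) as resources.
// The base URL is a hypothetical FHIR server endpoint.
let fhirBase = URL(string: "https://fhir.example.org/r4")!

// Read a single Patient resource by its logical id.
let readPatient = fhirBase.appendingPathComponent("Patient/12345")

// Search the Patient collection via query parameters.
var components = URLComponents(url: fhirBase.appendingPathComponent("Patient"),
                               resolvingAgainstBaseURL: false)!
components.queryItems = [URLQueryItem(name: "family", value: "Chalmers")]
let searchPatients = components.url!
```

The resource names come straight from the healthcare domain model, which is exactly what makes the API easy to navigate without implementation knowledge.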

| Comments

NFL Sunday Ticket Kicks Off with Chromecast

Posted by Eric Caminiti

Congratulations to the About Objects team at DIRECTV for their well-received implementation of the NFL Sunday Ticket Chromecast app. The app is currently featured by Google, and was highlighted in Google’s September 2015 release of the new Chromecast as a premier example of a powerful, second-screen Chromecast experience.

Integrating Chromecast

Given my strong affiliation with Apple's platforms, people are sometimes surprised to learn that I've been responsible for leading our Google Chromecast strategic partnership for the past few years. Did I go over to the dark side? Maybe. Have I ridden a rainbow-colored bicycle? Possibly.

Actually though, we've been working with a number of companies in the digital media space, helping them capitalize on the cord-cutting trend. At its core, Chromecast allows users to stream content on everything from mobile devices to large-screen TVs. It sits somewhere between Apple’s AirPlay and a full-blown Apple TV app (we develop those too!). The big difference is that with Chromecast, your app controls the whole experience and becomes the remote control, allowing for a much more immersive, second-screen experience.

Key Challenges

Chromecast implementations can present a number of potential land mines. Problems often crop up in dealing with DRM, adaptive bit rates, CORS headers, environment setup, error handling, networking issues, UI synchronization, and automatic reconnect, as well as determining when and when not to use custom channels.

What’s most interesting to our developers is the challenge of architecting an elegant Chromecast solution in the context of an existing mobile app. Integrating Chromecast (especially with iOS apps) tends to be an afterthought, and none of the apps we've dealt with were designed with that sort of integration in mind. It tends to be such an outlier that it can stress the application architecture in dramatic ways, often uncovering significant weaknesses.

Getting It Right The First Time

Obviously, it's important to spend some time upfront designing a solution that's the best fit for your current app architecture. Most of the Chromecast implementations I've seen tend to follow a basic pattern of intercepting calls to the video player and redirecting them to the Chromecast receiver.

The key is to avoid taking the easy path of tightly coupling your Chromecast implementation with existing code. Doing so would require adding otherwise unnecessary conditional logic to the codebase, making it more fragile and dramatically increasing the likelihood of introducing regression bugs.

Thinking through these issues affords an excellent opportunity to dust off your Gang of Four design patterns. I’ll soon be posting examples of how you can take advantage of patterns such as Decorator, Proxy, and Receptionist to simplify your implementation. Coming up with a strategic approach that allows you to encapsulate most, or all, of the Chromecast-specific API calls can greatly reduce overall development time, and improve the long-term maintainability of your integration.
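To illustrate the Decorator idea, here's a minimal Swift sketch. The `VideoPlayer` protocol, `LocalPlayer`, and `CastingPlayer` types are hypothetical stand-ins for an app's existing player and a Chromecast-backed wrapper; the real Cast SDK calls are elided.

```swift
import Foundation

// The interface existing call sites already depend on.
protocol VideoPlayer {
    func play(url: URL)
    func pause()
}

// The app's existing on-device player.
final class LocalPlayer: VideoPlayer {
    private(set) var isPlaying = false
    func play(url: URL) { isPlaying = true }
    func pause() { isPlaying = false }
}

// Decorator: wraps any VideoPlayer and redirects playback to a cast
// session when one is active. View controllers keep talking to the
// VideoPlayer protocol and need no Chromecast-specific conditionals.
final class CastingPlayer: VideoPlayer {
    private let wrapped: VideoPlayer
    var castSessionActive = false
    private(set) var castedURL: URL?

    init(wrapping wrapped: VideoPlayer) { self.wrapped = wrapped }

    func play(url: URL) {
        if castSessionActive {
            castedURL = url  // would hand off to the Chromecast receiver
        } else {
            wrapped.play(url: url)
        }
    }

    func pause() { wrapped.pause() }
}
```

Because the decorator conforms to the same protocol as the player it wraps, the cast-versus-local decision lives in exactly one place, rather than being scattered through the codebase as conditional logic.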

One way or another, it’s important to get your app's Chromecast integration right. Remember, many Chromecast consumers use the device as their primary way to watch TV, and they've gotten used to Chromecast working consistently across all their favorite apps (Netflix, YouTube, HBO Now, etc.).

To that end, Google provides detailed user interface guidelines to help ensure a consistent experience across all Chromecast-enabled applications. The lack of a polished user experience can infuriate those cord-cutting millennials out there --- and they can be pretty harsh in social media and in app store reviews when critiquing apps that don't meet expectations.

| Comments

Managing Technical Debt

Posted by Eric Caminiti

Among the services we offer is an assessment that can span everything from mobile and cloud strategy, to enterprise architecture, process and methodology, design, testing strategy, software tools, and even an in-depth analysis of existing codebases. After conducting a number of these assessments, I noticed a trend. Many of the IT directors who had until recently been hailed as heroes for delivering low-cost mobile apps seem increasingly alarmed about their future. Why? Their projects incurred significant technical debt as a consequence of cost-cutting measures. (See also: wiki article on technical debt by Martin Fowler)

In short, technical debt is the accrued balance of the shortcuts taken and compromises made to get a project out the door on time and on (or under) budget.

While initially, an app whose development team cut one too many corners may seem to work just fine, as the app evolves the code base can quickly degrade into an impenetrable mass of spaghetti. Unfortunately, development teams rarely consider (much less track) the amount of technical debt added from one revision to the next.

Signs and Symptoms

One of the symptoms we typically see when a project is over-leveraged (i.e., has accrued too much debt) is a growing rift between Product (the business) and IT. The mobile development team may be perceived as losing velocity over time, struggling to meet deadlines, delivering increasingly buggy code, and being seemingly incapable of implementing a growing subset of new feature types.

The latter issue tends to be particularly frustrating to Product. It's naturally hard to reconcile development's claims that certain features are 'impossible' to implement with the presence of the selfsame bright, shiny, organic, grass-fed, free-range features in the latest versions of competitors' apps.

When developers claim that new features are impossible to implement as specified, and plead for requirements changes and simplifications, often what they're really saying is, “Uh, we can’t get there from here --- the current state of our application's design and architecture (or lack thereof) and codebase simply won’t support it."

Fingers get pointed. Things get said. Chairs get thrown.

The Young and the Reckless

While it can be argued that there's always some degree of technical debt in a given codebase, a clear distinction can be drawn between prudent technical debt and reckless technical debt.

For example, reckless technical debt might be incurred by a junior developer writing poor code simply through lack of experience. (This could also be characterized as unintentional technical debt.) Prudent technical debt, on the other hand, would be incurred if an experienced developer decided to go with a temporary, quick-and-dirty implementation to meet time-to-market constraints for a new feature, while planning to incorporate a better, more permanent solution later on.

In general, inexperienced developers tend to introduce a great deal more technical debt --- particularly of the reckless variety --- than do their more seasoned colleagues. And sadly, they're rarely aware of any technical debt they may have introduced.

Ultimately, you get what you pay for. The initial cost savings achieved through cheaper developer rates will often be offset to a great degree by greater technical debt. Unfortunately, the degree can be very large, and is hard --- if not impossible --- to measure. Veteran developers, on the other hand, have bloodied their foreheads enough to know what a reasonable level of debt is, which items to address in the short term, and which to defer to future releases.

The Real Culprits

We often see codebases with unhealthy levels of reckless technical debt. Mostly, we see this in code that was outsourced to low-cost providers. When new features need to be added, they're simply bolted on --- often with the coding equivalent of duct tape --- and pushed out the door. As these providers rotate inexperienced, generalist developers through projects, any consistency in the codebase quickly evaporates, code and component-level reuse goes from negligible to non-existent, and application architecture and design don't evolve with the codebase.

Surprisingly though, we sometimes also see similar levels of reckless technical debt in codebases developed by in-house teams. The sources are many, including flawed architecture decisions; overly simplistic or sloppy design; failure to leverage available platform resources, patterns, and best practices; failure to keep the codebase current --- but I'll leave the details for another post.

Paying the Piper

Then comes the inevitable shakeup, whether through a new hire, a merger, or a reorganization, and new management starts drawing conclusions about all this dysfunction in the organization. Hopefully you, the savvy IT director, were already promoted and left this mess to someone else! But often, you're the one who ends up in the cross-hairs. What you failed to manage effectively is the inverse relationship between cost and risk. As a result, the project accrued enough technical debt to serve as a significant drag on performance.

At a certain point, the 'interest cost' (the extra effort required to fix bugs and implement new features in a pathological codebase) became crippling, while simultaneously the debt grew too large to repay. (In the worst-case scenario, paying the debt might entail throwing away essentially the entire codebase and starting over from scratch.) We know it's not all your fault --- you were in the trenches making these decisions in real time, and doing your best given scope, time, and money (It’s a tough racket!). But the new management will tend to focus on the risk first (as they were not around to reap any of the benefits of your low-cost strategy).

Here are several things an IT director can do to avoid ending up in this situation:

  1. Negotiate with the product team on a regular basis to ensure that scope is manageable in the first place so that your team doesn't feel pressured to cut corners again and again.
  2. Make sure project risks are fully captured in writing and communicated to stakeholders on a regular (ideally weekly) basis.
  3. A small team of experts will run rings around a larger but more junior team. Ensure that your budget supports at least seeding --- if not fully populating --- your team with experts. The work should get done much faster, and as a result, the net cost may actually be less, in spite of the higher hourly rates. But more importantly, the project won't incur hidden costs in the form of technical debt that can quickly spiral out of control.