Great Customer Service is not dead! When I was doing freelance/independent work I used Harvest’s Solo plan to track my time and invoice clients. It was affordable for a single person and wasn’t over-engineered. Since it was cheaper in the long run I paid for a yearly plan and renewed each time it came up. I would’ve recommended them to anyone in a similar position. Now I’ll recommend them even more heavily.

A few months ago I took a full-time position and just didn’t need the paid account any more. I couldn’t figure out how to downgrade though so I sent a request to their customer service. The page told me to expect a response within an hour. Within 5 minutes I got a response from Jennifer telling me that my plan had been downgraded to the free one, AND they refunded the prorated amount for the remainder of the year back to me.

Getting a refund wasn’t even on my radar. I was perfectly happy with keeping the paid plan and just not renewing it when the time came. What I love about this is that Harvest didn’t waffle around the issue for a long time only to end up doing the bare minimum to get the customer to go away. They were fast, proactive, and exceeded expectations.

That’s how you do customer service.

I’ve been noodling with an idea for a watchOS 2 app. It involves Connectivity – transferring files from the Watch to its paired iOS device. The WWDC sessions are very good, but I ran into a couple of snags with the framework. I had tweeted something that I thought was a bug, but on closer inspection of the docs it actually wasn’t. I tweeted another message stating as much, but it hasn’t gotten the notice that the original did.1 Anyway, here’s what I learned.

[ed: This is with iOS 9, watchOS 2, and Xcode 7, all in beta 2. Stuff might change.]

You can run both simulators at once

But it’s not obvious. First, in the scheme control, select the “[App Name] WatchKit App” scheme, which has both the iPhone and Apple Watch available (for example, “iPhone 6 + Apple Watch – 38mm”), and run it.

Xcode Scheme for iPhone6 and Watch

When it’s running, change the scheme (without stopping the simulator) to the normal iPhone scheme. In this case “[App Name] > iPhone 6”.

Xcode Scheme for iPhone6 and Watch

Now, run this scheme without compiling: hold down ⌘ and click the Run button. The iOS app will now be running in the simulator. Sometimes the watch app will close, but you can open it back up again.

The console and breakpoints may get a little squirrelly, so don’t depend on them too heavily.


Connectivity has one main delegate to implement: WCSessionDelegate. You can implement it in both the iOS and watchOS apps, and different methods are called on each side. For example, when you transfer a file, the receiver needs to implement session:didReceiveFile:. If you want some kind of notification that the file was sent, the sender needs to implement session:didFinishFileTransfer:error:.

This seemed counter-intuitive to me at first, but on further reflection, it makes sense. I do wish the docs more clearly illustrated this though.

You’re in the background

Don’t forget that your iOS app is not guaranteed to be running when these notifications come in. And they will come in on a background queue too, so don’t immediately try to update the UI without getting over to the main queue first.
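To make the split concrete, here’s a rough sketch of how the two delegate methods divide up (the class name and method bodies are placeholders of mine, not taken from the demo app, and the signatures follow the watchOS 2-era Swift spellings, which may differ in later SDKs):

```swift
import WatchConnectivity

class ConnectivityHandler: NSObject, WCSessionDelegate {

    // Implemented on the RECEIVER (the iOS app, in my case):
    func session(session: WCSession, didReceiveFile file: WCSessionFile) {
        // This arrives on a background queue. Hop to the main
        // queue before touching any UI.
        dispatch_async(dispatch_get_main_queue()) {
            // update the UI here
        }
    }

    // Implemented on the SENDER (the watchOS extension):
    func session(session: WCSession, didFinishFileTransfer fileTransfer: WCSessionFileTransfer, error: NSError?) {
        // Inspect error to confirm the transfer actually succeeded.
    }
}
```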


I created a simple demo app to illustrate all of this. It’s up on GitHub. Take a look at the AppDelegate in the iOS app, and the ExtensionDelegate and InterfaceController in the watchOS extension target. This demo was thrown together quickly without regard for best practices, blah blah blah.

  1. Such is the nature of Twitter, alas.

When you’re a solo developer, you can use Git in nearly any darn way you choose. No branching? Ok. Branch on everything? Sure. Want all your commit messages to consist of “stuff”? Knock yourself out. You might regret some of that in the long run, but it’s not hassling anyone else. But, as soon as you add another person into the mix, things will have to change.

One gigantic benefit from collaboration is having a second set of eyes look at your code. GitHub makes this easy if you follow a few steps.

[ed: This might be old hat for some of you, but I don’t know if I’ve ever read an entire guide for this, so I’m writing it all down. Please send me feedback, nicely, if there’s a problem.]

Create an Organization

This one is optional, but does help in a few ways. The Organization becomes the face of any projects underneath it. It also makes a few things a little easier with regard to deployments, issue tracking, and documentation.

Everyone Forks from the Organization

If the repository is called some-org/project-x, then each developer forks that to create swilliams/project-x, sally-developer/project-x, and so on. If a repository is private on the Organization, your forks will be private too, and won’t count against your own private project count.

Clone Your Fork

Now get your local copy.

git clone git@github.com:swilliams/project-x.git

Set up Remotes

Your fork on GitHub will automatically be your origin remote. Add a remote for the Organization’s repository. By convention this is typically called upstream.

git remote add upstream git@github.com:some-org/project-x.git

Work in Branches

Working on a feature? Create a branch feature-abc. Fixing a bug? Create a branch issue-254-login-done-broke. Keep master clean.

git checkout -b feature-abc

Push Branches to Origin

Done with a feature or an issue? Push it back up to origin (your fork).

git push origin feature-abc (add the -u flag to track the remote branch as well)

Create a Pull Request

Why do we go to the hassle of creating all those branches? Because with branches, you can create multiple outstanding Pull Requests at once. If you did all your development in master, any additional commits you push up will be added to an open Pull Request, which can cause issues.

Multiple small Pull Requests are much easier to review. Would you rather review 3 files over 5 commits or 50 files and 75 commits?

Someone Else Reviews the Pull Request

Perhaps my favorite piece of functionality in GitHub is the Pull Request review process. Use it to annotate code and discuss. Merge it in if everything is good.

Rules for the Road

  1. Keep the master branch clean. That should be ready to go live if necessary. This means tests should be passing, everything compiles, nothing important should be broken, etc.
  2. Never commit directly to upstream. Upstream should only be updated through Pull Requests. Exception: pushing tags.
  3. Pull from upstream regularly. The more codebases diverge, the more likely a nasty merge problem will occur.
  4. Keep branches small. Just reiterating it again.
  5. There are exceptions to every rule. Use them intelligently.
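Putting the steps together, a typical cycle looks something like this (repository names reuse the swilliams/project-x and some-org/project-x examples above):

```shell
# One-time setup: clone your fork, then point a second remote
# at the Organization's repository.
git clone git@github.com:swilliams/project-x.git
cd project-x
git remote add upstream git@github.com:some-org/project-x.git

# Per feature: branch, work, push to your fork, open a Pull Request.
git checkout -b feature-abc
# ...commit some work...
git push -u origin feature-abc

# Regularly: pull the Organization's changes into your clean master.
git checkout master
git fetch upstream
git merge upstream/master
git push origin master
```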

At some point in your career in technology, you’re going to have to make some kind of presentation. This could be connecting your laptop to a projector, sharing your screen over some video conference, or just having people huddled around your desk. Since I care about you, I’m gonna share some advice about avoiding common pitfalls.

Turn off notifications. Say you’re right in the middle of explaining how beautifully designed your database is to an important client, and one of your friends sends you an email.

oh dear
Oh dear.

Your only options are to make an awkward joke, or ignore it completely. Hopefully Jane doesn’t send any follow-ups.

On OS X you can turn off notifications by clicking the button in the top right of the menu bar, scrolling down, and then turning “Do not disturb” on.

do not disturb

Hide everything you aren’t showing. Again, say you’re presenting something amazing to the top brass at Giant Health Conglomerate Inc. You need to switch from Keynote to a spreadsheet. You minimize Keynote and your Inbox is now in full view. What if there are some strategy emails from your CEO sitting in front of everyone? What if Jane from above has graphically described other things found in her vomit?

On OS X I like to move my presentation and ancillary windows to a separate space. One with a very plain wallpaper.

Use Chrome? Double check the default sites when you open a new tab. Nothing kills confidence with a prospective client like opening a new tab and having the top 8 sites you visit sitting right there in the open for everyone to see. “Oh, I guess you really like Harry Potter/Star Trek slash fiction?”

An Incognito window is a quick way to hide all that.

Similarly, consider clearing your browser history. Maybe you need to open up a webpage you weren’t anticipating. Maybe you’re checking the redis documentation, but when you type red into the address bar, all the subreddits you’ve ever visited are immediately available. Maybe you just browse /r/aww and everyone is happy, or maybe you visit more… unconventional… ones.

Ultimately it’s up to you how many awkward situations you want to have with your co-workers and/or clients. Try to just think about what would happen if your mother/rabbi/priest/psychologist were watching.

What do you do with a codebase for a client project that ended years ago? Since it’s client work it shouldn’t be made public, but if you keep it “live” in GitHub your private repository count will creep up. Deleting the repository outright seems wrong; it’s not that unusual to have an old client cold call you with an update, and having that old codebase handy can save some headaches. The lazy way to fix this would be to just give GitHub more money to increase the limit. But I felt the itch to solve the problem with code.

Git itself is flexible. It’s trivial to clone a repository, put it in a safe place (or alternate service) and call it a day. But with GitHub, that doesn’t include Issues. There could be some solid ideas (or bugs) stored in Open Issues that should be preserved. GitHub has a great API to retrieve those, and I decided to create a simple Ruby script to make it a smooth process.

Take a look at GitHub Issue Exporter. It’s pretty basic right now — just downloads Issues into a bunch of JSON and will also let you import them back into a new project. The idea is that you clone the repository you want to archive, then export all the open issues, store it all in a safe place, then you can safely delete the repository and free up some space.
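The core of the export half is small. Here’s a rough sketch (the function names and the injectable fetch hook are mine, not the actual script’s API; a real version also needs an auth token for private repositories and should follow the Link headers GitHub uses for pagination):

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build the GitHub API URL for one page of a repo's open issues.
def issues_url(repo, page)
  URI("https://api.github.com/repos/#{repo}/issues?state=open&page=#{page}")
end

# Walk the pages until an empty one comes back, writing each issue
# out as a pretty-printed JSON file. The fetch lambda is injectable
# so the HTTP layer can be swapped out (or stubbed in tests).
def export_issues(repo, dir, fetch: ->(uri) { Net::HTTP.get(uri) })
  page = 1
  loop do
    issues = JSON.parse(fetch.call(issues_url(repo, page)))
    break if issues.empty?
    issues.each do |issue|
      path = File.join(dir, "issue-#{issue['number']}.json")
      File.write(path, JSON.pretty_generate(issue))
    end
    page += 1
  end
end
```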

Sometimes ideas pop into my head, and sometimes I think they’re awesome. Most times though, they’d require way more time and effort to implement properly than I have available. Since I’ll never get around to making this myself, I’ll just share it here.

I was playing some Team Fortress 2 a little while ago and enjoying myself when I had the epiphany, you could do this in real life. Or at least something similar1.

Modern smartphones have very accurate GPS chips inside of them, such that things like geofencing are possible2. Why not use that technology to add an additional layer of gameplay on top of things like paintball or airsoft? What if you could turn what is typically an unstructured free-for-all into a real-life tactical Capture the Flag game? Here’s how I’d do it.


The Creator creates a new game and divvies up players into teams, and assigns a Commander to each team. The Commander has special privileges (defined below).

The Map

The Map
Some desert

Take the strip of desert that you play in. The Creator defines the boundaries for the game. If a Player strays beyond those boundaries for more than, say 10 seconds, they’re penalized.


Next up is defining team-based boundaries.

Team Boundaries

Each team’s Commander can set up their own map-dependent points of interest: a base, the Flag, etc.


Red Team

The Commander can also see where each of their team members is currently located and which direction they are heading. The Commander can call out orders or instructions, set a rally point, and draw directions on the map that will show up on Players’ devices.

For Capture the Flag, the Flag could be another kind of device like an iBeacon tied to an actual flag. When the Flag is moved it sets off alarms for its team. Moving the other team’s Flag back to your base scores points for your team.

Other forms of gameplay can be defined here. If you want to have “multiple lives”, the Commander can set the place you return to in order to “regenerate”. Once the game starts, these places cannot be changed.


All communication would have to be done by some kind of Bluetooth headset and microphone combination. All cues would have to be audio-based to avoid having to stare at your screen all the time. Through this you’d have:

  • Open communication with your team
  • Alerts from the game itself (“You have left the playing field, you have 10 seconds to get back in”, “Now entering Blue Space”, “Your Flag has been taken”, etc)
  • Broadcast something to all Players (“Game ends in 10 minutes”)


When the game is over, players can view a replay and see all the movements and events from the game. This is similar to the Commander’s view, but now everyone can see everything. This will let players review tactics and strategies for the next go-around.

Considerations and Potential Issues

Phones are fragile. They won’t hold up very well to paintballs or airsoft pellets. You can route around this by keeping them in a protective case and inside a deep pocket. The headset interaction becomes key, since actually pulling out a phone and looking at it would be a good way to lose focus on the action and become a nice target for the other team. Now that smart watches are becoming a thing, they could also provide something useful, but again fragility would have to be considered.

The Commander may have to use their device more often. I would assume they would be in more of a “base” or out of harm’s way to allow this to happen. But hey, if General Patton could get up close to enemy lines, then you can too, I suppose.

Battery life is also an issue. Constant GPS and network usage will drain a battery in no time. I think a 100% charge on a modern phone would last a couple of hours, but something like a Mophie would be recommended.

Lastly, networking could be tricky, especially in remote locations. Covering the field in wifi would ease this restriction (and help battery life), but those logistics could be a bear.

In Conclusion

As it says on the box, this is half-baked. I don’t even think you’d be able to create much of a business out of it, but goshdarnit it would be fun to play. If someone actually did make something like this, I’d be all over it, and demand only a modest stake in the business.

  1. If you know how to double jump in real life, I’m all ears.

  2. Geofencing means that something happens when your phone enters or exits a pre-defined boundary. For example, you could have your lights automatically turn on when you get home.

Let’s take a completely hypothetical situation where you’re developing an app that uses Core Data for the local storage and have a bunch of beta testers eagerly awaiting the next version before your product launch. Your previous attitude towards the data model was something along the lines of “It’s still pre-1.0, I’m not bothering with migrations yet, just delete and reinstall, c’mon.” However, you forgot that requiring the beta testers to deal with that isn’t exactly a friendly experience for them, and you made at least three changes to the data model since your last Beta release. Now if they run the app it’ll crash immediately because the database isn’t in sync with the data model.


Usually when you change the data model, you first create a new version, make your changes in that version, and tell the NSPersistentStoreCoordinator to perform lightweight migrations. Adding a new version of the data model after the changes were already made accomplishes nothing. Fortunately, you’re not screwed. We’re going to jump back in time, grab the old data model, then pretend it was there all along.

Your MyProject.xcdatamodeld file is actually a directory. If you browse it in the Finder or Terminal, you’ll see more folders inside it, one for each version of your model. Inside those folders is a file simply called contents. This is an XML representation of the editor you see in Xcode.

Step 1 — Find the data model from the last beta version you released

Look through the history of the .xcdatamodeld file in your source control system.1 Hopefully you’ve been tagging all of your releases and can just check out that specific one.

> git checkout 1.0-beta4

If not, you can mess around with git log to figure out where to go. This snippet can help you see the commits for a single file:

> git log --pretty=format:'%h : %s' --graph -n 45 FILENAME

Then, checkout the particular commit with the right version.

Step 2 — Copy the contents file

Find the contents file within your .xcdatamodeld file. Copy all that XML somewhere safe.

Go back to your HEAD or wherever you were.

Step 3 — Create the new version of the data model

If you didn’t know, the process is:

  • Open the .xcdatamodeld file in Xcode
  • In the Editor menu, click “Add Model Version”. Follow the instructions.
  • Open the File Inspector for your .xcdatamodeld. There is a Model Version segment in the inspector; make sure it’s set to the version you just created.

Now you have two data models that are identical. Let’s change the history on the original one.

Step 4 — Change history

Close Xcode. That’s not mandatory but I’ve had it crash when mucking about with these files, and it’s just not worth the hassle.

Open the contents file for the original .xcdatamodeld in a text editor.

Paste in the old XML you copied in Step 2, replacing what’s there.

Open Xcode. If you haven’t set up the NSPersistentStoreCoordinator to run migrations, do so now. This tutorial is pretty good.
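For reference, the setup looks roughly like this (this is the Xcode 7-era Swift spelling; `coordinator` and `storeURL` are assumed to come from your existing Core Data stack):

```swift
// Sketch: add the persistent store with lightweight migration enabled.
let options = [
    NSMigratePersistentStoresAutomaticallyOption: true,
    NSInferMappingModelAutomaticallyOption: true
]
do {
    try coordinator.addPersistentStoreWithType(NSSQLiteStoreType,
        configuration: nil, URL: storeURL, options: options)
} catch {
    fatalError("Failed to migrate store: \(error)")
}
```

With those two options set, Core Data infers the mapping between adjacent model versions and upgrades the store in place.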

Now when the app runs, the migrations update the users’ data and keep things from crashing.

Note: This is for lightweight migrations. Custom migrations are more complicated, and there are good articles on those elsewhere. I don’t know if you can play fast and loose with the data model file like you can here, though.

  1. You ARE using Source Control, right? Sometimes new developers will ask me why they need Source Control. I usually parrot the usual answers - branching is good, undo mistakes, tool integration, etc - but situations like this are where it really shines. Without source control here, you’d be hosed. You’d have to manually fix the XML in the contents file, which would be monumentally hard or altogether impossible depending on what changed and how good your memory is.

In the interest of security I’ve started to turn on Two-Factor Authentication (aka 2FA) for some of the services I use. I tried it out with GitHub about a year ago, but turned it off shortly thereafter because I encountered a bunch of problems and didn’t have the time to figure them all out. Google’s Authenticator app losing data after an update was a big red flag too.

Today it’s a little easier to manage. 1Password has 2FA support built in now, and there’s also Duo Mobile’s app. Turning it on was pretty easy: Go to the security page, click a few buttons, and follow instructions. Once it was enabled I decided to push some changes for a project, and then this happened:

> git push origin master
remote: Invalid username or password.
fatal: Authentication failed for ''

Umm, ok. I mean, I guess the most secure repository is one that nobody can access.

The solution isn’t immediately obvious. I looked at GitHub’s setup docs again, but they didn’t mention anything about 2FA. When in doubt, try it again, right? This time I got a username/password prompt. I had assumed I would get some sort of additional prompt to enter a single-use code for the 2FA, so I pasted in my GitHub password.

> git push origin master
Username for '': swilliams
Password for '':
remote: Invalid username or password.
fatal: Authentication failed for ''

No dice.

Googling around a bit finally brought me to this page, “Creating an access token for command-line use”. When you enable 2FA you need to use a token as your password for the Terminal. I created this with the default scopes provided1, then copy/pasted the resulting token into the password prompt in my Terminal window.

> git push origin master
Username for '': swilliams
Password for '':
Counting objects: 80, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (78/78), done.
Writing objects: 100% (80/80), 9.02 KiB | 0 bytes/s, done.
Total 80 (delta 58), reused 0 (delta 0)
   01efb2c..445a0b6  master -> master
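One follow-up that saves some hassle: have git remember the token so you aren’t prompted on every push. On OS X the Keychain credential helper does this:

```shell
# Store HTTPS credentials (including the access token) in the
# OS X Keychain so git stops prompting for them on every push.
git config --global credential.helper osxkeychain
```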


I think that should handle all the headaches for 2FA with GitHub. I like the warm security feeling it brings, and it seems like the user experience has been cleared up too.

  1. For standard git operations, I don’t think you would need any of the other scopes available for apps, and you could probably remove some of the defaults from it too.

Managing memory has become easier. Things like ARC take away much of what was once a painful and bug-ridden task. They are not a panacea, of course; it is still far too possible to do the wrong thing, access memory you are not supposed to, and receive the dreaded EXC_BAD_ACCESS error.

Today I was working on a project in Swift. And despite my appreciation of the language, the tooling still remains… suspect. I have covered that topic a couple of times before.

Sometimes it’s obvious where the problem is based on the call stack. This time it was not. The problem occurred at the end of a series of steps in a wizard, when all the prior screens were finally being released from memory. From what I could tell, the error happened when one of the view controllers representing a step in the wizard was being deinitialized, but otherwise I couldn’t immediately see where the problem was.

In Objective-C you can turn on what’s called “NSZombies” mode, which keeps deallocated objects around (as special “Zombie” objects) and raises warnings when you try to access one of them. In my experience this has not worked quite as well with Swift. With Zombies mode enabled the app ran fine without crashing, but also didn’t raise any warnings about accessing bad memory. It was a Heisenbug!

Eh, close enough

Next I started to play around with the code thinking that I was improperly handling the lifecycle of some of the properties of several classes. I changed around some lazy properties and made certain other things optional, but this was just wheel spinning.

I backed up and reviewed the callstack again. The last frame before explosion was now at swift_unknownWeakRelease in a helper class referenced by the offending ViewController. This helper had this property:

// SearchBarHelper.swift
private unowned let searchBar: UISearchBar

I then re-checked the documentation for unowned.

If you try to access an unowned reference after the instance that it references is deallocated, you will trigger a runtime error. Use unowned references only when you are sure that the reference will always refer to an instance.

Note also that Swift guarantees your app will crash if you try to access an unowned reference after the instance it references is deallocated. You will never encounter unexpected behavior in this situation. Your app will always crash reliably, although you should, of course, prevent it from doing so.

That searchBar was originally defined as an IBOutlet on the ViewController:

// ViewController.swift
@IBOutlet weak var searchBar: UISearchBar!

The lights turned on. Of course it was crashing. An outlet that’s a weak property can and will be deallocated when the controller referencing it is no longer visible. I forgot about that when I created the other class to manage certain characteristics about that search bar. So, the searchBar was released at some point, and when the helper was next called (in deallocation) part of its representation in memory was an unowned property that was nil, a state it cannot be in. Thus, EXC_BAD_ACCESS.

The solution was simple: I still didn’t want SearchBarHelper to have ownership of the searchBar, so I changed it to a weak optional.

// SearchBarHelper.swift
private weak var searchBar: UISearchBar?


Takeaway lesson: read and understand the documentation. Even with ARC, you still have to understand how memory is handled if you want to manage it well.

The other day a couple of salespeople for the Arizona Republic came knocking on my door. Here was their pitch:

“We’re going to start delivering you a paper every Wednesday and Sunday. Would you like it in your driveway or on the roof?” and then a pause for laughter. I was more confused; her delivery was pretty bad, and there was a distinct waft of desperation.

I started to explain that I didn’t want a newspaper even if it was free.

“But it’s only $.95 per week! You’ll make that up with all the coupons you’ll be getting!” She was pushing the coupons pretty hard. I guess this is why people get newspapers, in order to buy cereal for $.20 off?

I said I didn’t really care about coupons and politely told them to have a good afternoon. After extricating myself from the conversation I realized something: not once during their pitch did they mention the actual content of the paper. That’s got to be depressing if you’re a journalist — according to your sales staff your work is nothing more than a coupon delivery system with some words in between.

Copyright © 2017 - Scott Williams - Powered by Octopress