In Defense of the Interface

I was recently shown a blog post by Golden Krishna called “The best interface is no interface at all”. Reading it made me cringe. What’s worse, a UX designer shared it with me. IMHO, Krishna couldn’t be more wrong. And it’s all about perspective.

The History of the UI

As Krishna points out in his article, a fateful relationship between Apple and Xerox introduced the world to the graphical user interface. Gone were the days of command-line keystrokes, odd abbreviations for commands, and white text on a black background. Sure, your favorite game was perhaps less popular, but all-in-all, the end user welcomed the user interface with open arms.

Afterwards, we got the Palm Pilot with its stylus (no doubt a derivative of “stylish” because, let’s be honest, a stylus rocks) and then the iPhone and the plethora of touch-based devices that have followed. All of these have relied on the user interface.

The State of the UI Today

Deny it all you want, but the user interface has irreversibly changed the world of computing as we know it. Analysts believe that mobile web browsing will surpass desktop in 2014. It’s hard to browse an interface-less web on an interface-less device. Apps such as FiftyThree’s Paper have not only seen millions of downloads, but won the respect of some of the most revered design companies around. It’s semi-impossible to jot down ideas, create vibrant drawings, or otherwise interact with your touch-based device without a user interface.

Principle 1: Interfaces are Useful to Enhance Natural Processes

Krishna has some valid points, and I’m not here to dispute the fact that in some cases, the user interface is overused. No, I probably don’t need to check my Twitter feed while I’m filling my water glass from the fridge, but wouldn’t I want to know if I’m running low on condiments? My fridge is connected to the internet, so it has access to my calendar and sees that I am hosting a big BBQ this weekend. You can’t have hot dogs without ketchup. It’s a fact**.

I can already walk up to my car, unlock the doors, and drive away. And I don’t need my smartphone to do it. So the argument isn’t that user interfaces are unnecessary, it’s that they aren’t necessary for everything. Again, do I need my Twitter feed while I’m driving? No. Do I want my car to check traffic reports between my current location and destination and show me if there are any road blocks or closures? You betcha. Do I want to be shown that the tire pressure on the back left tire is a little low, and the location of the nearest gas station or auto shop? Absolutely. Again, the interface is not the problem. In these cases, the interface is used to augment the experience of the user. It should give enough information to inform the user, and then get out of the way.

Krishna cites Square’s Auto Tab feature as a prime example of interfaceless superiority (never mind the fact that you need to unlock your phone, find the app, tap on the app icon, wait for the app to load, search for the business by name or location, and then enable Auto Tab for that individual location), but it has some pretty severe drawbacks, namely that it can’t be used at mobile merchants and it won’t work if two Auto Tab-supporting merchants are located too close to each other. Why? Because Square relies on the GPS data from your phone to figure out where you are. It can’t figure out which merchant you’re at if the merchant keeps moving around (sorry, food trucks), and aGPS chips in phones aren’t accurate enough to distinguish between two merchants located right next to each other.

Sure, Google introduced NFC and Google Wallet, but Wallet is only supported in the US and less than 5% of all smartphones in the market even have NFC readers built in. Bluetooth 4.0 is a much more promising technology than NFC, but even still, no one wants their phone broadcasting all of their personal information out at all times in hopes that their car or their TV might pick it up. There are security concerns beyond the scope of this post to consider, which is why BLE support is done on an app-by-app basis on iOS. And we can argue about skeuomorphism versus flat design another time. It’s been done. Which brings me to my next point.

Principle 2: Leverage Computers to Cater to the User

There’s a reason we have user experience and user interface as two separate disciplines in the studio where I work. The two are not the same, but they should complement each other. As we move toward Web 3.0 and the Internet of Things, the user interface becomes more important, not less. Krishna claims that technology should be omniscient:

Your TV turns on to the channel you want to watch

Well, maybe if you’re this guy, but the rest of us probably need to tell our TV what we’re in the mood for. Our TVs can get smart enough to make recommendations about what we might like based on the time of day, day of the week, or even who is sitting in front of the TV, but to believe we can live in a world where we just walk around and things get bought, music plays, and TV channels change is a bit unrealistic. And thus, saying that “No UI is about machines helping us” is actually not true. UIs are necessary so we can tell computers how to best help us. Interface for the sake of interface isn’t the answer. Interface for the sake of the user is. Interface for the sake of augmenting the user experience is.

Principle 3: Create a system that adapts for people

I’m actually going to keep this principle titled the same as it originally was, because I think this is very true. And I also believe that good UI does this. One of the arguments for skeuomorphic design back in 2007, when the first iPhone was released, was that design modeled on the real world made it easier for users to adapt to this new kind of technology. My notepad looks like a notepad? Cool! I know how to use a notepad. My iBooks sit on a bookshelf? That makes me feel comfortable, because I’m used to books on a bookshelf. Sure, skeuomorphic design can be and was taken too far, but it served its purpose for the time. As users became more comfortable with the technology, skeuomorphism lost its relevance in software interface design. Apple totally redesigned iOS with the launch of iOS 7. They probably spent millions of dollars in designer and engineering hours to bring iOS 7 to market. And sure, some people may have gotten a little dizzy, but they didn’t have to start all over again learning how to use their iPhone. So to say that user interfaces need full redesigns, and that those redesigns force the user to relearn everything, just isn’t true. Apple has done a fantastic job of building and maintaining their Human Interface Guidelines, which tell both designers and developers how they should utilize the toolsets of the iOS SDK and platform to create beautiful, consistent user experiences across multiple applications.

Krishna cites Trunk Club as an example of how the no UI concept works. And he’s right. It is a fashion startup. And I’d even further agree that if you’re a startup, thinking about yourself as a company that provides a service and not as a company that builds an app usually leads to more successful results. But they are definitely not a software company. And they also aren’t trying to use software to enhance the lives of their users. Check out their website. It has a sleek, flat design in line with many startups’ websites. Download their app and you can browse featured trunks. Trunk Club undoubtedly uses this information to curate the kinds of clothes they send in your next trunk. And this isn’t new. Plenty of other subscription-based startups that wouldn’t call themselves “software companies” are using technology, along with user-facing interfaces, to learn more about an individual user and tailor their services to that user’s needs. Just check out NatureBox or Fancy. Fancy is a great example of using a simple user interface to curate products to a user’s preferences. You scroll through a list of pictures of items, and fancy the ones you like. Done.

Wrap Up

Suffice it to say, I don’t think that all interfaces are good all of the time. However, I also think that interfaces can be used properly to drive the user experience and help users interact with technology to augment their daily lives. I just don’t like all-or-nothing approaches to topics like this, and think that an article about bad user experience design should make us value good user experience design more, not call for the end of all interfaces.

** I actually eat my hot dogs plain.

Google Glass App for Rails

Google Glass is a relatively new technology that, although not many people have used it, has a lot of potential for mass adoption. Designing apps for Glass, known as Glassware, is actually not all that simple, though. For one, you need a pair of Glass, and Google isn’t just handing those out willy-nilly. Let’s assume that you have Glass, though. What does Glassware look like?

Not Your Traditional App

Glassware is not an application in the usual sense of the word, in that it doesn’t actually live on the device. Instead, Glassware is a web application that you authenticate with; data is then pushed from the Glassware down to the authenticated pair of Glass. In its current form, Glass does support some data push in the form of subscriptions, but these APIs aren’t as robust as one would think, and the amount and kinds of information Glass can send back to Glassware is still pretty limited.

Additionally, instead of having apps that you launch on Glass, Google introduced the concept of timeline items, which act essentially like a news feed of the activity you’ve elected to receive from Glassware. The New York Times and Path are two major Glassware applications currently supported. Instead of launching one of these apps from a home screen as you would on a mobile device or tablet, these apps insert items into your timeline, which you can scroll through at your leisure. Timeline items can have various menu options, including custom options you can define yourself, but beyond that, there’s not really much to them.

Pick Your Stack

Since Glassware is essentially a glorified web app, you can technically build Glassware on top of any web stack you like. Google has quick start guides written in Go, Java, .NET, PHP, Python, and Ruby, but you’re not really limited to these techs if you don’t want to be. Since I’ve been focusing a fair amount of energy lately on building my Rails chops, I thought I would try to build a Glass app using the Ruby quick start guide. When I loaded it up, I noticed it was actually built on the Sinatra stack, not Rails.

“But I really want to use Rails!”, I thought to myself.

Well, that should be easy enough. Ruby’s ruby, after all! So, I started digging through the source code for the quick start, and soon noticed that Rails does a lot of stuff behind the scenes that doesn’t really translate into other libraries, like Sinatra. For those of you who don’t know, Sinatra is a DSL written in Ruby for creating web applications. Granted, Rails is also used for building web apps, so what’s the difference?

Sinatra vs. Rails

In all honesty, when I first started writing this blog post, I didn’t really know what the differences were. From experience, I was aware of the fact that Sinatra was about as bare bones as you could get in terms of web application frameworks. Rails, on the other hand, is an MVC framework with tons of boilerplate code built in (convention over configuration). So, I did some digging, and came across this great article comparing and contrasting the two, and discussing when one is more appropriate than the other.

TL;DR: Rails is great for larger, more complex web applications, whereas Sinatra is fantastic for smaller web apps or APIs.

Could you use Sinatra for a large web app? Of course! Just be prepared to do more configuration than you would with Rails. Could you use Rails to build a small web app or API? Absolutely (and one notable tech giant started by doing just that)!

And so I, too, chose to build this rather small Glassware application using Ruby on Rails. Thus, I needed to begin working to transition the provided Sinatra-based Google quick start to something that played nicer with Rails. I began as I do with all new Rails apps:

rails new glass-app -T

For those of you who don’t know, ‘-T’ tells Rails to skip Test::Unit files. There are two reasons you may want to do this: If you know you’re going to use a different testing framework than Minitest, or if you don’t plan on writing tests at all. While the Rails community at large would shun you for choosing the second option, that’s the one we’re going with for this post. After all of the setup, I opened the new application in Sublime, started my server in another Terminal tab, and checked out localhost just to be sure everything was set up properly. It was.

Designing the Application

Now, before we get too deep into code, let’s take a step back and figure out exactly what we are trying to do with this Glassware app. We know from Google’s documentation that all Glassware apps and requests to the Mirror API must be authenticated using OAuth 2.0. So, right there we know we’re going to have to handle OAuth. After the user is authenticated, we want to send them some basic text and insert that as an item in their timeline. For giggles, let’s make it so they can also have that timeline item read to them out loud. Finally, they should be able to delete the timeline item when they are done with it. There are two things about this final point I found interesting regarding the Mirror API:

  1. Timeline items are automatically removed from Glass after 7 days and from Google’s servers after 10 days if they are not updated
  2. You have to explicitly add a “Delete” menu item to a timeline item

These two points aside, adding the actual menu item is pretty trivial, so it won’t take long for us to do it. From a front end perspective, there are two aspects to consider. First, we have to consider the front end for our web application, where the user will go to authenticate our Glassware. Second, you can optionally send down HTML in your timeline item, which can affect what is displayed to the end user in their timeline.
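To make the shape of a timeline item concrete, here is a hedged sketch of one as a plain Ruby hash. The field names (text, speakableText, menuItems/action) follow the Mirror API’s documented JSON; the specific values are made up for illustration:

```ruby
# Sketch of a Mirror API timeline item as a Ruby hash (values are illustrative).
# READ_ALOUD lets the wearer have the card spoken aloud; DELETE must be
# listed explicitly, per point 2 above.
timeline_item = {
  "text" => "Hello from our Rails Glassware!",
  "speakableText" => "Hello from our Rails Glassware!",
  "menuItems" => [
    { "action" => "READ_ALOUD" },
    { "action" => "DELETE" }
  ]
}
```

A payload like this is what ultimately gets serialized to JSON and inserted into the wearer’s timeline.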

Our web app front end will be very simple. We’ll give the user the ability to log in and log out. If the user is logged in, they can send some static text down to their Glass as a timeline item. This is, arguably, a pretty trivial example of how to use Glass, but this post is more about getting Glass up and running quickly with Rails and less about the Glass Mirror API.

Building the Front End

Let’s go ahead and get the easy stuff out of the way. We know we’re going to need a controller and a view to insert the timeline items. The actual insertion logic will be a POST request as a result of hitting a button, but we still need an index action to show the root page with the button if a user is logged in. Fire up Terminal and type

rails generate controller timeline index

This will create a controller named TimelineController with an empty index action. Let’s go ahead and route the index action as root. Open up config/routes.rb and add the following line below ‘get "timeline/index"’:

root "timeline#index"

Your routes file should now look like this:
Figure 1. routes.rb
Go ahead and restart your Rails server so that the configuration changes get picked up, and then visit localhost:3000 again.

Figure 2. localhost:3000 After Updating routes.rb

Alright, so not really the most exciting thing in the world, but we know things work. Better yet, this tells us exactly where to find the view file for the index action so that we can actually put in some of the HTML we need for our view.

We’re also going to use Bootstrap for the front end skinning. Since Rails installs the sass-rails gem by default, we’ll just augment that with a sassy bootstrap gem. Open up your Gemfile and add the following under the sass-rails gem:

gem 'bootstrap-sass'

From the command line, run

bundle install

to install the new gem. Then, open up /app/assets/stylesheets/application.css. Rename this file to application.css.scss so we can hook into all the sassy-ness we want, and add @import “bootstrap” to the bottom of the file. Go ahead and save it. Now we can use bootstrap throughout the entire application.

Let’s start with the navigation bar, which will have a link to the main page (“root_path”), as well as link to Log In and to Log Out, depending on the state of the current user. Add a file called “_navigation.html.erb” under /app/views/layouts, and fill it with the following HTML/Ruby:

<header class="navbar">
  <nav class="navbar-inner">
    <ul class="nav pull-left">
      <li><%= link_to 'Home', root_path %></li>
    </ul>
    <ul class="nav pull-right">
      <% if current_user %>
        <li><%= link_to "Sign Out", signout_path, method: :delete %></li>
      <% else %>
        <li><%= link_to "Sign in with Google", "/auth/google_oauth2" %></li>
      <% end %>
    </ul>
  </nav>
</header>

Here, we’re using some new HTML5 tags (<header> and <nav>) and augmenting them with bootstrap classes. We’re also using embedded ruby (hence the .erb extension) to add anchor tags for the various links and to conditionally set the right side of the navigation bar depending on whether or not we have a current_user instance variable. Go ahead and restart your Rails server and then refresh the home page. Oh crap.

Figure 3. No variable current_user

It looks like Rails is upset because it can’t find an instance variable called current_user.

Introducing a User model

We need a user. Think about it: When you authenticate the first time with our Glassware app, you expect to be remembered the next time you come back and request to have a timeline item inserted. If you sign out and sign back in, you don’t want to have to reauthenticate the Glassware app with Google again, either. This means we need some sort of persistence and a model object to persist through. A number of gems exist that handle user log in and OAuth, and we’ll get to those later on in this post, but first we need a model object to represent a user. Head back to Terminal and type:

rails generate model user email:string refresh_token:string

A user.rb file, as well as a create_users.rb migration file, should be created for you. Now we need to apply the migration.

rake db:migrate

Rake is just a build program. db:migrate tells Rake to look for all unapplied migration files and apply them in order. In our case, we only have one migration file, which has some code in it describing how to create the Users table, which columns to add to it, and what the data types of those columns should be. Go ahead and open app/models/user.rb. Notice it’s completely empty, aside from the actual class definition. We know that Google OAuth requires an email address for the authentication. We also know (from reading the Google documentation on their authentication process) that a refresh token can be used to request a new access token without having to ask the user to reauthenticate. Currently, access tokens for Glassware are only good for about 2 hours, so it’s important to store the refresh token to enhance the user experience. As such, we should really ensure that a user has both an email address and a refresh token before we save them to the database. ActiveRecord provides a validation method (validates) and a method parameter (presence) to ensure that an attribute has a non-empty value before it gets saved to the database. So, add the code on line 2 to validate the presence of these two fields:

class User < ActiveRecord::Base
  validates :email, :refresh_token, presence: true
end

Ahh, Rails makes things like this so simple! And this is great and all, but it doesn’t solve the problem we were seeing before when we tried to load up our home page. We still need to add a current_user instance variable. A popular gem for user authentication is Devise, but that’s overkill for what we need here. Instead, we can use a helper method in application_controller.rb to make an instance variable available to all of our controllers that are subclasses of ApplicationController. Open up app/controllers/application_controller.rb and use the helper_method method to add a current_user method. We’ll define the current_user method as a private method, but create a @current_user instance variable that can be accessed in our other controllers. Your ApplicationController should now look like this:

class ApplicationController < ActionController::Base
  # Prevent CSRF attacks by raising an exception.
  # For APIs, you may want to use :null_session instead.
  protect_from_forgery with: :exception

  helper_method :current_user

  private

  def current_user
    @current_user ||= User.find(session[:user_id]) if session[:user_id]
  end
end
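One piece of Ruby worth pausing on is the conditional assignment operator (||=) used in current_user. In isolation it behaves like this (plain Ruby, no Rails required):

```ruby
# ||= assigns only when the left-hand side is nil or false
current_user = nil
current_user ||= "alice"  # current_user is nil, so "alice" is assigned
current_user ||= "bob"    # current_user is already set, so this is a no-op
```

This is why repeated calls to current_user within a single request hit the database at most once: the first call caches the result in @current_user.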

Simple enough, right? We define a private method called current_user, which uses the conditional assignment operator to either assign or just return the current user by looking up the id of a user in the session hash. The session hash is managed by Rails and is available for each user of your application. You can read more about the session hash here. Save this file and refresh your browser. BOOM!

Figure 4. Home Screen after adding current_user

Alright, still not incredibly impressive, but at least it works! Let’s add some code to index.html.erb to make it a little bit more interesting:


<% if flash[:notice] %>
 <div class="alert alert-success">
 <button type="button" class="close" data-dismiss="alert">&times;</button>
 <%= flash[:notice] %>
 </div>
<% elsif flash[:alert] %>
 <div class="alert alert-error">
 <button type="button" class="close" data-dismiss="alert">&times;</button>
 <%= flash[:alert] %>
 </div>
<% end %>

<% if current_user %>
 <%= link_to 'Send Message', send_message_path, method: :post, class: "btn btn-primary" %>
<% end %>

Alright, so what did we do here? First, we check to see if there is a notice or alert inside the flash hash and, if so, display it with some bootstrap styling. You can read more about the flash hash here. Next, we check for a current_user and, if we have one, show them a button called “Send Message”, which performs an HTTP POST to this as-of-yet-undefined send_message_path. Now, refresh your browser. Hm; boring. Since we don’t have a current user, or any flash messages, we don’t actually see anything. That’s fine for now.

Go ahead and click on “Sign in with Google”. D’oh! “No route matches [GET] ‘/auth/google_oauth2’”. Well, isn’t that disappointing. Our Rails app is trying to match this route, but it can’t find it anywhere. Where does this route come from? Enter omniauth.

Get Authenticated

In its most simple form, OAuth is “an open protocol to allow secure authorization in a simple and standard method from web, mobile, and desktop applications”. History tells us that storing usernames and passwords isn’t really all that safe. So, an alternative method of authentication was created with the OAuth 1.0 standard. That standard has largely been deprecated in favor of OAuth 2.0, but the principle is the same. A user securely logs in to an OAuth provider with their username and password. In response to a successful log in, the provider passes back to the client an OAuth token, and that token is then used to make authenticated requests on behalf of the user. The username and password are never stored by the client, and the OAuth token generally has an expiration date set on it, upon which the client can either use a refresh token to request an updated OAuth token (like we will do), or the user is asked to reauthenticate.

Writing your own OAuth implementation is cumbersome and error-prone. Calling OAuth a “standard” might be a stretch, since different providers implement it differently. So, if you’re not aware of those differences for the different providers, you’ll spend a lot of time banging your head against a wall. Since I’m in no mood to put you through that (or myself, for that matter), we’re going to use a fantastic gem that handles all of the OAuth for us. Omniauth is a gem that handles multi-provider authentication for web applications. So, if you want to allow your users to authenticate with your app using Facebook, Twitter, Google, or any other OAuth provider, omniauth is a fantastic solution. It knows how to communicate with the various providers, pass along user credentials, parse responses, and return tokens. However, we already know we only need to authenticate with Google for our Glassware app, so we can use a gem that focuses solely on Google: omniauth-google-oauth2.
In order to communicate with the Mirror API once we’re authenticated, we can use the google-api-client gem. Let’s go ahead and add both of these to our Gemfile:

# Google
gem 'google-api-client'
gem 'omniauth-google-oauth2', :git => 'https://github.com/zquestz/omniauth-google-oauth2.git'

Again, run ‘bundle install’ to install the gems.

Next, we need to create an API project in the Google API Console.

Awesome. We have a client ID and secret. Where do we store those in our Rails app? Well, there are a few places, but no matter what, you don’t want them committed to source control and publicly available for other people to see. A great option for storing sensitive data like this is a configuration file. We can use the figaro gem to handle most of the setup of this configuration file, and then just add key/value pairs for our ID and secret, which we can then reference throughout our application without directly using the values. Add the figaro gem to your Gemfile. Also, add the rest-client gem, which we’ll be using later on:


gem 'figaro'
gem 'rest-client'

And run ‘bundle install’. Next, run the installation command in Terminal to get figaro all set up:


rails generate figaro:install

You should notice two things. First, a file called application.yml was created in the config directory. Second, your .gitignore file was appended. The .gitignore file pretty much does what the file name says: it tells git which files to ignore and not manage under version control. What got appended to this file? Why, your application.yml file, of course! Open up the application.yml file and add the following two lines:


GLASS_CLIENT_ID: [__YOUR CLIENT ID__]
GLASS_CLIENT_SECRET: [__YOUR CLIENT SECRET__]

The client ID and secret don’t need to be surrounded in quotes — just paste them in as they appear in the Google API Console. Now, anywhere we want to use the client ID, we simply type

ENV["GLASS_CLIENT_ID"]

And the value for the key “GLASS_CLIENT_ID” will be inserted. The same holds true for the key “GLASS_CLIENT_SECRET”. Fantastic!
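There’s no magic here: figaro loads the key/value pairs from application.yml into the ENV hash when the app boots, so reading a secret is an ordinary hash lookup. A quick plain-Ruby illustration (the value below is made up):

```ruby
# figaro effectively does this for each entry in config/application.yml at boot
ENV["GLASS_CLIENT_ID"] = "1234-example.apps.googleusercontent.com"

# ...and anywhere in the app we can read the secret back without hardcoding it
client_id = ENV["GLASS_CLIENT_ID"]
```

Because the values live only in the un-committed application.yml (or real environment variables in production), nothing sensitive ends up in source control.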

Now we need to setup the configuration file for omniauth. Checking the README for the omniauth-google-oauth2 gem gives us some guidance. Add the omniauth.rb file in config/initializers, and fill it with the code below:

Rails.application.config.middleware.use OmniAuth::Builder do
  provider :google_oauth2, ENV["GLASS_CLIENT_ID"], ENV["GLASS_CLIENT_SECRET"], {
    access_type: 'offline',
    prompt: 'consent',
    scope: 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/glass.timeline',
    redirect_uri: 'http://localhost:3000/auth/google_oauth2/callback'
  }
end

Ok, so let’s break this down a bit. As we learned before,

ENV["GLASS_CLIENT_ID"]

and

ENV["GLASS_CLIENT_SECRET"]

are environment variables. Remember when we added the client ID and client secret to the application.yml file? We can keep our ID and secret safe since the application.yml file doesn’t get checked into source control. Rails will use the figaro gem to look up the values for these two keys and replace them here. Next, we ask for ‘offline’ in the

access_type

so that Google will send us back a refresh token along with the OAuth token. This is going to be useful for refreshing the OAuth token without having to ask the user. Setting

prompt

to ‘consent’ means the user is re-prompted for authentication and/or consent. The

scope

key can be a little tricky. First we ask for

https://www.googleapis.com/auth/userinfo.email

This is mostly for the omniauth gem, but we can also use the response from this to store the user’s email. Next, we include the API endpoint for Glass timelines, so we can read and post to a user’s timeline. The redirect_uri key is the URL you set when you were setting up your Glass App Client ID and Client Secret. You can see a full list of other Glass-related scopes here.
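Because we asked for offline access, Google hands back a refresh token we can later trade for a fresh access token. As a hedged sketch (the parameter names come from the OAuth 2.0 spec; the endpoint URL and the helper name refresh_params are assumptions for illustration), the exchange boils down to POSTing a small set of parameters:

```ruby
# Google's OAuth 2.0 token endpoint (URL assumed for illustration)
TOKEN_URL = 'https://accounts.google.com/o/oauth2/token'

# Build the parameters for exchanging a refresh token for a new access token
def refresh_params(client_id, client_secret, refresh_token)
  {
    client_id: client_id,
    client_secret: client_secret,
    refresh_token: refresh_token,
    grant_type: 'refresh_token'  # tells Google which OAuth grant we're using
  }
end

# In the app, these could be POSTed with the rest-client gem we added earlier:
#   RestClient.post(TOKEN_URL, refresh_params(id, secret, token))
```

We’ll need something along these lines whenever a stored user’s two-hour access token has expired.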

We’re almost there! The last thing we need to do is write the method to perform the actual authentication and log in. For that, it’s best to create a new controller. We’ll call this the SessionsController, since it will be responsible for managing sessions. Back to Terminal!


rails generate controller Sessions create destroy

As before, this will create a new controller called SessionsController and fill it with empty implementations of the ‘create’ and ‘destroy’ actions. First we’ll add the code, then we’ll step through it:


class SessionsController < ApplicationController
  def create
    # What data comes back from OmniAuth?
    @auth = request.env["omniauth.auth"]

    # See if we have a user with this email
    @user = User.find_by_email(@auth["info"]["email"])

    if @user
      @user.refresh_token = @auth["credentials"]["refresh_token"]
      @user.save
    else
      @user = User.create(email: @auth["info"]["email"], refresh_token: @auth["credentials"]["refresh_token"])
    end

    # Store the user in the session
    session[:user_id] = @user.id

    redirect_to root_path
  end

  def destroy
    session[:user_id] = nil

    redirect_to root_url, :notice => "Signed out!"
  end
end

When we make an authentication request to Google, the user is asked to grant access to our application. Assuming they grant said access, omniauth takes care of handling the redirect and parsing the response. The part of the response we’re interested in is contained within the “omniauth.auth” hash, and so we stick that into an instance variable named “@auth”. Next, we try to find a user in our database using the “email” key contained within the “info” hash of the @auth hash (lots of hashes here). Email is a safe way of looking up a user, since a user’s Glass is tied to their Google account. If we have a user in the database already, we update their refresh token with the value from the @auth hash and save it. Otherwise, we create a new user with the email and refresh token from the hash. The ActiveRecord method “create” will automatically save the new model object as long as it’s valid. It’s a one-line approach to @user = User.new() … @user.save. We then store the user’s ID in the session hash so we can look up that user elsewhere, and simply redirect to the root path, since we don’t really have any other pages to show.
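The find-or-update-or-create pattern inside the create action can be sketched in plain Ruby, with an in-memory hash standing in for the users table (the names here are illustrative, not part of the app):

```ruby
# In-memory stand-in for the users table, keyed by email (illustrative only)
USERS = {}

def find_or_create_user(email, refresh_token)
  if (user = USERS[email])
    user[:refresh_token] = refresh_token  # returning user: update their token
  else
    user = USERS[email] = { email: email, refresh_token: refresh_token }
  end
  user
end
```

Signing in twice with the same email never creates a duplicate record; it just refreshes the stored token, which is exactly the behavior we want from the controller.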

The “destroy” action is pretty self-explanatory. We simply clear out the user_id key from the session hash and then redirect back to the root path with a flash message notifying the user that they’ve been signed out. Nothing too crazy there.

The last piece of the puzzle is to update our routes.rb file. Insert these two lines above “root ‘timeline#index'” and delete the two get routes that Rails automatically created for us:


get "/auth/:provider/callback" => "sessions#create"
delete "/signout" => "sessions#destroy", as: :signout

Your whole routes.rb file should now look like this:


GlassApp::Application.routes.draw do
  get "/auth/:provider/callback" => "sessions#create"
  delete "/signout" => "sessions#destroy", as: :signout

  root "timeline#index"
end

There is one more route we need to add: the send_message route. It will look similar to the /signout route we added before. Your routes file should now look like this:


GlassApp::Application.routes.draw do
  get "/auth/:provider/callback" => "sessions#create"
  delete "/signout" => "sessions#destroy", as: :signout
  post "/send_message" => "timeline#send_message", as: :send_message

  root "timeline#index"
end

Restart the Rails server, and now try logging in. You should see this:

Figure 5. Google Authentication

Alright! Go ahead and allow the app, and you should be redirected back to the home page with a fancy blue button on the page. Clicking on this button will obviously do nothing, since we’ve yet to define the send_message action in the TimelineController. Let’s do that now.

Sending Messages to Glass

Sending a message to Glass using the Mirror API is a relatively straightforward process, though I’ve found there’s a fair amount of code to write in order to do it. The first thing we’ll need is the refresh_token and email from the current user. I decided to create a convenience class method on the User model to return a hash containing the credential information for a given user. Granted, our user only contains attributes for email and refresh_token right now, but it’s perfectly conceivable that you may add other attributes to your user model later, while still only needing these two to insert a timeline item into Glass. Open up user.rb and add the following code under the validates method:


def self.get_credentials(user_id)
  # find_by returns nil (rather than raising) when there is no such user
  user = User.find_by(id: user_id)

  if user
    # Return the email and refresh token as a hash
    { email: user.email, refresh_token: user.refresh_token }
  end
end

This method is pretty straightforward. Head back over to timeline_controller.rb and make your send_message action look like this:


def send_message
  credentials = User.get_credentials(session[:user_id])

  data = {
   :client_id => ENV["GLASS_CLIENT_ID"],
   :client_secret => ENV["GLASS_CLIENT_SECRET"],
   :refresh_token => credentials[:refresh_token],
   :grant_type => "refresh_token"
  }

  @response = ActiveSupport::JSON.decode(RestClient.post "https://accounts.google.com/o/oauth2/token", data)
  if @response["access_token"].present?
    credentials[:access_token] = @response["access_token"]

    @client = Google::APIClient.new
    hash = { :access_token => credentials[:access_token], :refresh_token => credentials[:refresh_token] }
    authorization = Signet::OAuth2::Client.new(hash)
    @client.authorization = authorization

    @mirror = @client.discovered_api('mirror', 'v1')

    insert_timeline_item( {
      text: 'Google Glass is awesome!',
      speakableText: "Glass can even read to me. Sweet!",
      notification: { level: 'DEFAULT' },
      menuItems: [
        { action: 'READ_ALOUD' },
        { action: 'DELETE' } ]
      })

    if (@result)
      redirect_to(root_path, :notice => "All Timelines inserted")
    else
      redirect_to(root_path, :alert => "Timelines failed to insert. Please try again.")
    end
  else
    Rails.logger.debug "No access token"
  end
end

Let’s step through this. First, we use the user_id in the session hash along with our new get_credentials method to get a credentials hash. We then create a new hash called “data” using the client ID and secret from our configuration file and the refresh token from our credentials hash, and set the grant_type to “refresh_token”. We’re going to use this as the body of a POST request that exchanges the refresh token we stored in our database for a fresh OAuth access token. You can read more about exchanging the refresh_token for an OAuth token here.

We pass this data hash to RestClient and use ActiveSupport to decode the JSON that is returned into a “@response” instance variable. Next, we check if there is a value present for the “access_token” key and, if so, add it to our credentials hash. Otherwise, we just write out to the Rails logger that we couldn’t find an access token.

Assuming we got an access token, we proceed to create a new Google API client and assign it to the @client instance variable. We then create a new hash, cleverly named “hash”, and give it the access token and refresh token from the credentials hash. Signet::OAuth2::Client takes an options hash, and two of the possible keys are :access_token and :refresh_token, so we pass our hash in to create an OAuth 2.0 client. This client then gets set as the authorization that our Google API client needs.

Next, we create an “@mirror” instance variable for the Mirror API client. We then use the insert_timeline_item method to perform the actual timeline item insertion. To insert a timeline item, we POST a JSON representation of the item to the appropriate endpoint, which is set inside of the insert_timeline_item method. We pass in a few things via a hash for the actual timeline item.

First is the actual text we want displayed in our item. Second, we pass a different string for text that can be read aloud to the user. We set the notification level to ‘DEFAULT’ (currently the only option) so that the user’s Glass will “DING” when a new notification from our app comes through. Finally, we define two menu items, both of which are standard menu items for Glass. Their purposes should be self-explanatory. You can read about other possible values to pass in this JSON representation here.

If the insertion is successful, this method sets a “@result” instance variable. Depending on whether or not the insertion worked, we redirect back to the root path with an appropriate flash message. Next, we’ll define the insert_timeline_item method.

Giving credit where credit is due, the insert_timeline_item method is right out of the mirror-quickstart-ruby’s mirror_client.rb file. We’re not going to cover adding attachments to your timeline items, but this method contains the ability to do so. Your insert_timeline_item method should look like this:


def insert_timeline_item(timeline_item, attachment_path = nil, content_type = nil)
  method = @mirror.timeline.insert

  # If a Hash was passed in, create an actual timeline item from it.
  if timeline_item.kind_of?(Hash)
    timeline_item = method.request_schema.new(timeline_item)
  end

  if attachment_path && content_type
    media = Google::APIClient::UploadIO.new(attachment_path, content_type)
    parameters = { 'uploadType' => 'multipart' }
  else
    media = nil
    parameters = nil
  end

  @result = @client.execute!(
    api_method: method,
    body_object: timeline_item,
    media: media,
    parameters: parameters
  ).data
end

And that’s it! We’ve put into place all of the pieces we need to send some static text down to our Glass. Refresh your browser, click your button, and you should soon see a new timeline item inserted into your timeline. If you select the item, you should have an option to read the item aloud, or to delete it. And that’s all there is to it.

For your own reference (as well as mine at some point in the future), below are some resources scattered across the interwebs that I used when building this app:

Google Glass Reference

Exchanging a refresh token for a new access token

Mirror Quickstart in Ruby

Introduction to Objective-C Modules

Current State

Unless you’ve written C in the past, your closest encounter with the preprocessor and its code inclusion is probably the #import statement. But before #import, there was #include.

#include

The preprocessor command #include is a clever little trick. It basically tells the preprocessor to treat the contents of the included file as if that entire file actually appeared at the point of the #include. That explanation may seem a little confusing, so let’s just look at an example. Let’s say we have a file, IncludeMe.h:


// IncludeMe.h

#define kMyConstantNumber 42

#define kMyConstantBoolean true

Now, we write a little C program that uses the constants defined in the IncludeMe header file:


// MyProgram.c

#include <stdio.h>
#include "IncludeMe.h"

int main(void) {
    printf("The constant is %d and the boolean is %d", kMyConstantNumber, kMyConstantBoolean);
    return 0;
}

How does MyProgram.c know what the values of kMyConstantNumber and kMyConstantBoolean are in order to print them out from printf()? Well, what’s really happening is the preprocessor is going in and injecting the contents of IncludeMe.h into MyProgram.c. So, what the compiler actually sees when it’s compiling your program is the following:


// MyProgram.c

#define kMyConstantNumber 42
#define kMyConstantBoolean true

int main(void) {
    printf("The constant is %d and the boolean is %d", kMyConstantNumber, kMyConstantBoolean);
    return 0;
}

Sure, this is a trivial example, but it should make it pretty obvious exactly what the preprocessor is doing. Actually, the above example isn’t entirely true. To see what the preprocessed file really looks like, fire up Xcode and create a new C/C++ file. In terminal, navigate to the directory containing that file (use the cd command to change directories and navigate to the file). Then type gcc -E filename.c and observe all of the code that gets spit out to the terminal window. By default, all new C files have stdio.h included. This is the header file for all of the basic IO functions available to all C programs (such as the printf() function you saw above). The preprocessor sees that your C file includes stdio.h, and so it includes all of the code in stdio and makes it available to your C program. If you scroll aaaall the way to the bottom of the output, you’ll see the code you actually wrote. #include placed all of the code from stdio in your file as if you had copy/pasted it there yourself.

As a side note, notice that we wrapped our included header file in double quotes (" "). This tells the preprocessor “look for IncludeMe.h in the same directory as MyProgram.c”. However, stdio.h is wrapped in angle brackets (< >). This tells the preprocessor to look for this header file in the directory with all of the system headers.

Recursive Includes

So, this preprocessor #include directive is great. It lets you modularize your code, include system headers, and fosters reusability. But what happens when you have a pair of files that look like this:


// FirstFile.h

#include "SecondFile.h"

/* Some code */

// SecondFile.h

#include "FirstFile.h"

/* Some other code */

Well, the preprocessor first goes out and sees that FirstFile.h wants to include SecondFile.h inside of it. But when it goes to do that, it sees that SecondFile.h also tries to include FirstFile.h, which includes SecondFile.h, which includes FirstFile.h, which includes….ok, well, you get the picture. This is called a Recursive Include.

#include vs. #import

The recursive include is the problem that Objective-C tried to solve with the introduction of the #import directive. Using #import, a file would be guarded against recursive includes by first checking to make sure the included file was not already defined. If it was not, the file would be included, otherwise it would be skipped. Traditional C headers also support this in the form of header guards:


#ifndef MyFile_h

#define MyFile_h

// Some code

#endif

The two essentially do the same thing; however, Objective-C classes and frameworks should use #import rather than #include.

Introducing @import

Back in November 2012, Doug Gregor of Apple gave a presentation at the LLVM Developers Meeting requesting that modules, a solution to the problems inherent in preprocessor #imports and #includes, be introduced. Modules, Gregor argued, solve two problems the current preprocessor implementation faces:

  1. Fragility
  2. Scalability

With regards to fragility, a simple example exposes how ordering with preprocessor #includes and #imports matters greatly in the end. Let’s say we have the following Objective-C file:


// MyFile.h

#define strong @"this won't work"

#import <UIKit/UIKit.h>

@interface MyFile : NSObject

@property (nonatomic, strong) NSArray *anArray;

@end

What happens after the preprocessor is done doing its work? Your header file now looks like this:


// MyFile.h

#define strong @"this won't work"

// UIKit imports

@interface MyFile : NSObject

@property (nonatomic, @"this won't work") NSArray *anArray;

@end

Notice that we’ve overridden the definition of the strong keyword with something the compiler doesn’t know how to handle.

The other issue, scalability, should be apparent from the description of #include above. #include and #import are both textual inclusions: a glorified copy/paste transaction. The contents of the included file are simply pasted inline where the #include or #import statement was placed. Any files included in that file also have their contents pasted into the original file, and so on until the entire #include/#import tree is traversed. This results in a multiplicative compile time between source files and headers.

Now, you would think that for as long as C and Objective-C have been around, someone somewhere would have tried to tackle this problem. And you’d be right. Pre-compiled headers (.pch) have been in use for years to combat the scalability issue: the headers listed in the .pch are compiled once into a single on-disk representation, and that representation is included in every source file in your project. However, even .pch files come with their own set of problems:

  1. Most developers don’t maintain their .pch files. They soon become unruly and unmanageable
  2. It’s difficult for developers new to a project to understand how files are related if everything is in the .pch file
  3. Sometimes you just don’t want a file included everywhere

Modules break away from this textual inclusion model and instead act as an encapsulation of what a framework is, as opposed to just shoving the headers into your source files. Think of it as making an API of the framework available to your source file. With modules, a framework is compiled once into an efficient, serialized representation that can be efficiently imported when the library is used. Additionally, it ignores preprocessor state within the source file, meaning that you can’t override the definition of a keyword in a module just because you #define something with the same name prior to the import.

But Apple, in typical fashion, has taken modules even further. Think about it: if you import the MapKit framework, why should you also have to tell Xcode to link against the MapKit framework? With modules, you get support for autolinking frameworks by default!

And how do we get all of this header-caching-auto-linking goodness? Say hello to @import. Simply using the @import declaration will kick off the new module parsing and caching. And you can still do selective imports through the use of the dot syntax. So, for instance, to import only the MKMapView classes and none of the other classes in the MapKit module, you simply say

@import MapKit.MKMapView;

Done.

Well that’s great, but now I have to go through each of my #import statements and replace them with @import? No! Apple is taking care of this for you, as long as you opt in. To opt in, make sure you turn on the Enable Modules (C and Objective-C) build setting. Additionally, you can turn on/off the auto-linking of frameworks here, as well.

@import Build Setting

And that’s it! How does this work, you ask? Module Maps. Module Maps are a way for modules to, well, map back to their header counterparts. Then, a separate compiler instance is spawned and the headers from the Module Map are parsed. A module file is written, and then that module file is loaded at the import declaration. As before, that module file is cached for later re-use (wherever you may import the same module).

Defining a Module

Defining a module is a relatively easy process. From Apple’s own presentation to the November LLVM Meeting, here is an example of how to define the C stdio library as a module:

Module for stdio.c

The export keyword specifies the module name. The dot (.) indicates a submodule, so, in this case, stdio is a submodule of the std module. The public: keyword denotes the access to the API. In other words, which variables, methods, etc. will be publicly available. Anything defined outside of this is private to the module and remains that way. And that’s about it!
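Since the slide itself isn’t reproduced here, the following is a rough sketch of that proposed syntax, reconstructed from the description above. Note that this was proposal-stage syntax from the talk, not something today’s compilers accept, and the declarations shown are illustrative:

```
export std.stdio:        // "stdio" is a submodule of the "std" module
  public:                // everything below is the module's public API
    int printf(const char *format, ...);
  // declarations outside the public: section stay private to the module
```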

NOTE: At this time, modules are only available for Apple’s frameworks and have been implicitly disabled for C++.

Performance Improvements

All of this @import stuff is really cool in theory. But how does it stack up in practice? Doug Gregor includes an example in his presentation of the difference in the number of lines of preprocessed code using the traditional preprocessor inclusion versus the new modules approach. For the ubiquitous “Hello World” C program, the original source code file is a mere 64 lines of code. After the preprocessor has done its include of the stdio.h header, that number jumps to 11,072 lines: 173 times the number of lines in the actual source file. That’s huge! Now let’s say each of your source files imports the stdio.h header (as almost all C programs do). You’re talking about adding over 11,000 lines of code to each file you add. From a mathematical perspective, that leads to an M x N compile time, if you imagine M source files and N headers. Using modules, the stdio module is parsed only once and then cached, dramatically reducing the number of lines in your processed source files.

For smaller projects, the difference in compile time is negligible, and you probably won’t notice much of a benefit (aside from conveniences like the auto-linking of frameworks). For larger projects, however, you’re looking at potential compile time improvements of a couple of percentage points or more! Either way, this is a really interesting and welcome addition to the LLVM compiler, and as the community continues to expand on what a module is and does, the benefits of using modules will continue to grow.

Automatically Build, Archive, and Distribute to TestFlight

For the first official tutorial post, I thought I’d write about something I never saw or read too much about when I first set out to accomplish this task.

A few months ago, I wanted to streamline my TestFlight distribution process. For those of you who don’t know, TestFlight is a service that makes beta list management, provisioning, and distribution an absolute dream for iOS developers. Gone are the days of mailing out .ipa files and managing your own distribution lists. If you don’t have a TestFlight account already, sign up; it’s free!

Once you have an account, you’ll need your API token and Team Token. You can find those here and here, respectively.

Now then, on to automating!

AutoBuild

Let’s first create a new Xcode project. Nothing fancy here, just a Single View iPhone application. Go ahead and name it AutoBuild. For ease-of-file paths, feel free to save it to the Desktop. We don’t actually need to do anything with the application code, specifically, but we’re going to make a few changes to the project settings itself.

Build Configurations

Build configurations are an often-overlooked but pretty handy feature of Xcode. Using build configurations, you can set various build settings without having to change each individual setting between things like development and production builds. By default, Xcode creates two configurations for you: Debug and Release. To see what build configurations you currently have, select your project in the Project Navigator on the left, and then switch from the AutoBuild Target to the AutoBuild Project, like below:

Figure 1. Build Configurations

Let’s add a new Build Configuration for TestFlight. This will allow us to set specific build settings (such as which Provisioning Profile to use) for when we build for TestFlight releases. Start by clicking the little “+” button in the Configurations section, and choose “Duplicate ‘Release’ Configuration”. Name it “TestFlight”.

Now we need to tell Xcode that when we archive, we want to use this new TestFlight configuration. In the top of the Xcode window, click on the AutoBuild scheme (this is next to the drop down that lets you choose a different build target. It doesn’t look like it, but that’s actually two different buttons) and select “Edit Scheme”. In the window that appears, select “Archive” in the left pane. This is where we tell Xcode which configuration to use. From the drop down list, choose “TestFlight”. Your screen should now look like this:

Figure 2. TestFlight Configuration

Go ahead and hit “OK”. The last thing we’ll need to do here is set the Provisioning Profile for the TestFlight configuration. Log into your Apple iOS Developer Portal and visit the Provisioning Portal. Click on “Provisioning” and create a new wildcard Provisioning Profile called “AutoBuild Profile”. Creating this in the Development tab is fine, even though TestFlight’s documentation says (or used to say) to create an AdHoc Distribution Profile. That isn’t actually necessary for uploading to TestFlight (and, in fact, you would need a Development profile if you wanted to test Push Notifications with a TestFlight-distributed app). Download the new profile and move it to your Desktop, as well.

Add this new profile to your application by dragging and dropping it into the Xcode Organizer (shift+cmd+2). If we go back to view the Build Settings for our Target, this provisioning profile should have been automatically selected for the TestFlight configuration. If it wasn’t, just select the new profile for the TestFlight setting.

The last thing we need to be sure of is that we have the Command Line Tools installed for Xcode. To check this, in the menu bar select Xcode → Preferences, and select the Downloads tab. The line for Command Line Tools should say “Installed”. If it doesn’t, install them now.

As for the automation, we’ll use Automator. There’s no particular reason for that choice, but if you’ve never used Automator before, this is a good chance to get somewhat familiar with it.

Create The Automator Application

Begin by opening Automator and choosing to create a new Application.

Figure 3. A new Automator Application

Once that opens, you’ll see that there are a whole slew of pre-defined actions you can choose from. In true Apple fashion, you can simply drag-and-drop these actions out onto the palette to work with them. And that’s exactly what we are going to do! We only need one action for this tutorial: “Run Shell Script”. We’ll use the Shell Script to actually run our build/archive/load to TestFlight commands. Go ahead and pull out this Action, so that your workspace now looks like Figure 4 below:

Figure 4. Blank Shell Script

“Shell” We Script Some More?

There are a few things you’re going to need to know before we get going on this shell script. You’ll need to know:

  1. The location of your project. This should be /Users/[your_user_name]/Desktop/AutoBuild.
  2. The location of the Provisioning Profile for your application. This should be /Users/[your_user_name]/Desktop/AutoBuild.mobileprovision
  3. The name of the developer associated with your Provisioning Profile. This will look something like “John Doe (8FG28KU8R)”
  4. The name of the configuration you are using when you Archive. We created the TestFlight configuration earlier.

With this information in hand, we can begin our script! In the “Run Shell Script” section of our Automator app, let’s start by defining some variables. Defining variables saves us from typing the same file path differently in multiple places, and it makes the final product at least slightly more legible. When writing a shell script, use # to denote a comment. So, let’s start with a comment:

# Variables

Congratulations, you’re a shell-scripting master! Ok, maybe not quite yet, but we’ll get there. The first two variables we are going to define are the project directory and the build configuration. I tend to use ALL_CAPS_VARIABLES_SEPARATED_BY_UNDERSCORES. This seems to be a reasonable convention, and so I have stuck with it. Unless you’re a caps lock-happy programmer, this should make it fairly simple to determine what’s a variable and what isn’t. So, the next two lines of your script should look something like this:

# Variables
PROJ_DIR=/Users/jmstone/Desktop/AutoBuild
CONFIGURATION=TestFlight

Everyone still with me? Good. Next we’ll define some variables that are needed during the archive process: the project name and the scheme name. Unless you’ve explicitly created a new scheme, your project name will suffice here. I’ve broken it into two separate variables either way:

#Archive Variables
PROJ_NAME=AutoBuild
SCHEME_NAME=AutoBuild

Next come the variables for the .ipa file. When you create an archive, Xcode takes the liberty of packaging up files such as the .app file (your actual application), your .dSYM file (used for symbolicating crash reports), and your info.plist. The .app file is simply “Your Project Name”.app. The next set of variables are all necessary for creating the .ipa file. Once you’ve had a chance to look at them, I will explain each one:

#IPA Variables
APP_NAME=AutoBuild.app
IPA_LOC=/Users/jmstone/Desktop
IPA_NAME=AutoBuild.ipa
DEVELOPER_NAME="John Doe (8FG28KU8R)"
PROVISION_PROFILE=/Users/jmstone/Desktop/AutoBuild.mobileprovision

The APP_NAME variable should be self-explanatory. It’s the name of your app, in this case, AutoBuild.app. IPA_LOC is a variable denoting where you want to save the .ipa file. Here, we just save it to the Desktop, but you could save it to a specific folder to keep things organized. IPA_NAME is just the name you want to give your .ipa file. DEVELOPER_NAME is the name associated with PROVISION_PROFILE, which is simply the path to the Provisioning Profile set in the Build Settings of your target. To make sure we’re all on the same page, our script so far should look like this:

# Variables
PROJ_DIR=/Users/jmstone/Desktop/AutoBuild
CONFIGURATION=TestFlight

#Archive Variables
PROJ_NAME=AutoBuild
SCHEME_NAME=AutoBuild

#IPA Variables
APP_NAME=AutoBuild.app
IPA_LOC=/Users/jmstone/Desktop
IPA_NAME=AutoBuild.ipa
DEVELOPER_NAME="John Doe (8FG28KU8R)"
PROVISION_PROFILE=/Users/jmstone/Desktop/AutoBuild.mobileprovision

There’s just one more variable we need. Xcode, in its infinite wisdom, sticks our archive in a folder named for the date, YYYY-MM-DD. Assuming we distribute our .ipa file on the same day that we archive it (the only way that wouldn’t hold with this script is if the archive step ran just before 12:00 am and the .ipa step just after), we can use the date command, with some options, to grab today’s date in the format we need:

ARCH_FOLDER=`date '+%Y-%m-%d'`

Now, for those of you familiar with UNIX commands, the remainder of the script should look reasonably familiar. In most cases, we are simply placing our variables in the correct places. First, we need to change directory (cd) into our top-level project directory

cd ${PROJ_DIR}

This is our first exposure to referencing the variables we created earlier. It’s very simple: just wrap the variable name in ${ }. Done.

The command line tools we installed earlier add a series of bash commands in /usr/bin, including xcodebuild and xcrun. You can navigate to /usr/bin and type ‘ls’ to list all of the bash commands available to you (this is where commands such as git, cd, ls, ruby, etc. are all stored). The documentation Apple provides for these two commands actually isn’t terrible, considering the fact that most people build and archive from within Xcode itself. You can find the documentation for xcodebuild here, and for xcrun here.

Now that we’re inside of our project directory, our first command is going to be the one that builds our project. That line looks like this:

xcodebuild -target ${PROJ_NAME} -sdk iphoneos6.0 -configuration ${CONFIGURATION}

We’re using the xcodebuild command, along with the options -target, -sdk and -configuration, to build our project (${PROJ_NAME}) using the iOS 6.0 SDK. Note that this uses the iOS SDK, not the iOS deployment target, so even if you’re deploying back to, say, 5.0, you’re still building with the iOS 6 SDK. If for some reason you’re not using the latest SDK, replace this part with the SDK you are using. If you’re unsure, Xcode defaults to the latest SDK available, which as of this writing is 6.0. We’ve also told the xcodebuild command to build using our TestFlight configuration.

After the build is complete, the next step will be to archive. This is done with the command

xcodebuild archive -project "${PROJ_NAME}.xcodeproj" -scheme ${SCHEME_NAME} -configuration ${CONFIGURATION}

Here, the archive command is a buildaction on xcodebuild. Another buildaction is, ironically enough, build, and we could have specified that in our first command to xcodebuild. However, build is the default buildaction, so it’s not necessary. Other buildactions include install and clean. In this case, we’re telling xcodebuild that we want to archive the project ${PROJ_NAME} with scheme ${SCHEME_NAME} and, again, use the TestFlight configuration ${CONFIGURATION}.

Now let’s navigate to the folder containing our Archive

cd ~/Library/Developer/Xcode/Archives
cd ${ARCH_FOLDER}
LATEST_FILE=`ls -rt | tail -1`
cd "$LATEST_FILE/Products/Applications"

This changes to the Archives directory within the Xcode directory in ~/Library/Developer (~ is a short cut to your home directory). Then we change directory to the folder named using today’s date (e.g. 2012-12-11). Next, we create a variable called LATEST_FILE, which grabs the file name of the most recently created archive, because if we were to run this script multiple times in a single day, each archive would be placed in the same folder. For those new to UNIX, ‘ls’ is used to list the contents of a directory. The -r flag reverses the order in which we list those contents, and the -t flag sorts them based on time. We then use ‘ | tail -1’ to put the most recently modified file name into the LATEST_FILE variable instead of to stdout. Finally, we change directory to the Applications folder of this archive (you can view the contents of the archive in Finder by navigating to the .xcarchive file, right clicking, and selecting ‘Show Package Contents’). Make sure you wrap this in quote marks, to account for spaces.
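As a quick sanity check of that `ls -rt | tail -1` trick, here is a tiny demonstration you can run anywhere (the file names are illustrative):

```shell
# Two files created a second apart; -rt lists oldest first,
# so tail -1 keeps only the last line: the newest file
mkdir -p ls_demo && cd ls_demo
touch older
sleep 1
touch newer
LATEST_FILE=`ls -rt | tail -1`
echo "$LATEST_FILE"   # prints "newer"
```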

Next, we’ll use xcrun to package our Xcode archive into a .ipa file. The following command is used for that, and should give a little insight into how the packaging of an .ipa file actually works.

xcrun -sdk iphoneos PackageApplication -v ${APP_NAME} -o ${IPA_LOC}/${IPA_NAME} --sign ${DEVELOPER_NAME} --embed ${PROVISION_PROFILE}

Here, we’re basically just telling xcrun to package our application. Along with that, we pass in some variables such as the Application name, the location we want to save the .ipa file to, which developer to sign the file with, and which provisioning profile to use. In Xcode 4.x, this is equivalent to clicking “Distribute” in the Archive tab of the Organizer and selecting which Developer/Provisioning Profile to use.

So, that’s it for the build/archive/package process in Xcode. If you look at our script now, it should look something like this:

# Variables
PROJ_DIR=/Users/jmstone/Desktop/AutoBuild
CONFIGURATION=TestFlight

#Archive Variables
PROJ_NAME=AutoBuild
SCHEME_NAME=AutoBuild

#IPA Variables
APP_NAME=AutoBuild.app
IPA_LOC=/Users/jmstone/Desktop
IPA_NAME=AutoBuild.ipa
DEVELOPER_NAME="John Doe (8FG28KU8R)"
PROVISION_PROFILE=/Users/jmstone/Desktop/AutoBuild.mobileprovision

ARCH_FOLDER=`date '+%Y-%m-%d'`

cd ${PROJ_DIR}

xcodebuild -target ${PROJ_NAME} -sdk iphoneos6.0 -configuration ${CONFIGURATION}
xcodebuild archive -project "${PROJ_NAME}.xcodeproj" -scheme ${SCHEME_NAME} -configuration ${CONFIGURATION}

cd ~/Library/Developer/Xcode/Archives
cd ${ARCH_FOLDER}

LATEST_FILE=`ls -rt | tail -1`

cd "$LATEST_FILE/Products/Applications"

xcrun -sdk iphoneos PackageApplication -v ${APP_NAME} -o ${IPA_LOC}/${IPA_NAME} --sign "${DEVELOPER_NAME}" --embed ${PROVISION_PROFILE}

So that we can finally see some actual progress, let’s go ahead and run what we have so far. Once the script is finished running, you can open the Archive tab in the Organizer, and you should see your new Archive listed (Figure 5). You should also see an AutoBuild.ipa file on your desktop.

Figure 5. The Archive in Organizer

Next, let’s cd to our .ipa file location:

cd ${IPA_LOC}

Now we are going to use the TestFlight API and the cURL command to upload our new .ipa file to TestFlight. As I said at the beginning of this post, you’re going to need your API Token and Team Token. The TestFlight API documentation has links to both of those, so if you didn’t grab them earlier, just visit the API docs to get there quickly. Here is the last command of our script:

curl http://testflightapp.com/api/builds.json -F file=@${IPA_NAME} -F api_token='[YOUR_API_TOKEN]' -F team_token='[YOUR_TEAM_TOKEN]' -F notes='API upload from script!'

The -F flag we’re using here belongs to the cURL utility, and basically mimics you filling out a form and clicking the “Submit” button: it performs a POST with the Content-Type multipart/form-data. Fill in [YOUR_API_TOKEN] and [YOUR_TEAM_TOKEN] with the tokens you got from TestFlight. The last little bit adds any release notes you want via the notes= field.

The entire script should now look like this:

# Variables
PROJ_DIR=/Users/jmstone/Desktop/AutoBuild
CONFIGURATION=TestFlight

#Archive Variables
PROJ_NAME=AutoBuild
SCHEME_NAME=AutoBuild

#IPA Variables
APP_NAME=AutoBuild.app
IPA_LOC=/Users/jmstone/Desktop
IPA_NAME=AutoBuild.ipa
DEVELOPER_NAME="John Doe (8FG28KU8R)"
PROVISION_PROFILE=/Users/jmstone/Desktop/AutoBuild.mobileprovision

ARCH_FOLDER=`date '+%Y-%m-%d'`

cd ${PROJ_DIR}

xcodebuild -target ${PROJ_NAME} -sdk iphoneos6.0 -configuration ${CONFIGURATION}
xcodebuild archive -project "${PROJ_NAME}.xcodeproj" -scheme ${SCHEME_NAME} -configuration ${CONFIGURATION}

cd ~/Library/Developer/Xcode/Archives
cd ${ARCH_FOLDER}

LATEST_FILE=`ls -rt | tail -1`

cd "$LATEST_FILE/Products/Applications"

xcrun -sdk iphoneos PackageApplication -v ${APP_NAME} -o ${IPA_LOC}/${IPA_NAME} --sign "${DEVELOPER_NAME}" --embed ${PROVISION_PROFILE}

cd ${IPA_LOC}

curl http://testflightapp.com/api/builds.json -F file=@${IPA_NAME} -F api_token='[YOUR_API_TOKEN]' -F team_token='[YOUR_TEAM_TOKEN]' -F notes='API upload from script!'

Checking Our Work

And that’s it! We’re done. If we run the script again, we’ll see a few things. For one, we should have a new entry in our list of Archives in the Xcode Organizer. Better yet, if we visit our list of apps on TestFlight, we should see our AutoBuild app!

Figure 6. AutoBuild in the TestFlight list of apps

Even better, if we click on the app, and then on our only build in the list of Builds, we can go to Build Information and see, in the release notes, the notes we supplied in our script!

Figure 7. AutoBuild with the comments we supplied

Conclusion

And there we have it! With a little bit of shell scripting, we were able to automate the entire build/archive/package process in Xcode and upload the result to TestFlight! There’s a lot more power in the TestFlight API than simply uploading the build. For instance, if you have distribution lists set up on TestFlight, you can specify which of those lists receives the build. Even better, you can set “notify” to true in order to automatically notify eligible users that a new build is available!
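As a sketch of what that could look like — assuming the ‘distribution_lists’ and ‘notify’ fields from the TestFlight upload API, and a hypothetical list named ‘Beta Testers’ (substitute your own list name and tokens) — our upload command might grow into something like this:

```shell
# Hypothetical extension of the upload command: deliver the build to
# the "Beta Testers" distribution list (an assumed list name) and
# notify everyone on it. Tokens remain placeholders as before.
curl http://testflightapp.com/api/builds.json \
  -F file=@${IPA_NAME} \
  -F api_token='[YOUR_API_TOKEN]' \
  -F team_token='[YOUR_TEAM_TOKEN]' \
  -F notes='API upload from script!' \
  -F notify=True \
  -F distribution_lists='Beta Testers'
```

I haven’t wired this into the script above, so treat it as a starting point and check the API docs for the exact field names your account expects.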

As I stated at the beginning of this post, this is my first tutorial. If you have some ways to make this script better, or suggestions about how I could make future tutorials more clear, please feel free to leave them in the comments!

Hello World!


#import <Foundation/Foundation.h>

int main(int argc, char *argv[])
{
     @autoreleasepool {
          NSLog(@"Hello World!");
     }
     return 0;
}

Every programmer, young and old, remembers their first “Hello World!” program. My first happened to be in Java, even though the code snippet above is Objective-C.

That first program seemed like magic, and as I demystified it, I found even more magic underneath. This blog is my way of continuously demystifying the world of programming in Objective-C. They also say that the best way to learn is to teach, and I am on a constant mission to keep learning. I’ve found myself in a number of situations where I was Googling and StackOverflowing (making that a verb now) until I was blue in the face, pulling together pieces from this post or that tutorial to solve what I believed to be a very common and basic problem. There were times when I read a tutorial and thought to myself, “This wasn’t very clear. I should write a tutorial on this same topic, but in a much clearer way.”

“Stone of ARC” is an attempt to bring together all of those driving forces. Don’t let the name fool you, though! We’ll be covering topics that dive deep into the world of memory management, and outline some of the pitfalls of using ARC. I’ll also be discussing some new features of the latest version of iOS, trends within the industry, and tutorials on various other topics!

I’ll be thinking hard on a good topic for the first tutorial, so stay tuned!