Hiring: Screening Rails developers & “Derailed”

I’m currently in the position of hiring Senior Rails developers at my company, PrimeRevenue. Finding developers who fit the mold of “I’ve worked with Rails in the trenches” is a lot different than finding someone who has deployed a blog on Rails. There is absolutely nothing wrong with the latter, but when parsing resume submissions it can be very difficult to find seasoned senior developers. I’m largely a self-taught developer, imposter syndrome level 20, so I’m not particularly concerned with your degree, education or pedigree. I’m concerned with your abilities, tenacity and focus on self-learning.

The above recalls an interview I heard in which Sandi Metz referred to developer experience with a quote that turns out to be attributed to corgibytes.com:

“When I hear someone say they have 20 years of experience, I wonder if that’s really true or if they merely had 1 year of experience 20 times”.

I interpret the former as someone with a good breadth of knowledge who has licked their wounds and learned from them, all the while improving their knowledge and growing.

I’ve always hated interviews, especially contrived technical ones.

As a hiring manager, what matters to me is seeing you roll up your sleeves and solve a problem that feels somewhat real-world, in as relaxed an environment as possible. So that is what I’ve finally landed on, and it seems to be working well. My goal is to let you feel at ease and just work on some code. No tricks, no brain teasers. To get there, I have to filter candidates before they come in for the real fun.

The Derailed Interview Process

1. Screen resume online

I look for markers to decline the candidate, examples include:

  • A limited amount of demonstrable working time in Ruby.
  • Alphabet-soup resumes where it is apparent they are casting a wide net, with Ruby/Rails listed among unrelated technologies.
  • Experience only from a boot camp or school project.

I don’t immediately exclude candidates because of lack of personal online presence. If they have one, I will grade them on that, but lacking one doesn’t necessarily disqualify someone. We all have busy lives and sometimes you can’t devote 10 hours a week to being a consistent open-source contributor. This leads to an obvious question:

“What if I don’t have a lot of working Rails or Ruby experience? How can I ever get experience?”

This is the catch-22, of course: you don’t have much (or any) experience, but you need experience to get the job that requires experience. The simple and hard answer is that you have to do some moonlighting and work on some side projects. Get some real experience on your own.

Create a website with some e-commerce for your kids’ school, make a blog engine for your church, or contribute to a Gem or project online: anything that solves a real problem and allows for review online. If a picture is worth a thousand words, your code is worth more.

Don’t know where to get started?

One example is https://www.codetriage.com/. You can pick an open-source project and give it a couple of hours a week, then reference this on your resume.

2. Technical Phone Screen

This is pretty straightforward. At the risk of showing my hand to recruiters out there, it’s not much of a trade secret. The screen centers on something like http://collabedit.com/, where the interviewee and I look at some Rails code with problems that need fixing or addressing. The problems are not too obtuse, and a senior dev with some years of experience will catch them in a few minutes. I ask the interviewee to refactor the code and walk me through it as they address the issues.
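To give a flavor of the level I have in mind (this is a hypothetical snippet of my own, not the actual code used in the screen), the problems are things a senior dev should refactor on sight, like redundant iteration:

```ruby
# Hypothetical "code with problems" for a screen like this.

# Before: three passes over the data and a manual accumulator.
def total_paid_before(orders)
  paid = []
  orders.each { |o| paid << o if o[:paid] }
  amounts = []
  paid.each { |o| amounts << o[:amount] }
  total = 0
  amounts.each { |a| total += a }
  total
end

# After: the one-line idiomatic chain a senior candidate should reach quickly.
def total_paid_after(orders)
  orders.select { |o| o[:paid] }.sum { |o| o[:amount] }
end

orders = [
  { paid: true,  amount: 10 },
  { paid: false, amount: 99 },
  { paid: true,  amount: 5  },
]
puts total_paid_before(orders) # => 15
puts total_paid_after(orders)  # => 15
```

Walking me through *why* the second version is better matters as much as producing it.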

Their answers, and how quickly they are able to tackle the online code review, should reflect their level of expertise. If they can’t address the code issues, even verbally, within about 30 minutes, I don’t view them as senior.

If they pass the hands-on challenge, I then pivot into more open-ended questions focused on certain facets of Rails development. These facets were a result of some work with my coworkers and seem to work well.

  • Backend
  • Ruby/Rails
  • JavaScript
  • CSS
  • Debugging
  • Development Processes
  • Source Control
  • Databases

Going through this list takes only about 15-20 minutes and gives a good overview of where the candidate sits. Using similar questions for every interview keeps the process more objective.

Candidates are graded on a scale of 1-4; this scale was created by my old friend and work colleague Zack Adams.

  1. Noob
  2. Intermediate
  3. Advanced
  4. Expert

I won’t go into the exact rankings, but each facet is scored for the candidate using the scale above.
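For illustration only (this is not our actual tooling), recording those per-facet rankings could be as simple as a hash keyed by facet:

```ruby
# Illustrative sketch: score a candidate 1-4 per facet and summarize.
# The facet names and scale come from the lists above; the code is hypothetical.
SCALE = { 1 => 'Noob', 2 => 'Intermediate', 3 => 'Advanced', 4 => 'Expert' }

scores = {
  'Ruby/Rails' => 3,
  'JavaScript' => 2,
  'Databases'  => 3,
  'Debugging'  => 4,
}

average = scores.values.sum.to_f / scores.size
scores.each { |facet, n| puts "#{facet}: #{SCALE[n]}" }
puts "Average: #{average}" # => Average: 3.0
```

Keeping the same structure for every candidate is what makes the comparison between interviews meaningful.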

If the candidate handles the hands-on code challenge and the facet questions fairly well, they move on.

3. Technical in-person coding exercise

This is where “Derailed” comes into play. “Derailed” is a minimal public Rails 4 app that is ready to go for hands-on interviews. It’s available here: Derailed

Derailed gives candidates a quick way to set up their preferred development environment on a laptop ahead of time. It’s a barebones working Rails application with all the basics:

  • one database model
  • a single JSON API endpoint
  • a dashboard page
  • a single passing RSpec request test
  • a seeded database
  • and more.

The focus is on hitting the ground running when the candidate comes in to interview. No setup, no friction.

3.1 Next step: backend & frontend challenges

After the candidate is connected to WiFi and has their computer up on the shared monitor, we ask them to implement a straightforward backend feature. While it might be a little contrived, the feature feels real-world. I won’t share all the “challenges”, but here’s an example: “Implement a feature in the API to fetch random audio (using a keyword in the record).”
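To give a sense of the expected scope, here is my own rough sketch of one possible answer, with a plain array standing in for Derailed’s actual ActiveRecord model so it runs outside Rails (the `Record` struct and `AUDIO_LIBRARY` lookup are hypothetical, not part of the app):

```ruby
# Hypothetical sketch of the backend challenge; in the real app this
# logic would live behind the JSON API endpoint.
Record = Struct.new(:id, :keyword)

RECORDS = [
  Record.new(1, 'thunder'),
  Record.new(2, 'rain'),
  Record.new(3, 'thunder'),
]

# Stand-in audio lookup keyed by the record's keyword; a real
# implementation might call an external audio search service here.
AUDIO_LIBRARY = {
  'thunder' => ['thunder_a.mp3', 'thunder_b.mp3'],
  'rain'    => ['rain_loop.mp3'],
}

def random_audio_for(record)
  candidates = AUDIO_LIBRARY.fetch(record.keyword, [])
  candidates.sample # nil when no audio matches the keyword
end

record = RECORDS.first
puts random_audio_for(record) # one of thunder_a.mp3 / thunder_b.mp3
```

The point is not the exact solution; it’s watching how the candidate decomposes the feature and where they put the logic.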

In a short amount of time, we are able to ascertain the interviewee’s capability. We add complexity as time passes, and at about the halfway point in the scheduled time we pivot from the backend feature to presentation. Using the above example: “Using the random audio attribute, add a button to play the audio for each record.”

After about 1-2 hours, I feel we can safely gauge the interviewee’s skill in actually working with Rails and Ruby on a daily basis.


By focusing on semi-real-world problems, both in the phone screen and in person with a quickstart application, we hit the ground running. Skip the fluff and make the best use of everyone’s time. It’s also important to cut the interview short if there’s not a fit; there’s no sense in prolonging the inevitable.

Digging into Google BigQuery GitHub Data


At my day job, our development team is making future-impacting decisions on architectures and frameworks. I believe in collaboration and openness when it comes to such broad technical decisions and prefer not to hand down mandates to the team. Besides, developers will tend to push toward the tools they want anyway.

In particular, the team was vetting frontend UI frameworks to base all new projects on, and to transition older projects to, as part of moving to an API-first mentality. By frontend UI framework, I mean CSS/HTML (and some JS) focused on presentation, responsiveness and cross-browser compatibility. We had three main contenders on the table: Bootstrap, Foundation and Semantic UI.

To help make a quantitative decision, I wanted some hard data on the usage and penetration of Semantic UI with respect to the support, community and long-term prospects of the project. There are plenty of other ways to gauge this, but I recalled an episode of The Changelog Podcast (highly recommended)1 and realized I could have a little fun digging deep into all of GitHub’s public repos. See the bottom section for a link to the episode.

You can use this same or a similar approach to answer questions for your own software problems or decision making. This post is an attempt to show an example use case.

Create Google Cloud Account, add billing and create project

A Google Cloud account with a billing account set up is required to get started. You will have to use a credit card, but it’s free for first-time users. Presently, Google gives you $300 of credit to start playing with Google Cloud (https://support.google.com/cloud/answer/6288653?hl=en).

After you have created an account following the how-tos from Google, go to: https://cloud.google.com/bigquery/quickstart-web-ui

The above link will let you set up a Google Cloud project. I created a project called “bigquery-semantic-ui-work”.

One hint: if you can’t see a link or screen mentioned in the docs, try going straight to the console at console.cloud.google.com for billing, etc., and/or https://bigquery.cloud.google.com for BigQuery.

While viewing the BigQuery console, click the small blue arrow beside your project to create a dataset to save your results to in the next step. A dataset can be thought of like a DB schema or tablespace.


I created a dataset called “semantic_ui_bigquery_fun”.


Phase 1 – Querying data

If you recall, my goal was to find all repos in the GitHub public data that used Semantic UI. The GitHub dataset docs don’t list (that I could easily find) the exact columns you can query against, so here they are today:

{id, size, content, binary, copies, sample_repo_name, sample_ref, sample_path, sample_mode, sample_symlink_target}

Here was my final query to get first set of results:

SELECT sample_repo_name, sample_path,
SPLIT(content, '\n') line
FROM [bigquery-public-data:github_repos.sample_contents]
WHERE (sample_path LIKE '%.html'
OR sample_path LIKE '%.json'
OR sample_path LIKE '%.js')
AND NOT binary
HAVING (line CONTAINS 'semantic.js'
OR line CONTAINS 'semantic.min.js'
OR line CONTAINS 'semantic-ui-css'
OR line CONTAINS 'semantic-ui.js'
OR line CONTAINS 'semantic-ui-less'
OR line CONTAINS 'less-plugin-semantic-ui'
OR line CONTAINS 'semantic-ui-ember'
OR line CONTAINS 'semantic:ui'
OR line CONTAINS 'semantic:ui-css')

Enter the query into the text area and click the red “Compose Query” button. After a few seconds, the query returns a table and some more options. Pretty fast, too.

Query complete (6.7s elapsed, 23.9 GB processed)

Now save the results to a table. I created a table called “semantic_all_repo_results”. This is effectively a temporary table; by using a temp table, you won’t burn as many resources to get at the data you care about.

Using this table, I got the distinct list of repos that included something related to Semantic UI.

SELECT sample_repo_name from [bigquerysemanticuiwork:semantic_ui_bigquery_fun.semantic_all_repo_results] GROUP BY sample_repo_name

Note, BigQuery doesn’t support the actual DISTINCT operator; you must use GROUP BY to accomplish the same task.

Now that I had a list of repos, I was ready to get more info relevant to my basic business question: how much of a community and what support surrounds Semantic UI?

Phase 2 – Obtaining meta data from Github on repos

The BigQuery data is great, but it doesn’t include things like stargazer counts or fork counts. So the last part was saving the above query results with unique repos to a txt file and using a little Ruby to analyze them.

#Gemfile contents
source "https://rubygems.org"
gem 'octokit'
gem "pry"
# END Gemfile

require 'rubygems'
require 'bundler/setup'
require 'octokit'
require 'csv'

class GHClient
  def initialize(user, token)
    @client = Octokit::Client.new \
      :login => user,
      :password => token
  end

  def github
    @client
  end
end

# get an OAuth token from your GH account management,
# export your user and token to ENV and run this ruby script
@gh = GHClient.new(ENV['GH_USER'], ENV['GH_TOKEN']).github
@all_repos = File.read('./semantic-repos.txt').split("\n")

STAT_METHODS = ['stargazers_count', 'watchers_count', 'open_issues_count',
                'subscribers_count', 'forks_count', 'created_at']

puts "Gathering stats"
CSV.open("/tmp/semantic_ui_repo_stats.csv", "w") do |csv|
  csv << STAT_METHODS + ['repo_name']
  @all_repos.each do |repo|
    begin
      repo_info = @gh.repository(repo)
    rescue Octokit::NotFound
      next # repo was renamed or deleted; skip it
    end
    stats = STAT_METHODS.collect do |meth|
      putc '.'
      value = repo_info.send(meth)
      meth == 'created_at' ? value.to_s : value
    end
    csv << stats.concat([repo])
  end
end
puts "CSVs generated"

I could now inspect the metadata. It wasn’t perfect; there were still some duplicates, likely because of my quickly hacked-together script. But that’s not the point. I was able to cope with this easily in a spreadsheet and get some metrics on Semantic UI in about an hour. I shared my result file on a Google Doc in case someone wants to see the final result.
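For what it’s worth, the leftover duplicates could also be collapsed with a few more lines of Ruby instead of a spreadsheet. A sketch, assuming the CSV layout produced by the script above (repo_name as the last column; the sample rows here are made up):

```ruby
require 'csv'

# Deduplicate result rows by repo name, keeping the first occurrence.
# The inline CSV stands in for /tmp/semantic_ui_repo_stats.csv.
raw = <<~DATA
  stargazers_count,forks_count,repo_name
  120,30,acme/widgets
  120,30,acme/widgets
  45,9,example/demo
DATA

rows = CSV.parse(raw, headers: true)
unique = rows.uniq { |row| row['repo_name'] }
puts unique.map { |row| row['repo_name'] }.join(', ')
# => acme/widgets, example/demo
```

For a one-off analysis like this, though, a spreadsheet is a perfectly reasonable tool.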


Wrapping up

Ensure you clean up your datasets after you are done or you will burn credits! And go big and query.


1The Changelog 209: GitHub and Google on Public Datasets & Google BigQuery – Listen on Changelog.com