Thursday 30 April 2009

Should have form fields (macro)

Quickie snippet form-macro to make sure that the page you're on has the given set of form fields. You can optionally pass in a model name for "form_for"-style forms (because these are pretty common), but it also lets you just pass in straight field names (for when you're checking your submit buttons and hand-crafted tags).

Stick this at the bottom of your test_helper.rb and use as below:
should_have_form_fields [:name, :email, :phone, :gender], 'user'

Note: this works for text inputs as well as radio buttons and checkboxes (which is why we check for name rather than id - it also means you don't match the label by accident). This assumes you've used Rails' form helpers to generate your form fields - or that you have included a name attribute on every field.

class ActionController::TestCase
  # A macro to check that we should now have form fields for the given
  # columns.
  def self.should_have_form_fields(fields, model_name = nil)
    return if fields.blank? # short circuit
    fields = [fields] unless fields.is_a?(Array) # arrayify a single field name
    fields.each do |f|
      should "have field #{f} on form" do
        assert_select 'form' do
          if model_name.blank?
            assert_select "[name=#{f}]"
          else
            assert_select "[name='#{model_name}[#{f}]']"
          end
        end
      end
    end
  end
end
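
In a functional test it ends up looking something like this (the controller, action and field names here are made up):

class UsersControllerTest < ActionController::TestCase
  context "GET to new" do
    setup { get :new }

    # form_for(@user) fields:
    should_have_form_fields [:name, :email, :phone, :gender], 'user'
    # hand-crafted tags (eg the submit button) just go by their name attribute:
    should_have_form_fields :commit
  end
end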

Dehumanizing Rails

So we have a string that has been "humanized" - and all the underscores have been stripped out and replaced by spaces, and it's been nicely capitalised (sorry, make that "capitalize"d). So how do we go back the other way?
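
As a reminder of what humanize does in the first place (straight from script/console):

>> "network_type".humanize
=> "Network type"
>> "user_id".humanize
=> "User"    # note that humanize strips a trailing "_id" entirely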

Plonk the following into config/initializers/inflections.rb (or if you're using a legacy Rails, drop it onto the end of config/environment.rb).

module ActiveSupport::Inflector
  # does the opposite of humanize.... mostly. Basically does a
  # space-substituting .underscore
  def dehumanize(the_string)
    the_string.to_s.downcase.gsub(/ +/, '_')
  end
end
class String
  def dehumanize
    ActiveSupport::Inflector.dehumanize(self)
  end
end

Note: it will not reverse any special-case stuff like adding back any "_id"s etc, but it does mean you can do something along the lines of:
assert_equal "network_type", "Network type".dehumanize

Tuesday 28 April 2009

Playing nice with XML and HTTP

So we've got a setup that has a remote API that we're accessing using HyperactiveResource (an extended version of ActiveResource). Now, I'm using Rails to simulate the remote API (for the purposes of testing), and I've come across some annoying behaviour.

One issue is that standard rails routing for a RESTful interface will direct a badly-constructed (or non-existent) URL to a real action... let me demonstrate thus:

  • Real member path: /users/1.xml - routed to :controller => 'users', :action => 'show', :id => '1'
  • Real named collection path: /users/count.xml - routed to :controller => 'users', :action => 'count'
  • Non-existent path: /users_party_on.xml - routed to the "Bad Request" handler
  • Non-existent path: /users/party_on.xml - routed to :controller => 'users', :action => 'show', :id => 'party_on'

If I called the last URL with curl, I'd expect to be routed to the "Bad request" handler and receive some sort of error-like http-status and an XML message explaining that no route exists or something similar... what I get instead is a horrible big *html* page telling me it couldn't find the user with an id of "party_on" (unsurprisingly).

So what do I want to have happen? I'd rather this stuff was caught in the router. It'd be nice if there were a way to tell the router that your :controller/:action/:id is only valid for a certain formatting of the :id field. If anybody out there on teh Intarwebs knows how to do that, please tell me now!

Unfortunately, it doesn't seem to do this... and in any case, the router/dispatcher also doesn't seem to return XML to an XML-request... it only seems to know how to handle HTML-based errors (by spitting back the public error pages[1]).

So instead, what I need is to return a "URL not found"-style xml error at the appropriate time.

Most of my controllers have a "find_thing"-style filter on the member actions (ie just @thing = Thing.find(params[:id])). Now, since the bad URLs tend to converge on the "show" action - this seems as good a place to put a bad-request filter as any. I'll also incorporate it with the 404 handling that also seems to be missing when a record doesn't exist (or is not accessible by this person).

So this calls for a helper-method as below, as a hack to fix this lack of proper routing.

  # convenience method for extracting the expected model name from the
  # controller name
  # Note: expects the model to be rails-standard eg "ThingsController"
  # should map to the Thing model
  def model_name
    self.controller_name.singularize.camelize
  end

  # use this to skip out early and return better http status codes for XML
  # requests.
  #
  def find_thing
    the_id = params[:id]
    # ids should be numeric. If they're not - we accidentally got through
    # the router with an unrecognised action - because Railsy named-routes
    # that *don't* exist, look like the "show" action with a bad id.
    if !the_id.blank? && the_id.to_s !~ /\A\d+\z/
      # skip out with a 400 early...
      respond_to do |format|
        format.xml do
          return render :xml => 'Error: URL not recognised', :status => :bad_request 
        end
      end
    end
    begin
      thing = model_name.constantize.find(the_id.to_i)
    rescue ActiveRecord::RecordNotFound
      # skip out with a 404 early...
      respond_to do |format|
        format.xml do
          return render :xml => 'Error: resource not found', :status => :not_found 
        end
      end
    end
    thing
  end

Due to how controller before_filters work[2], you should use this code thus:

class UsersController < ApplicationController
   before_filter :find_user, :except => [:new, :create, :index, :count]
   # actions all go here
   # ...
protected
  def find_user
    @user = find_thing
  end
end

Caveat: if you have non-standard naming of models/controller - the model_name.constantize will not work... so you may want to modify this to pass in an optional klass param.
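
For what it's worth, the tweak might look something like this - an untested sketch, where find_thing_for and LegacyPerson are made-up names, and I've dropped the respond_to wrapper for brevity (these controllers only ever serve XML anyway):

  # as find_thing above, but with an optional class override for controllers
  # whose names don't map cleanly onto a model
  def find_thing_for(klass = nil)
    klass ||= model_name.constantize
    the_id = params[:id]
    if !the_id.blank? && the_id.to_s !~ /\A\d+\z/
      return render(:xml => 'Error: URL not recognised', :status => :bad_request)
    end
    klass.find(the_id.to_i)
  rescue ActiveRecord::RecordNotFound
    render :xml => 'Error: resource not found', :status => :not_found
    nil
  end

  # then in the oddly-named controller:
  #   @person = find_thing_for(LegacyPerson)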

Notes:
[1] and when exactly are they going to make these into templates so we can use the standard layout rather than hand-coding it for each one?
[2] ie, I'm too slack to figure out exactly how to do the "@thingy =" assignment in a block passed to the before_filter command. Again - if you know how, let me know.

Monday 27 April 2009

Testing ActiveResource

I've been banging my head against the wall that is ActiveResource for a while now. One big problem is actually getting the testing correct.

Take 1: HTTPMock

The supposedly sanctioned expectation is that you test it against the provided HttpMock... unfortunately, while this is great for testing that ActiveResource makes the correct remote calls (eg that when you update a user, ARes PUTs to /users/123.xml) and that it reacts to a (pre-stubbed) 404 by raising a ResourceNotFound, it doesn't allow you to actually test your model - eg to assert that when you update your user's login field and reload your model, that user can now log in with the new details...
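
For reference, the HttpMock style of test sets up canned responses something like this (the paths, variables and expectations below are invented for the example):

ActiveResource::HttpMock.respond_to do |mock|
  # each line is: path, request headers, response body, then (optionally) status
  mock.get  "/users/1.xml",  {}, @user_xml
  mock.get  "/users/99.xml", {}, nil, 404
  mock.post "/users.xml",    {}, nil, 201, "Location" => "/users/1.xml"
end

assert_kind_of User, User.find(1)
assert_raise(ActiveResource::ResourceNotFound) { User.find(99) }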

AFAICS you can't test anything that is actually meaningful or useful to your business logic - which kinda defeats the point IMO.

Even with a lot of hacking, I couldn't get the kind of dynamic mock backend that I needed for any meaningful test-coverage.

Take 2: Mocking with Stump

Next up was to try to mock/stub out the backend functions appropriately. I'd heard some good stuff about the stump plugin and dutifully installed it to have a go.

Stump let me do much more dynamic testing. I could stub out the "create" and "find" functions and make them respond with a mock remote object on which I could then further stub out the "save" and "update" functions... and after a while it seemed like I was stubbing out functions to return stubbed-out stubs and it all got a little bit circular... and in many cases: really complex.
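
From memory, the stubbing looked something like this (these are stump's stub! calls - the method names and return values here are invented for the example):

# stub the class-level finder to hand back a canned user...
User.stub!(:find, :return => @fake_user)
# ...then stub the instance methods the code under test will call on it
@fake_user.stub!(:save, :return => true)
@fake_user.stub!(:login, :return => "fred")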

I also found that when I got the code to work in the browser, I'd often still have to spend ages getting the tests to work with the right set of mocks and stubs to patch all the ways that Rails could leak out and try to call the "real" API. :P

I also noticed a few times where I'd somehow accidentally plastered over a bug with a stub by assuming the return-value. I only noticed this because the tests would run fine - yet trying the same operation in the browser revealed the bug.

Now how can I test if I can't rely on the test to actually test? My safety net suddenly has big gaping holes in it that I'll only find if I fall through them while testing by hand! This is such a step backwards that it's horrifying!

Then I hit a wall.

We have a user, and during creation (and update) we need to check that the login is unique - which requires a find on the remote API. So in the unit tests we mock out the User object's "find" method to return an empty set (ie no users were found that match this login field). But then (immediately after creation), we need to be able to find the user - so we re-stub User.find to return the given user - otherwise the test fails.
... but when you're doing a functional test, all of the above operations are inside the atomic post :create call. You can't stub out one half, then stub out the other half of the operation partway through... because you can't get to the user object partway through the process... so creates were suddenly failing badly with seemingly no solution.

The code didn't do what I needed, and on top of this was tangled and messy and really too ugly. When it comes to rails, ugly code screams out for another solution...

Take 3: A real API

So, I turned to (what I see as) the lesser of two evils. I have created a test-API project that simulates the real remote API with the models that are used by the (local) project. The test environment calls this API instead of the "real" one by using a defined constant for the site.

In setup I send the ActiveResource model a delete_all which will clear the remote db[1]. This simulates what Rails testing does anyway (ie clearing out the db before each test).
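
For what it's worth, the wiring looks roughly like this (the constant name is mine, and delete_all is whatever your ARes/HyRes extension provides - vanilla ActiveResource doesn't have one):

# config/environments/test.rb - point the models at the fake API
REMOTE_API_SITE = "http://localhost:3001"

# app/models/user.rb
class User < ActiveResource::Base # HyperactiveResource in our case
  self.site = REMOTE_API_SITE
end

# test/test_helper.rb (or each test's setup)
def setup
  User.delete_all # ask the fake API to wipe its users before each test
end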

So now the project tests against a real, live-but-fake API that I can run in another window on my local machine. The tests run - they look pretty much the same as my ActiveRecord tests - which means I can check everything I need to check. It also means I can be assured that the code hasn't plastered over a bug with mocks and stubs.

Conclusion:

Pros:

  • Real end-end tests
  • Dynamic remote object instead of static mocks (means you can test that values are changed and that reloading returns what you expect)
  • Fewer assumptions means the test-code is less likely to be buggy - ie you're more likely to be actually testing what you think you're testing.
  • Easier to test as you don't have to mock out things that will return things that you've mocked...
  • Above leads to a more natural Railsy test syntax.

Cons:

  • Keeping two projects in synch
  • Extra development time for the fake API
  • Will need to make sure the API a) exists and b) is running before you can run tests
  • Have to verify that the mock API does what the real API does.
  • Tests run slower as there's the (local) network turnaround time to consider.

In my mind, the pros outweigh the cons. YMMV

Notes:
[1] We're currently not using fixtures anyway, but if we were - I would then send the remote API a "please load up your fixtures" command.

Wednesday 22 April 2009

should_set_the_flash better

should_set_the_flash_to doesn't do quite what I'd like. I want to: a) specify which level of flash should be set (ie make sure it's a notice and not an error) and b) not care exactly what notice has been set.

IMO tying your tests down too hard to ephemeral strings is annoying... what if somebody changed the wording from "user created" to "thanks for signing up"? You have to change your test cases for that? :P So, being able to just say "anything" should be possible.

Note - it piggybacks on the existing flash shoulda macro, and thus allows you to use it as before (pass 'nil' for the level if you want - but it's better just to use the original if that's all you need).

So, here's my updated code. Dump it into the bottom of your test_helper file.

class ActionController::TestCase
  # Make sure that a message has been set at the given flash level
  # you can test that a notice has been posted, but no error thus:
  # should_set_the_flash :notice
  # should_not_set_the_flash :error
  def self.should_set_the_flash(level = nil, val = :any)
    val = /.*/i if val == :any
    return should_set_the_flash_to val unless level
    if val.blank?
      should "have nothing in the #{level} flash" do
        assert flash[level].blank?, "but had: #{flash[level].inspect}"
      end
    else
      should "have something in the #{level} flash" do
        assert !flash[level].blank?, "No value set of given flash level: #{level}"
      end
      should "have #{val.inspect} in the #{level} flash" do
        assert_contains flash[level], val, ", Flash: #{flash[level].inspect}"
      end
    end
  end
  # convenience method for making sure nothing has been set for the given
  # flash level
  def self.should_not_set_the_flash(level = nil)
    should_set_the_flash level, nil
  end
end
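
Used in a functional test it looks something like this (the controller and params are made up):

context "POST to create with valid params" do
  setup { post :create, :user => { :login => 'fred' } }

  should_set_the_flash :notice      # some notice - don't care what it says
  should_not_set_the_flash :error   # and definitely no error
end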

[Edit: 22-Apr-2009] : ADDED a convenience method for "not_set" and updated "set" to take advantage of this. Also updated comments to include an example of use

Thursday 16 April 2009

rails gotchas: nested transactions

So I've been adding some new plugins for testing that I've never tried before, in this case: shoulda and stump, and found that stump just isn't playing nice with SQLite :P

The problem only occurred when I tried to declare a proxy! function on an existing model. OK, I didn't get around to trying everything, so maybe it occurs in other situations too, but it seemed to be happy with stub and mock.

The code would simply fail with the following error:

SQLite3::SQLException: cannot start a transaction within a transaction
<insert useless backtrace here>

It's been driving me crazy for the last few days, trying all sorts of ways to get it to work. I almost abandoned transactional fixtures entirely before I finally found this snippet at the very *bottom* of the ActiveRecord::Transactions page in APIdock:

"Most databases don’t support true nested transactions. At the time of writing, the only database that we’re aware of that supports true nested transactions, is MS-SQL."

So the problem is not transactions... but SQLite's lack of support for proper nested transactions.

The solution: Install MySQL.

It's annoying to have to go back to creating a MySQL db/user and granting rights before I can run my tests - but if it makes the test code work, then it's worth that extra step.

SQLite is nice, but its ability to blow away the db by just doing a quick rm isn't worth not being able to properly test my code. Besides - the real db is on MySQL anyway, so I figure I might as well.

Thursday 9 April 2009

Rails gotchas: shoulda not_allow_values_for

If you're using should_not_allow_values_for and getting a failing test something along the lines of:

Failure:
test: User should not allow email to be set to "b lah". (UserTest)
    [/usr/lib/ruby/gems/1.8/gems/thoughtbot-shoulda-2.10.1/lib/shoulda/assertions.rb:56:in `assert_rejects'
     /usr/lib/ruby/gems/1.8/gems/thoughtbot-shoulda-2.10.1/lib/shoulda/active_record/macros.rb:174:in `__bind_1239266378_671612'
     /usr/lib/ruby/gems/1.8/gems/thoughtbot-shoulda-2.10.1/lib/shoulda/context.rb:253:in `call'
     /usr/lib/ruby/gems/1.8/gems/thoughtbot-shoulda-2.10.1/lib/shoulda/context.rb:253:in `test: User should not allow email to be set to "b lah". ']:
Expected errors to include "is invalid" when email is set to "b lah", got errors: email is too short (minimum is 6 characters) ("b lah")email should look like an email address. ("b lah")

You can see in the above test that the email does have an error on it - but the error message is not the default. shoulda specifically checks for the error message - and if you're not using the default message, then you need to pass yours in thus:

should_not_allow_values_for :email, "b lah", :message => Authentication.bad_email_message
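
For reference, the non-default message comes from the validation that restful_authentication generates in the User model, which looks something like:

validates_format_of :email, :with => Authentication.email_regex,
                            :message => Authentication.bad_email_message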

You shoulda test your plugin!

So, let's assume you have a plugin "acts_as_a_teapot" that you want to include in some of your model classes. You've written a number of useful methods for teapot-like models to use, and now want to check that the models implementing acts_as_a_teapot actually are able to make use of the full spectrum of teapotly functionality.

You could write a set of asserts and tell your plugin's users to "please include these asserts/tests in your model classes"... but it's not DRY, the users might miss one, and they'll get out of date real quick... what you want is kinda one big assert they can just put in once, and that calls some library code in the plugin itself (so it keeps up to date with the latest plugin code).

Luckily, shoulda is here to save the day! You can create a big context full of all the right tests and save it in the plugin file itself, so it gets loaded along with the plugin. Then the user just has to call a single "shoulda" macro and it's all done for you.

Now, I was against shoulda for a long time - mainly wondering why anybody would use a half-arsed version of rSpec if they didn't actually want rSpec... but for me, plugin-testing is the killer-app that forced me to re-evaluate shoulda, and so far it actually looks ok. :)

So, to the code...

Plugin code

module Acts
  module Teapot
    def describe_me
      "short and stout"
    end
    def tip_me_over
      "pour me out"
    end
  end
end

# the usual acts_as-style hook so that models can pull the module in
# (your plugin's init.rb probably already does something along these lines)
class ActiveRecord::Base
  def self.acts_as_a_teapot
    include Acts::Teapot
  end
end
class Test::Unit::TestCase
  def self.should_act_as_a_teapot
    klass = model_class # shoulda helper: infers MyTeapot from MyTeapotTest

    context "A #{klass.name}" do
      setup { @new_klass = klass.new }

      should "respond to teapotly functions" do
        [:tip_me_over, :describe_me].each do |f|
          assert @new_klass.respond_to?(f), "#{klass.name} should respond to the function: #{f}"
        end
      end
      should "be short and stout" do
        assert_equal "short and stout", @new_klass.describe_me, "#{klass.name} is a funny-looking teapot."
      end
      should "pour me out" do
        assert_equal "pour me out", @new_klass.tip_me_over, "#{klass.name} doesn't make very good tea!."
      end
    end
  end
end

and testing the model:

class MyTeapot < ActiveRecord::Base
  acts_as_a_teapot
  #...
end

class MyTeapotTest < ActiveSupport::TestCase
  fixtures :my_teapots

  # plugin contexts
  should_act_as_a_teapot

end

There's a real-world example in the paperclip shoulda test

rails gotchas: assert_raises a syntax error

I found that assert_raise was causing a syntax error of the form:
syntax error, unexpected '{', expecting kEND (SyntaxError)
For a fairly simple test:

should "remove it from the database" do
  assert_raise ActiveRecord::RecordNotFound { User.find(@uid)}
end

Looks like it's getting confused about the nesting. The solution? Parenthesise your error class thus:

should "remove it from the database" do
  assert_raise(ActiveRecord::RecordNotFound) { User.find(@uid)}
end

Monday 6 April 2009

rails gotchas: restful_authentication not 2.0 compliant?

restful_authentication[1] is a great plugin, but the standard svn version is showing its age. It's a whole month or so out of date... which is, of course, an eternity in the fast-paced world of Rails.

Luckily, it's actually just moved home. You can find the up-to-date version on github.

[1] Formerly known as acts_as_authenticated

rails gotchas: HttpMock not enough variables

A quickie for my own remembrance. I'm currently setting up HttpMock to test ActiveResource. I'd set up a few extra "routes" for it to mock out and kept getting the error below:

NoMethodError: undefined method `size' for :not_found: Symbol

For the code line:

mock.get    "/users/#{uid}.xml", {}, :not_found

The fix was pretty simple - I'd just accidentally left out the "nil" for the body. It was effectively an argument error, but one that Ruby couldn't pick up, because all the trailing parameters (body, status, response headers) are optional - so :not_found was quietly taken as the body. The code should have been:

mock.get    "/users/#{uid}.xml", {}, nil, :not_found

Thursday 2 April 2009

ActiveRecord::Validations in ActiveResource

The holy grail for ActiveResource users is for ARes to actually behave like ActiveRecord. In theory, ARes is just like AR, but in practice it's only kinda, sorta like AR... but missing a few bits that really seem to make all the difference.

ARes is still missing fundamental functionality that we have all grown to know and love... It all looks alright on the surface, but you can't help but notice the giant glowing absence the moment you decide to hide your models away in a Web service and then try to use ARes to implement a Railsy front-end.

Needed functionality includes:

  • Associations (ie has_many/belongs_to)
  • the usual suspects of callbacks (eg before_save)
  • safe-making your attributes (eg attr_accessible)
  • Widget.count
  • Actual conditions in finders (eg :conditions => {:name => 'Joe Bloggs'})
  • and the all-important Validations

I need my validations. The funky way rails handles AR errors is one of the things that makes Rails so special. I love to be able to just type validates_presence_of :foo and for everything else to Just Work.

ARes doesn't bother with them at all - and in that case I hardly see why Rails can call it AR-like when these are missing. Oh yes, sure, you can overload the validate method on your model object, but that seems very crude! Like having to hand-write your database connection code for each model. :P

All I can say is: why can't I have validates_presence_of independently of the database connection? The database isn't really necessary for that - so why are Validations still locked inside the db-wrapper?

In my opinion, ActiveResource needs a lot of upgrading. Unfortunately, that looks like a fair bit of work... and we don't know how long it will take, or whether it will just be superseded once Merb merges with Rails.

Luckily we have an interim solution, in the form of a plugin called HyperactiveResource. It is fairly crude, but it seems to roughly give us an ActiveRecord-like interface that works fairly well.

Before I even touched the code, the plugin already came with a lot of the currently-missing functions - rebuilt with ARes-style processing. It also had a rough implementation of Associations (not entirely AR-like, but getting there). I've been working on adding validations and validation-callbacks.

I can't say it works perfectly - I've only just started working on it as of this morning - but I've already got the basic validations working without falling over completely, and I'm using the models auto-generated by restful_authentication to test it.

It's a start...

git gotchas: the wrong forking repository

So I'm pretty new to git, really, and am learning all the places that I can stumble and fall head-over-arse. Today's escapade is entitled "how to recover from forking the wrong repository in github". It's quite a simple recovery, and you'd think it'd be obvious how to recover from it... you'd also think that github would have a "how to" in the main help page...

This solution is based on having just made the mistake and wanting to just delete it and start over straight away. Obviously this doesn't work if, say, you've started making changes and want to keep them - you're on your own there.

The solution... delete the repository and start again

  1. Go to your version of the repository on github
  2. Choose the "Admin" tab (from the list at the top)
  3. Right down the bottom of the page is a section labelled "Administration". At the bottom of the box is a small, blue link labelled "Delete this repository".
  4. Click that link - then click "Yes" a few times to convince github that you really are sure.

Now you're done. The repository may not be deleted for a few minutes (it's a background process) and your dashboard may be cached so it may appear for a little while longer. Once it's gone you can go back and pick the correct fork of the repository that you want.