
Wednesday, 23 June 2010

Email campaign tracking with Silverpop

SilverPop offer themselves to the world as an "Engagement marketing solution". They provide click-stream and conversion tracking for your email marketing campaigns by putting unique identifiers in the links of your mailings (which they can send out in bulk). If you store these IDs and ping them back to SilverPop's click-stream and conversion-tracking servlets, you get complete conversion analysis for your email marketing campaigns.

I'm currently working for Moneyspyder, and one of our clients has signed up for a Silverpop account - which means we need a way to integrate with SilverPop easily. The examples they give are all in PHP (and a little less than dynamic), so here's my Ruby on Rails solution.

1) Cookify the IDs

When a customer clicks on a link in one of the SilverPop emails, Silverpop has cleverly added tracking IDs that identify the mailing-job and the individual customer. When we send tracking information back to SilverPop, we need to send these IDs back so the click-tracking is stored against the correct campaign and individual.
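For example, a link from a mailing might land the customer on something like the following URL (the IDs here are made up, and the exact capitalisation can vary - which is why the filter below downcases everything):

  http://www.example.com/landing-page?spMailingID=1234567&spUserID=987654&spJobID=2345678&spReportId=3456789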

Of course, the parameters will disappear after the first time the user clicks another link, and we need to persist the data - to keep tracking all the pages they click on until they eventually convert!

The easiest way to keep track of this is to stick it all into the session - then we can just check whether it's there and send it each time we want to ping SilverPop... and of course the best place to do session-wrangling is in a before_filter. So stick the following into something like application.rb:

  before_filter :save_silverpop_data_in_session

  # This method does the storing of the ids from the URL into a session
  # cookie for later sending.
  # It will replace any existing values in the cookie - which represents the
  # user having returned to the site after viewing (and clicking on) another
  # link from a different email campaign.
  def save_silverpop_data_in_session
    silverpop_params = %w{spmailingid spuserid spjobid spreportid}
    if silverpop_params.all? {|f| params.keys.map{|k|k.to_s.downcase}.include?(f) }
      params_down = {} # ugly but necessary to get past browser case-insensitivity issues
      params.each_pair {|k,v| params_down[k.to_s.downcase.to_sym] = v if silverpop_params.include?(k.to_s.downcase) }
      session[:silverpop] = {:m => params_down[:spmailingid], :r => params_down[:spuserid],
                             :j => params_down[:spjobid], :rj => params_down[:spreportid]}
    end
  end

2) Configure your pod

SilverPop calls its servers "pods" - and you could be using any one of them. This works better as a configuration option than hard-coded values - and it lets you point your dev/test environments at the test server. In environment.rb you'll have something like this:

  # SilverPop URL = mailing list manager/ClickStream Analysis
  SILVERPOP_SITE_URL =  "recp.mkt41.net"
  SILVERPOP_SITE_HTTPS_URL =  "marketer4.silverpop.com"

3) Helping hands

Next up is figuring out how to ping SilverPop... they helpfully provide an img-tag example showing how to build up the correct URL with all the requisite values.
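In other words, the ping is just a 1x1 image whose src points at the relevant servlet, with the tracking values in the query string - roughly like this (the IDs are made up; the parameter names match the helpers below):

  <img src="http://recp.mkt41.net/cst?name=Home+Page&s=http%3A%2F%2Fwww.example.com%2F&m=1234567&r=987654&j=2345678&rj=3456789"
       height="1" width="1" alt="" />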

SilverPop has two servlets - one accepting pings for click-stream tracking and one for conversions. They're almost identical, just taking different parameters... and I'm lazy and don't want to have to remember the common details everywhere we use them in the site. Thus: helpers to the rescue!

  # This method generates the code that calls the ClickStream tracking
  # servlet - we pass in the page name and page URL, along with the values
  # passed through from the last silverpop email - saved in the session
  def silverpop_click_stream_ping(page_name, page_url)
    silverpop_link('cst', :name => page_name, :s => page_url)
  end


  # This method generates the code that calls the Conversion Tracking
  # servlet - we pass in the "action", "detail" and "value" flags -
  # eg "silverpop_conversion_ping 'CompletedOrder', order.id, order.grand_total"
  # along with the values passed through from the last silverpop email -
  # saved in the session
  def silverpop_conversion_ping(action,detail,value = nil)
    silverpop_link('cot', :a => action, :d => detail, :amt => value)
  end


  # generate the SilverPop ping-image based on the required servlet and parameters
  def silverpop_link(servlet, options)
    # skip out early if this user hasn't come through a silverpop email
    return nil unless session[:silverpop].present?

    base_url = (/https/ =~ request.protocol) ? "https://#{SILVERPOP_SITE_HTTPS_URL}" : "http://#{SILVERPOP_SITE_URL}"

    image_tag "#{base_url}/#{servlet}?#{session[:silverpop].merge(options).to_query}",
      :height => 1, :width => 1, :alt => ""
    # we'd like an alt-tag like "Silverpop #{servlet.upcase} Servlet Ping" for
    # accessibility... but the URL above doesn't *really* return an image, so at
    # present the alt-text would be displayed in the html. This sux, but I have
    # yet to talk to silverpop about a way around this.
  end

4) Click-stream - analyse!

So, now it's time to get down and dirty with the click-stream analysis. This couldn't be simpler: just use the helper to pop a link into your main layout, eg:

  <!-- silverpop cst ping -->
  <%= silverpop_click_stream_ping(@page_title, url_for(:only_path => false)) -%>

Now, as you can see, this mainly works by using our dynamic page-title and a link to the current page (using the empty url_for trick). Note that if you leave off the :only_path => false option, the URL won't include the hostname - which may be exactly what you want if you're rolling up multiple mirrored domains. You'll also have to adjust the page-name parameter to suit however you generate your page-title... but otherwise you're good to go, and every page-click gets tracked back to SilverPop from now on.
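For clarity, here's roughly what the two url_for forms give you (the host and path are hypothetical):

  url_for(:only_path => false)  # => "http://www.example.com/products/42"  (full URL, including hostname)
  url_for(:only_path => true)   # => "/products/42"  (path only - rolls up mirrored domains)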

With one caveat... if you have AJAX-updating, you may need to figure out a neato trick of putting the ping-link into the newly-generated page-pieces or these "clicks" won't get tracked as the layout won't see them as new pages. I'll leave that as an exercise for the reader as it's very site-specific.

5) Mine your conversion gold

Now only the final, and most important, part is left - tracking actual conversions.

A lot has already been written about what constitutes an important conversion for your site. I won't repeat it all here - you can track heaps of things, and it really comes down to what's important for your business. So I'll pretend we only care about when a customer completes an order - which we know because they land on the "thank you" page.

Which means we need to pop a link to the COT servlet there and pass in the important details... nothing easier:

  <!-- silverpop cot ping -->
  <%= silverpop_conversion_ping("Order Complete", @order.id, @order.grand_total) -%>

Now SilverPop will monitor our marketing mails from go to whoa - and even know which order the customer completed and, most importantly, how much money they ended up giving us... Gold!

Update: Sorry for the repost (anyone that noticed). I fixed a few bugs after I finally got a chance to fully test this out.

Anybody implementing a Silverpop solution should also be aware that Silverpop's servers (at this moment) do not correctly redirect links from mailings sent from their test server. Instead you get redirected to your website, but without any values in the required query parameters (eg spMailingID etc are empty).

To do an end-to-end test of SilverPop's link redirection, you must create a real newsletter mailing on the live server, and just restrict the recipients (eg send it to yourself and any other devs on your team).

Monday, 27 April 2009

Testing ActiveResource

I've been banging my head against the wall that is ActiveResource for a while now. One big problem is actually getting the testing correct.

Take 1: HttpMock

The supposedly sanctioned approach is to test against the provided ActiveResource::HttpMock... unfortunately, while this is great for testing that ActiveResource makes the correct remote calls (eg that when you update a user, ARes PUTs to /users/123.xml) and that it reacts to a (pre-stubbed) 404 by raising a ResourceNotFound, it doesn't allow you to actually test your model - eg to assert that when you update your user's login field and reload the model, the user can now log in with the new details...
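For illustration, a test in that style looks roughly like this (the model, paths and fields are my own made-up example, not anything from a real project):

  require 'test_helper'

  class UserRemoteCallsTest < ActiveSupport::TestCase
    def setup
      @user_xml = { :id => 123, :login => "taryn" }.to_xml(:root => "user")
      ActiveResource::HttpMock.respond_to do |mock|
        mock.get "/users/123.xml", {}, @user_xml   # canned "found" response
        mock.put "/users/123.xml", {}, nil, 204    # pretend the update succeeded
        mock.get "/users/999.xml", {}, nil, 404    # pretend this one doesn't exist
      end
    end

    def test_update_puts_to_the_expected_url
      user = User.find(123)
      user.login = "new_login"
      user.save
      # all we can assert is that the PUT was made - not that the login really changed
      assert ActiveResource::HttpMock.requests.any? { |r| r.method == :put && r.path == "/users/123.xml" }
    end

    def test_missing_user_raises_resource_not_found
      assert_raises(ActiveResource::ResourceNotFound) { User.find(999) }
    end
  end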

AFAICS you can't test anything that is actually meaningful or useful to your business logic - which kinda defeats the point IMO.

Even with a lot of hacking, I couldn't get the kind of dynamic mock backend that I needed for any meaningful test-coverage.

Take 2: Mocking with Stump

Next up was to try to mock/stub out the backend functions appropriately. I'd heard some good stuff about the stump plugin and dutifully installed it to have a go.

Stump let me do much more dynamic testing. I could stub out the "create" and "find" methods and make them respond with a mock remote object, on which I could then further stub out the "save" and "update" methods... and after a while it seemed like I was stubbing out methods to return stubbed-out stubs, and it all got a little bit circular... and in many cases, really complex.

I also found that once I'd got the code working in the browser, I'd often still have to spend ages getting the tests to work - finding the right set of mocks and stubs to patch all the ways that Rails could leak out and try to call the "real" API. :P

I also noticed a few times where I'd somehow accidentally plastered over a bug with a stub by assuming the return-value. I only noticed this because the tests would run fine - yet trying the same operation in the browser revealed the bug.

Now how can I test if I can't rely on the tests to actually test anything? My safety net suddenly has big gaping holes in it that I'll only find if I fall through them while testing by hand! That's such a step backwards it's horrifying!

Then I hit a wall.

We have a user, and during creation (and update) we need to check that the login is unique - which requires a find on the remote API. So in the unit tests we stub out the User object's "find" method to return an empty set (ie no users were found matching this login). But then, immediately after creation, we need to be able to find the user - so we re-stub User.find to return the given user - otherwise the test fails.
... but when you're doing a functional test, all of the above happens inside the single, atomic post :create call. You can't stub out one half of the operation and then the other half partway through, because you can't get at the user object partway through the process... so creates were suddenly failing badly, with seemingly no solution.
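For context, the uniqueness check is roughly this shape (the method name and query params here are my own sketch, not the actual project code) - and it's the remote find inside it that needs different stubbing before and after the create:

  class User < ActiveResource::Base
    # is there already a different user on the remote API with this login?
    def login_taken?
      matches = self.class.find(:all, :params => { :login => login })
      matches.any? {|u| u.id != id }
    end
  end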

The test code didn't do what I needed, and on top of that it was tangled, messy and really ugly. When it comes to Rails, ugly code screams out for another solution...

Take 3: A real API

So, I turned to (what I see as) the lesser of two evils. I have created a test-API project that simulates the real remote API, using the same models that the (local) project expects. The test environment calls this API instead of the "real" one by using a defined constant for the site.
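The wiring for that is minimal - something like this (the constant name and hosts are placeholders, not the real project's values):

  # config/environments/test.rb
  REMOTE_API_SITE = "http://localhost:3001"    # the fake API app running locally

  # config/environments/production.rb
  REMOTE_API_SITE = "https://api.example.com"  # the real remote API

  # app/models/user.rb
  class User < ActiveResource::Base
    self.site = REMOTE_API_SITE
  end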

In setup I send the ActiveResource model a delete_all which will clear the remote db[1]. This simulates what Rails testing does anyway (ie clearing out the db before each test).
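As far as I know, ActiveResource doesn't ship with a delete_all, so this assumes the fake API exposes a bulk-delete action purely for test cleanup, wrapped in a class method on the resource - something like:

  # app/models/user.rb (continuing the model above)
  class User < ActiveResource::Base
    def self.delete_all
      connection.delete("/users/delete_all.xml")   # hypothetical route on the fake API
    end
  end

  # test/test_helper.rb - wipe the fake API before each test, much like Rails
  # clears out the test database
  class ActiveSupport::TestCase
    setup { User.delete_all }
  end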

So now the project tests against a real, live-but-fake API that I can run in another window on my local machine. The tests run - they look pretty much the same as my ActiveRecord tests - which means I can check everything I need to check. It also means I can be assured that the code hasn't plastered over a bug with mocks and stubs.

Conclusion:

Pros:

  • Real end-end tests
  • Dynamic remote object instead of static mocks (means you can test that values are changed and that reloading returns what you expect)
  • Fewer assumptions means the test-code is less likely to be buggy - ie you're more likely to be actually testing what you think you're testing.
  • Easier to test as you don't have to mock out things that will return things that you've mocked...
  • Above leads to a more natural Railsy test syntax.

Cons:

  • Keeping two projects in synch
  • Extra development time for the fake API
  • Will need to make sure the API a) exists and b) is running before you can run tests
  • Have to verify that the mock API does what the real API does.
  • Tests run slower as there's the (local) network turnaround time to consider.

In my mind, the pros outweigh the cons. YMMV

Notes:
[1] We're currently not using fixtures anyway, but if we were - I would then send the remote API a "please load up your fixtures" command.