Integrating Retailer Shopping Carts with Sparkle by Group Nine Media

As the ecommerce landscape evolves, shoppers have raised their expectations of online shopping experiences, and are easily discouraged by burdensome process flows that force multiple clicks, page loads and forms. This is especially true as more ecommerce moves to mobile, where conversion is even more dependent upon the ease of use of the interface.

One way leading online retailers (Amazon, Walmart, BestBuy) have addressed this issue is by providing 3rd Party Cart APIs for their affiliates and advertisers. This allows the customer to make a buying decision on a partner’s website, and then directly add that product (or multiple products) to their shopping cart. If the customer happens to already be authenticated with the destination site, they can be taken directly to the checkout page where they can review the contents of their cart. This works equally well for anonymous carts, since each retailer has already optimized the login or signup process for that case.

Sparkle, from Group Nine Media, has integrated with several different retailers’ checkout and cart APIs to create a mobile-optimized shopping experience easily accessible from the web pages and social feeds of POPSUGAR, Thrillist and The Dodo. This document describes a number of different approaches to providing a shared checkout API and discusses the drawbacks and advantages of each one.

GET Cart with SKUs

This is by far the simplest implementation for a 3rd party application or website to use, but may present special challenges to the retailer. Amazon, Walmart, and BestBuy all support this type of request through their affiliate accounts. It allows any site to create a link on a web page which includes the product SKU(s). For Amazon and Walmart, clicking on that link takes the user to a landing page on the retailer’s site which lists the products, and confirms that the user wants to add them to the cart. For Best Buy, you are taken directly to the cart, but they currently only allow 1 product to be added in a single request.

There are a couple of considerations for the retailer who wishes to implement this mechanism. Without the confirmation page, it’s possible for malevolent bots or other ill-willed systems to flood your site with anonymous cart creations. Assuming anonymous carts are a free resource and/or are garbage collected on a regular basis, this may not be a concern. The retailer does have to decide how to deal with inventory counts, and whether an item in an anonymous cart should count against available inventory. Analyzing cart abandonment rates may justify waiting for customer authentication or account creation before decreasing available inventory.

GET/POST Add-to-Cart

Another option we’ve seen implemented adds items to a cart via a GET or POST request, usually specifying SKUs and quantities. GET requests are convenient since they don’t require explicit CORS permissions, though it’s possible for the server implementation to allow any domain for POST requests. Once again, the implementation needs to be able to scale anonymous cart creation.

Even with strong CORS controls, retailers have the option of adding the partner’s origin to their “allowed origin” list and responding to the browser to allow the POST operation.

Typically, the first request’s response will return a cart cookie, which is then presented on all subsequent requests.
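Extracting that cart identifier from the response is straightforward. A minimal sketch (the cookie name “cart_id” is hypothetical; every retailer names theirs differently):

```ruby
# Pull a cart-identifier cookie value out of a Set-Cookie header so it
# can be echoed back on subsequent requests. Handles the folded form
# where multiple cookies arrive comma-separated in one header.
def extract_cart_cookie(set_cookie_header, name = "cart_id")
  set_cookie_header.split(/,\s*(?=\w+=)/).each do |cookie|
    pair = cookie.split(";").first
    key, value = pair.split("=", 2)
    return value if key&.strip == name
  end
  nil
end

extract_cart_cookie("cart_id=abc123; Path=/; HttpOnly")
```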

The War on 3rd Party Cookies

Unfortunately, in order to address privacy concerns around 3rd party tracking, many browsers have built restrictions around cookie management which can undermine our 3rd party cart implementation. Safari will not allow any 3rd party cookies to be created until the user has made a “first class” visit to the website. Chrome currently allows the cookies to be created, but does not give the current site any visibility into those cookies or the return values of the request. Google has announced they plan to further restrict this behavior in the future.

There still exists one simple work-around to this problem, which we will call the “Cookied Redirect”. In short, since Safari requires the user to visit the site before cookies can be created, the retailer can set up a redirect endpoint which adds a cookie to a 302 response that sends the browser back to the original page. For example:

Safari will consider this a true visit to the web site, and all future GET and POST requests will now be able to successfully create cookies, thus solving our cart identification problem.

To avoid an Open Redirect Vulnerability, retailers will want to limit the redirect to partner domains which are integrating their cart APIs.

Imagine There’s no 3rd Party

So there is one other potential trick to avoid the CORS and 3rd party cookie issues, but it works if (and only if) the retailer’s cookies are scoped to the registrable domain and not a particular host. In that case, the browser will allow them when Sparkle uses its domain white-labeling feature, which means the cart cookies can be easily shared between the two sites without requiring any redirects. This also prevents CORS cross-domain issues, since the sites are on the same domain.
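Concretely, domain scoping comes down to the Domain attribute on the cookie. A sketch (both hostnames below are hypothetical examples of a retailer domain and a white-labeled Sparkle host under it):

```ruby
# A cookie carrying a Domain attribute for the registrable domain is
# sent by the browser to every subdomain, so ""
# and a white-labeled "" would share it.
def cart_cookie(cart_id, domain)
  "cart_id=#{cart_id}; Domain=#{domain}; Path=/; Secure"
end

cart_cookie("abc123", "")
```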

The downside here is the need to manage SSL certificates for the server. Either the retailer will need to issue a certificate for the Sparkle web server, or, if the retailer uses a load balancer to terminate the SSL connection, the requests could be proxied back to Sparkle. In either case, this will still require some configuration by the retailer’s IT group.

Backend Authenticated Shopping API

If none of the above solutions are acceptable, the last option is to create an authenticated Shopping API, if one doesn’t already exist. This addresses most of the security concerns mentioned above, but may be a significantly larger lift for the retailer’s ecommerce engineering team if an existing API isn’t already in place.

In this model, the Sparkle server makes all the cart requests on behalf of the user. This prevents the authentication key from being exposed in Javascript on the browser.

The API requirements here are:

  1. Get product information from a retailer product page URL
  2. Create a Shopping Cart and return a unique id for that cart
  3. Add a product to the above cart
  4. Provide an endpoint to initiate the retailer checkout experience given a unique cart id

It may be possible to combine 2 & 3 into a single request that creates a new cart if a cart id isn’t specified. Another option is to combine 2, 3 & 4 by passing all the SKUs and quantities in a single request and having the response be a redirect to the retailer checkout page. Note, we’ve now come full circle back to the public GET implementation we started with, though by making this an authenticated API call, the retailer is better protected from malevolent anonymous cart creation.

Product Information API

The last component needed for optimal Sparkle integration is a good product query API. Most retailers already have this in place since it’s usually a necessary component for building a shoppable retailer website, but the API isn’t necessarily public or easily accessed by 3rd parties.

For completeness, the key product data requirements for Sparkle are:

  1. Product and Variant SKUs
  2. Product and Variant Images
  3. Variant Types (size, color, etc..)
  4. Swatches for color/pattern Variants
  5. Variant Display Descriptions (“Sky Blue” vs “skyblue”)
  6. Inventory and/or availability

Ideally, the retailer product page URL (or some portion of the URL) can be used to query the API. Depending upon the number of variants of a particular product, multiple requests may be needed to retrieve the data. Often, all the information needed is already available in a JSON object on the product page and the simplest solution may be for Sparkle to simply extract that.
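Extracting that embedded JSON can be sketched in a few lines. This is a simplification which assumes a single well-formed JSON-LD script block; production code would need a real HTML parser and schema handling.

```ruby
require "json"

# Pull embedded product data (JSON-LD style) out of a product page's
# HTML, avoiding a separate API round trip.
def extract_product_json(html)
  m = html.match(%r{<script type="application/ld\+json">(.*?)</script>}m)
  m && JSON.parse(m[1])
end

html = <<~HTML
  <html><head>
  <script type="application/ld+json">{"sku":"ABC123","name":"Blue Shirt"}</script>
  </head></html>
HTML
extract_product_json(html)
```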


While allowing third party cart API integration may not be a trivial endeavor for an online retailer, it’s obvious the market leaders and early adopters have seen the value in doing so. This is becoming even more important as younger consumers are learning to expect shopping seamlessly integrated into their media consumption and social experiences. The audience drawn to the Group Nine Media properties is very mobile savvy and depends upon our Brands to lead them to equally savvy retailers.

This article originally appeared at on Jan 29th, 2020.

Fun with Redis and Translation Tables

One of the first projects I took on at POPSUGAR was adding Internationalization and Localization capabilities to the site. While the general approach and APIs for I18N/L10N are well known and widely available, there were still a couple of interesting challenges.

Since we forked the POPSUGAR CMS from an early version of Drupal, it was fairly trivial to grab the locale module from the latest version and backport it to the existing system. This added support for the GNU gettext import/export format along with simple translation calls for text, plurals, counts, etc. The Drupal locale “source” schema used a simple storage model where the unique key was the original English text combined with an optional context string. The simplicity of that model means developers do not have to create or track ids for every unique text string in the code. The downside is that rows in the locale source table are easily orphaned whenever a text string is changed, with no automated mechanism to later discover them. While the wasted data storage and table growth rate are minuscule when compared to the rest of the system, there is still a need to export the complete source table for translation to a new language. Since human translators are used, it would be preferable to not have them waste time on unused strings.

The first obvious solution is to include a “last accessed” column in the locale source table and update it whenever a text string is used. Unfortunately, that scales badly when a web server is generating millions of pages per day/hour/minute, all using the same strings. Avoiding SQL database access (not to mention writes vs. reads) is paramount to page generation performance, and the entire translation table is kept in memcache to avoid having to do any queries. Furthermore, some strings like “Next Slide” are used far more frequently than others, potentially requiring hundreds of updates to the same row when servicing a single web request.

My next thought was to store the last access time in a noSQL database and process the results in batch on a regular basis. We already had a generic Redis instance available for miscellaneous tasks, and this turned out to be quite easy to implement. But once again, even sending messages to a Redis server takes time, and I did not want to take that hit during page generation, nor send multiple messages for the same string from a single page. It occurred to me that a simple solution to this second problem was PHP’s register_shutdown_function(), which allows you to specify a function to be called once the current PHP request has completed, but before the process “exits”, similar to a C++ object destructor. During page generation, a simple key/value array is populated with the source table index pointing to the calling function and line number. This naturally becomes a unique list of table row pointers, which is then sent to the Redis server via the Predis hMSet() method, which updates or inserts each row in the array as a Redis key/value, thus de-duplicating values from other page requests. This also allowed us to leverage Redis key expiration, such that deprecated source strings simply fall out of the dataset.

The next step was to build a batch script that collected all the unique keys from Redis every 2 weeks and then filtered the SQL table by items which weren’t found in Redis. These strings can be safely removed from the source locale table, since they aren’t actually being used by the web server. With roughly 60MM unique visitors per month, we were fairly confident all the locale sources were being used, and the cost of a false negative is extremely small, since the Drupal module will just recreate that row in the table. Furthermore, by including the calling function and line number of each translation, we are able to verify that a string has changed, as well as find simple programming errors where someone attempted to translate editorial content rather than user interface strings.
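The batch job itself reduces to a set difference, sketched here with plain Ruby sets: source rows whose ids never appeared in Redis during the sampling window are candidates for removal.

```ruby
require "set"

# ids present in the SQL locale source table but never observed in
# Redis during the sampling window are orphans.
def orphaned_source_ids(sql_ids, redis_ids)
  sql_ids.to_set - redis_ids.to_set
end

orphaned_source_ids([1, 2, 3, 4], [2, 4])
```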

There are certainly more elegant solutions that could have been developed, perhaps moving the entire localization table into a noSQL database and leveraging key expiration. That of course would require re-writing all the existing Drupal admin and API interfaces to work with that data storage mechanism. Like many other media companies, we have limited resources, and when applying them to a problem, we need to ensure the ROI is greater than the effort applied. A complete rewrite of the Drupal localization module wouldn’t be justified for our use case.

This post originally appeared at

Image SEO with AWS Lambda@Edge

Like many publishers on the web, here at POPSUGAR we pay close attention to SEO traffic and performance. One of many tricks we learned long ago was including a name or description of an image in the image URL itself. For example, the following image is part of a Leonardo DiCaprio slideshow:

×1024/filters:format_auto-!!-:strip_icc-!!-/2016/03/21/629/n/1922398/aee62ec258f74088_GettyImages-516700486/i/Funny-Leonardo-DiCaprio-Pictures.jpg

Including a small description in the URL can give the search engine a little more information about the photo.

Unfortunately, this can become problematic for a couple of reasons. First of all, you would need to know the information about the image before creating the file on S3. Also, if the editor later decided to change the description, we would have to find all the places that image is referenced and change them to point to the renamed file. If you’ve cached multiple cuts of that image for different devices, your headache has now increased further.

So one obvious solution is to have your webserver rewrite the file name, but that undermines image caching at your CDN edge point, unless of course you could somehow add that logic at the edge. Enter AWS Lambda@Edge: with a very small chunk of JavaScript code, we rewrite every image URL to match the underlying filename in S3, preserving our edge cache and allowing our CMS to generate whatever filename is appropriate for the image as decided by the editor. This allows us to keep and change our nice SEO image names without having to sacrifice edge cache performance.
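The Lambda@Edge function itself runs as Node.js; the rewrite it performs can be sketched as a pure function. We assume, based on the example URL above, that the trailing “/i/&lt;seo-name&gt;.&lt;ext&gt;” segment is the editor-chosen slug and the preceding path identifies the real S3 object; the actual path convention is internal to POPSUGAR.

```ruby
# Strip the SEO slug segment from a request path to recover the
# canonical S3 object key, keeping the file extension.
def origin_key_for(request_path)
  request_path.sub(%r{/i/[^/]+(\.\w+)\z}, '\1')
end

origin_key_for("/2016/03/21/629/abc123_GettyImages-516700486/i/Funny-Leonardo-DiCaprio-Pictures.jpg")
# → "/2016/03/21/629/abc123_GettyImages-516700486.jpg"
```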

In the future, as Lambda@Edge capabilities mature, we plan to move more image functionality to the edge, allowing us to serve more device-optimized images. We also intend to use it to make intelligent caching decisions about HTML and other content types.

This post originally appeared at

RubyInline Tricks and Tips

Permutative Schedule Calculation

One of the most compute-intensive tasks in the RSportz system is the scheduling code. It turns out that scheduling pool and cross-pool games between a number of teams is actually a rather challenging computer science problem. Add in seeding, home/away venue requirements, and calendar limitations, and you have yourself a pretty hairy set of variables to deal with. You end up creating a fairly classic permutation problem. So much so that the algorithm RSportz uses was derived from this paper: U. Schöning, “A probabilistic algorithm for k-SAT and constraint satisfaction problems”, Proc. 40th FOCS, 1999. (Still looking for a copy of this BTW.)

The code was originally written in Java and then directly translated to Ruby by someone who didn’t really know either language that well. The first red flag I found was the hand-coded bubble sorts, which were copied directly from the Java version. 🙁

But the core algorithm was pretty straightforward. Create an NxN matrix where N represents the number of teams. Rows are (arbitrarily) home games and columns are away games. Walk through the matrix and block out all the games that can’t be played due to seeding or other restrictions, then start filling the matrix with games, ensuring that the number of home and away games for each team is balanced. Once you have a complete matrix for the number of games each team should play, score that matrix according to the mix of cross-group play, to ensure as much randomization as possible while still avoiding top-seed match-ups when possible.

The real-world comparison here is the NFL, where each team must play every team in its division twice, and then some mix of cross-division and cross-conference play. While I suspect the NFL primarily uses TV market size data to drive their out-of-division play, our algorithm wants to make sure every team plays as many different divisions as possible in its cross-group play.

So we score each complete matrix, and then move on to another combination, until we either reach the maximum possible score, exhaust all combinations of the matrix, or hit a time limit. Hopefully we have at least one matrix before hitting the time limit. Now, those of you familiar with combinatorial math know these numbers get very large very quickly. For instance, for 20 teams playing 5 games each, the number of combinations is “400 choose 50”, or 1.70359003 × 10^64 (a really big number). Now, we do eliminate some games since we know each team can’t play itself, and there may be seed, home and away game restrictions, but as you can see, it can be a time-consuming problem to solve.
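As a sanity check, the count is easy to compute exactly in Ruby, since bignums make this trivial:

```ruby
# Binomial coefficient via the incremental product formula; the
# division is exact at every step, so integer math suffices.
def choose(n, k)
  (1..k).reduce(1) { |acc, i| acc * (n - k + i) / i }
end

choose(400, 50).to_s.length  # a 65-digit number, ~1.7 × 10^64
```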

So when I first started playing with the system, I was a little shocked to find that scheduling 5 cross-group games between 4 groups of 10 teams took 3 minutes (basically, it was hitting the built-in timeout). I first walked through the code replacing all the bubble sorts with native Ruby array sorting calls, and attempting to reduce unnecessary object copying where possible. Finally, I realized that the system was finding the best possible score for this scenario in 30 seconds, but was continuing to process until the time limit was exhausted. So the first thing I did was calculate the highest possible score for the matrix, and then made the algorithm stop when it was hit. (We still need to go back and consider the speed of a perfect score vs. a 60-80% one.) That got me down to 30 seconds for the 40-team scenario, but it still seemed way too slow to me, at which point I started investigating how to call out to C code from Ruby.

As an extra motivator, when this problem was described to a friend of mine, he felt compelled to tweet “Re-implementing your O(n!) algorithm in C (from Ruby) probably isn’t going to help”.   I have to thank him for the inspiration.


I cannot say enough good things about RubyInline. It makes dropping into C code simply trivial, since you just place your C code in the middle of your Ruby code. No Makefiles or compiler options to worry about. You simply include the gem

require "inline"

And then add your code as a lambda section within an inline do block:

  inline do |builder|
    builder.c '
      static VALUE assign_c_wrapper(VALUE total_teams, VALUE matrix, VALUE team_games, ....) {
        ...
        return ret ? Qtrue : Qfalse;
      }
    '
  end

The argument to builder.c needs to be a string, so you do need to be careful with your single and double quotes and make sure they are escaped properly if needed in the code. Obviously, anything that returns a string would work here, so the C code could live in its own file at this point and be returned by an IO method, but that undermines the whole point. A search for RubyInline will show you all sorts of examples.

Although RubyInline will do automatic argument conversion for you with simple data-types, I was dealing with 2D arrays so I chose not to take that route and converted them myself, hence the reason all my arguments are VALUEs, which are essentially just pointers to Ruby objects.

Now the folks at ZenSpider have taken this one step further and have another gem called RubyToC, which simply translates your Ruby code directly to C and then compiles it via RubyInline. Quite clever, but once again, only the simplest data types are supported.

A couple of bookmarks are handy to have while you’re writing the code. One is the Extending Ruby chapter from the Programming Ruby guide, which has a nice table laying out the type conversions. The other is Yukihiro’s original README.ext, which is in the Ruby source distribution, but is also kept online and nicely formatted in HTML by

So the code to convert a 2 dimensional array from Ruby to C ends up looking like this:

      // matrix is the Ruby NxN integer array.  Created with the code
      // {, 0) }

      char c_matrix[c_total_teams][c_total_teams];
      VALUE *matrix_row = RARRAY_PTR(matrix);
      for (i = 0; i < c_total_teams; i++) {
        VALUE *matrix_col = RARRAY_PTR(matrix_row[i]);
        for (j = 0; j < c_total_teams; j++) {
          c_matrix[i][j] = FIX2INT(matrix_col[j]);
        }
      }
The inline builder block also takes some useful arguments: compiler flags when needed, as well as a “prefix” section which basically gets treated as a C include file for the block of code. Being an old C pre-processor hack, I found this very handy. More on that later.

inline(:C) do |builder|
       builder.add_compile_flags '-ggdb'
       builder.prefix '
// We need to play games in order to pass around 2D arrays of variable size
// Luckily, we know the size of the second dimension
#define MATRIX(_i, _j) matrix[((_i) * total_teams) + (_j)]

#define TPG_FACTOR 1.2
#define GPG_FACTOR 1.1
#define GPT_FACTOR 1

The one gotcha was that the line numbering in gdb was off. RubyInline adds #line directives to the C code it generates in your home directory (~/.ruby_inline), but the numbers are wrong. I tried to adjust them in the prefix section without much luck, but I was able to use gdb to step through my code after adjusting the line numbers manually.

16-156x Performance Improvement

So most users of RubyInline report a 10x performance improvement, but even given the factorial nature of my algorithm, I strongly suspected we could do even better. As I mentioned before, the code was originally written in Java, and I actually don’t expect you would see a significant performance increase between Java and C, but the algorithm was a worst case scenario for a duck typing language with no native datatypes like Ruby. For instance, take the following line directly from the existing Ruby code: {, 0) }

Seems simple enough: all I want is an NxN array of integers, with each value initialized to 0. But in Ruby, just because they are integers at this moment in time doesn’t mean someone can’t insert an ActiveRecord object into the middle of this matrix sometime in the future. So in this case we end up asking the memory management system for N arrays and N*N integer objects, and initializing each one to 0. Now for the same code in C:

    char matrix[total_teams][total_teams];
    bzero(matrix, sizeof(matrix));

Optimized by the compiler, this is literally three instructions: multiply the dimensions, bump the stack pointer, and zero the memory range. And my Intel ASM knowledge is 15 years old; I wouldn’t be surprised to find there’s now one instruction to do all of the above.

Of course, we don’t allocate the matrix O(n!) times, but we do execute the following harmless looking operation:

matrix[row][col] += 1

Once again, quite a bit of work in a duck typing language where the interpreter has no idea what type of object is at that position in the matrix, while in C the same statement

matrix[row][col] += 1;

is once again optimized down to 1 instruction.

By the Numbers

So my test case (a spec, of course) was pretty simple. It schedules games for a 4-group round-robin league with 10 teams in each group. Each group played 6 games within its own group and 4 cross-group games. Teams were seeded such that the top 20 seeded teams cannot play each other until the playoffs start. The algorithm is recursive by nature, so I counted the number of recursions and the number of times we scored a valid matrix, and sent that to the debug log each time a better-scoring matrix was found. In this case, I let the code run until the timeout was hit, so we could better compare the speed of the code.

       # Games   # Teams    Recursions   Matrices Tested   Time
Ruby      30        10         662,063             2,521    16s
C         30        10         662,934             3,392     1s
Ruby      80        40       5,313,768            22,495   180s
C         80        40     138,884,345         3,508,954   180s

Note that in the 80-game case, we still hit the time limit before we reached the best possible score for the matrix, while in the smaller cases, we found the best matrix prior to hitting the time limit. We do have some knowledge of which teams are going to be the most difficult to schedule, hence we can order the matrix to try those teams first, so the longer the algorithm runs, the less likely it is we’ll find a better scoring matrix. Since I was only measuring to second resolution, the 30-game case clearly showed a 16x performance improvement. But for the 80-game case, the C code was able to test 156 times more matrices than the Ruby code. I have to say I’m pretty happy with the results, though I’ve since done some further optimizations on our scoring algorithm and on deciding when a game matrix is “good enough”.

I suspect my theory-favoring friend will argue that a sample set of 1 doesn’t disprove his assertion, but my recommendation for optimizing code has been repeated by many others: profile, profile, profile.

Misc Tricks

The one thing that irritated me while developing the C code was that I found myself wanting a DEBUG wrapper, which I’ve written many times before but didn’t have on hand. Since I had to recreate the macro from scratch, I figured I’d include the version here for myself and anyone else who needs it. In this case, I wanted to call the debug method on the Ruby logger from within C code, and make it easy to insert debug messages in the C code. I’ve been accused of abusing the C pre-processor in the past, so your mileage may vary:

#define DEBUG_LOG(_format, ...) \
    do { \
      VALUE _logger = rb_iv_get(self, "@logger"); \
      ID _debug = rb_intern("debug"); \
      char _message[2048]; \
      snprintf(_message, sizeof(_message), _format, ##__VA_ARGS__); \
      _message[sizeof(_message) - 1] = 0; \
      rb_funcall(_logger, _debug, 1, rb_str_new2(_message));\
    } while(0)

The “do {} while(0)” is an old trick to avoid having to worry about nested blocks or semi-colons. The use of the code is as follows:

      DEBUG_LOG("SCHED: assign(%ld) evaluate score increased from %f to %f after %ld"
                " attempts! teams: %d, games: %d, groups %d, %ld seconds elapsed",
                GET_MEMBER_LONG("@assign_calls"), best_score,  score, GET_MEMBER_LONG("@evaluation_attempts"),
                total_teams, games_scheduled, ngroups, time(0) - start_time);

Finally, I also found the following macros handy (all this was included in my builder.prefix section):

#define GET_MEMBER_LONG(_name) FIX2LONG(rb_iv_get(self, _name))
#define GET_MEMBER_DBL(_name) NUM2DBL(rb_iv_get(self, _name))
#define INCR_MEMBER(_name)  rb_iv_set(self, _name, LONG2FIX(GET_MEMBER_LONG(_name) + 1))

Finally, if you ever forget the syntax for passing a multidimensional array in C, it looks something like this:

     static int verify(char team_games[][2], int row, int col, int ngames, int max_games) {

You need to declare N-1 of the dimensions, or pass them in as variables and then do the indexing yourself.

RadioParadise HD Plugin for Windows Media Center

So I’ve become a big fan of RadioParadise.    It’s a listener supported, commercial free radio station which plays a great mix of new, old and eclectic rock, with a random mix of everything else when they feel like it.  Actually, it’s best explained if you just go there and listen.  I also have a shortcut on my phone and it’s the only music I listen to in the car now.

So I’m a fan of the music, but the cool thing they’ve added this year is an HTML5 192K HD feed along with a photo slide show called RadioParadise HD. The photos are all high resolution, meant to be seen on the big screen, but the really cool thing is that they are uploaded by the community, so if you have some high quality 16×9 photos, you can upload them and potentially see your own pics there.

So, obviously, a Media Center plugin is needed so you can use your remote to bring up the music and slide show. A couple of things you’ll need:

  1. IE9:   Since the HD player is implemented in HTML 5, you’ll need IE 9, Firefox or Chrome.    I tried all three and (surprisingly) had the best experience with IE 9 as far as running in kiosk mode and resizing correctly.    Chrome has security issues being launched from WMC and FF seems to crash after running the feed for a few hours.
  2. Autohotkey: An extremely cool and easy to use scripting utility.   Used to turn off the screen saver and hide the mouse.
  3. nomousy: A utility from the autohotkey community to hide and restore the mouse upon exit
  4. Media Center Studio: To build the plugin.

First, install Autohotkey and cut & paste the following script into a file called rplaunch.ahk (or you can just download my pre-built binary from here):

; Disable Screen Saver
DllCall("SystemParametersInfo", Int,17, Int,0, UInt,NULL, Int,2)
; Hide the mouse
Run, C:\bin\nomousy.exe /hide
; Run IE in kiosk mode pointing to the rphd stream
RunWait, "C:\Program Files\Internet Explorer\iexplore.exe" -k
; Show the mouse
Run, C:\bin\nomousy.exe
; Enable Screen Saver
DllCall("SystemParametersInfo", Int,17, Int,1, UInt,NULL, Int,2)

Obviously, you should change the file to point to where you installed nomousy, or just install it in C:\bin as I did.   Then just right click on the .ahk file and select “Compile”.    You should now have an rplaunch.exe binary.   Put this in C:\bin as well.

Now run Media Center Studio.    Warning, the UI here is a little obtuse, so just follow these steps:

  1. Once you start the app, click on the “Start Menu” icon on the main toolbar (my version has a blank icon)
  2. Now click on the Entry points expansion button in the lower left hand corner
  3. Now click on the “Start Menu” tab at the top, and you should see something like this:
  4. Now click the “Application” icon, and fill it out as follows. Note I put my rplaunch.exe in a location with no spaces in the directory names. I can’t swear that a location with a space doesn’t work, but it was one of the variables I eliminated during my testing.
  5. To get the Back and MediaStop buttons to exit the app for you, press the green “+” button, and then press the keys on your keyboard/remote:
  6. Hit the disk icon in the upper left (Save), close the tab and you should be returned to the Start Menu.   The new app should show up in the Entry points list.
  7. Drag and Drop your new app from the Entry Points to the location on the Start Menu you desire.   Hint: The TV and Movies row is not editable by default, so put this in the Music row, or go read this thread.
  8. Hit Save again and restart Media Center.

BTW:  Here’s the icon I used as well.

If anyone is willing to package this all up into an installable (or even give me instructions) I’d be happy to provide a download site.


Cracking the Disney Blu-Ray Club for around $10 each

So as a general rule, I don’t like to pay more than $15 for a Blu-Ray Disc. I get most of my BDs off of Amazon using points/coupons from my credit card, so I’ll go ahead and spend a little more in that case for something I really want. Also, I find that box sets can usually get you to that price point as well. A few years ago, Fry’s had the entire Star Trek collection for $59, which worked out to roughly $10 per movie, so I decided it was a good purchase. If I find a BD for less than $10, and it’s anything remotely decent, I buy it. I had quite a party at Best Buy on Black Friday last year where prices were as low as $7.99.

So Disney’s consumer-behavior-big-brother-marketing-machine figured out I buy BDs and have kids, so they sent me an offer to join their movie club. Like most people, I’m a fan of quite a few Disney classics, and they are now just repeating the limited-time “from the vault” game on BD. They did this years ago with VHS and then DVD, where a movie is only available for a limited time. Throw in the ABC TV shows, Pixar, Marvel (future movies only) and Touchstone studios, and there’s quite a bit of high-value content there.

Cracking the Value Code

The deal is very similar to the old BMG Music clubs that would send you 10 CDs for $0.01 as long as you committed to buying X more at regular (full retail + shipping) club prices over the next year. The kicker was that you had to return a postcard if you didn’t want that month’s selection, and inevitably you’d forget and get a CD you didn’t want at an obnoxious price.

Fast-forward 10 years, and the basic contract hasn’t changed that much, but the difference now is you have email and the Internet to make the monthly decision process much simpler to execute.

First off, the basic membership kit that Disney offers on their front-facing Blu-Ray page is NOT the one you want. This one is for folks really bad at math. Here they ask you to buy 3 movies at $1.99 and one at $19.95 (with free shipping), but then you have to buy 5 more at the full price of $29.95 each, plus shipping and tax. Note that shipping is $3.95 for the first title and $1.49 for each additional title. So assuming you finish your commitment in the first month, you end up paying:

(3 * 1.99) + 19.95 + (5 * 29.95) + (9% tax) + 3.95 + (4 * 1.49) = 201.39, so 201.39/8 = 25.17/BD

So it’s not a great deal by any means, since the average price on Amazon for the same movies is around $19.95, and if you buy two of them, you get free shipping. But if you do a simple search for the Disney BD coupon code, you suddenly find a much better deal: 5 movies for $1 with just a 4-movie commitment! Add a 6th for $11.95 and it counts towards your commitment, and you can then add a 7th for $8.95 (shipping is free for the first 7). Now this is looking interesting. Doing the same math as above:

(5 * .20) + 11.95 + 8.95 + (3 * 29.95) + (9% tax) + 3.95 + (2 * 1.49) = 128.74, so 128.74/10 = 12.87/BD

Almost half the cost of the first deal, with a far smaller commitment. But now you have to wonder: is there any way to improve on that?
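The coupon-deal arithmetic is easy to sanity-check. Here’s a quick Python version of the calculation above, using the same prices and the same 9% sales-tax assumption (your actual tax rate will obviously vary):

```python
# Sanity-check the 5-for-$1 coupon deal: 10 BDs total.
TAX = 0.09  # assumed 9% sales tax, as in the example above

merchandise = 5 * 0.20 + 11.95 + 8.95 + 3 * 29.95  # intro 5 + 6th + 7th + 3 at full price
shipping = 3.95 + 2 * 1.49                         # first full-price title + 2 additional
total = merchandise * (1 + TAX) + shipping

print(round(total, 2))       # total outlay
print(round(total / 10, 2))  # average cost per BD
```

This prints 128.74 and 12.87, matching the per-disc figure above.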

As it turns out, there’s always room for negotiation. Back to my multi-movie-set comment at the beginning: Disney will also give you multiple commitment credits for buying multi-packs. You just need to call the 800 number to find out what commitment credit they’ll give you for a particular pack. That number can be a little tricky to find, so I included it here. So I filled my remaining 3-disc commitment by buying Fantasia (a 2-movie pack, which counted as 1) and the Pirates of the Caribbean Trilogy (which counted as 2). That brought the average price down to $12.32. If you’re an ABC TV fan, you might be able to get there with a past season of Lost, or something similar, and do even better. As an added value, I prefer packages that also come with a DVD we can leave at Grandma’s house, so I created a spreadsheet which assigns half a BD of value when a title includes a DVD version (see attached). With that calculation, I’m down to $10 a disc, but even at $13, I’m still below my $15 limit for some pretty decent flicks. If you walk through their catalog and play with the spreadsheet, you might be able to get it lower than $12.
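For anyone who wants to build the spreadsheet themselves, the logic can be sketched in a few lines of Python. The titles, prices, and credit counts below are just illustrative placeholders — the real commitment credits come from that 800 number:

```python
# Sketch of the cost-per-BD spreadsheet: each purchase is
# (title, club price, commitment credits, number of BDs, includes a DVD copy?).
# Prices and credits below are illustrative, not actual club figures.
purchases = [
    ("Fantasia 2-movie pack", 29.95, 1, 2, True),
    ("Pirates of the Caribbean Trilogy", 29.95, 2, 3, False),
]

TAX = 0.09                          # assumed 9% sales tax
FIRST_SHIP, ADDL_SHIP = 3.95, 1.49  # club shipping: first title, each additional

def cost_per_bd(purchases, upfront_cost=1.00, upfront_discs=5):
    """Average cost per BD, counting a bundled DVD as an extra half disc."""
    merch = sum(p[1] for p in purchases)
    shipping = FIRST_SHIP + ADDL_SHIP * (len(purchases) - 1)
    total = upfront_cost + merch * (1 + TAX) + shipping
    discs = upfront_discs + sum(p[3] for p in purchases)
    discs += 0.5 * sum(1 for p in purchases if p[4])  # DVD = half a BD of value
    return total / discs
```

Swap in the real club prices and the credits they quote you, and quit the club when `cost_per_bd` crosses your limit.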

Should I Stay or Should I Go Now?

So once you’ve fulfilled your commitment, should you cancel your membership? Keep in mind that while you’re fulfilling your commitment you have to pay the full retail price, usually $29.99 or more, but once you’re done, you have the option to buy at the discounted member price. If you stay, my advice is to always comparison shop. Sometimes you can find a title cheaper on Amazon, and sometimes you can’t. For instance, the Snow White Diamond Edition is only $15.98 for members while the same BD on Amazon is $26.99. Even with Amazon not charging sales tax (how much longer can they keep that up?) and potentially free shipping, the club price still wins.

If it is your goal to collect the Disney classics, then the club certainly makes it easy to see when each one is coming out on Blu-Ray, and when it is going back to the vault. Snow White, Fantasia and Pinocchio are being pulled at the end of April. They also occasionally offer a $10 upgrade coupon if you own the VHS (or DVD) version. That’s going to help keep the used market for Disney VHS movies alive.

On the other hand, you do have to be diligent about canceling the monthly title, and you will eventually forget. Furthermore, it gives them the opportunity to up-sell you every month, and you’ll probably end up buying things you normally wouldn’t have considered. My advice is to cancel the membership as soon as possible.


One thing to keep in mind is that you are making a $130 commitment over the next year or two (check the agreement when you sign up). Don’t do the crime if you can’t pay the fine. Also, beware that they play the game of releasing a movie on BD alone and then following it up with a “Diamond” or “Platinum” edition, so make sure you know what you’re buying. If you do stay in the club, I recommend you keep the spreadsheet, track your cost/BD, and quit when it hits your limit.

Ubuntu 10.10 VNC keyboard mapping nightmare.

My Amazon EC2 dev box crashed yesterday and I had to rebuild it from scratch. I installed Ubuntu 10.10, configured VNC, and ran into a keyboard-mapping problem: every time I hit the letter “d”, all my windows would iconify. I went down many false paths, as did this poor soul, applying solutions from old releases. I finally determined the problem was actually the window manager, in this case metacity. I didn’t bother trying to change the window manager in the GNOME settings, because those didn’t appear to be running correctly either. The simple solution was to just specify fvwm in .vnc/xstartup:


#!/bin/sh
xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
#x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
#x-window-manager &
# Fix to make GNOME work (doesn't do shat for me)
gnome-panel &
# Use fvwm instead of metacity to avoid the iconify-on-"d" bug
fvwm &

Hopefully this will save someone else the time I wasted on this.

Replacing the ATI Radeon 5450 with an NVidia GT 430

So I had reached the end of my rope with the MSI ATI Radeon card on a couple of fronts. I had HDMI bitstreaming working with Arcsoft TMT (as long as AnyDVD was running), and I could even put up with the flaky Catalyst UI and drivers. But I was stymied trying to get the refresh rate set to 23.976Hz, from which the Kuro PRO-141FD will execute a 3:3 pulldown for picture-perfect Blu-Ray playback at 71.928Hz. Given the cost of the graphics card vs. the 60″ plasma, it was time to make a change.

So with $75 of Amazon coupons burning a hole in my pocket, I decided to give the Zotac NVidia GT430 a shot. Since it goes in the HTPC next to the TV, silent cooling was a must.   Gaming performance is a non-issue for me, and since the Kuro isn’t getting replaced anytime soon, 3D video support wasn’t important either, though the Arcsoft BD & 3D assistant gave the card a thumbs up on all accounts.

Upon opening the package, the Zotac NVidia card looks much bigger than the MSI ATI with its giant heat sink, but both cards take up two slots in the machine. The Zotac actually has two brackets, which make for a nice, secure installation. Also, the Zotac has DVI, HDMI and DisplayPort connections, while the MSI had VGA, DVI and HDMI. What a difference 9 months makes.

MSI ATI Radeon HD5450

Zotac ZONE GeForce GT430

Installation was painless, and then it was time to download the NVidia drivers. A couple of things impressed me right off the bat:

  • Clean install option – When this box is checked, the NVidia installer blows away all previous NVidia registry settings and configuration. Very nice when you’ve mucked around with one too many registry hacks.
  • No “crap-ware” in the install. Thank you, I don’t need a 2-week trial of LOTR Online…
  • Windows performance index: ATI 5450: 4.9, NVidia 430: 6.7! Very impressive for a fanless card still under $100.

So the next thing to try was getting to 1920x1080p@23.976. First of all, the NVidia Control Panel is much easier to navigate than even the ATI Catalyst beta (the old ATI UI was horrid; the latest is bearable). From there, getting to 23Hz couldn’t have been easier. Although it’s not listed in the defaults, click Custom, and 23p, as well as 59p, are at the top of the list.

No need to dig into the “Create Custom Resolution” dialog (but I wish the ATI UI had that!)

So that was too easy. Hmm… what about my other Blu-Ray playback issues? While I’ve had HD bitstreaming working with the ATI card for a while, I’ve had two other problems with the Arcsoft TMT software. First off, for some reason the TMT player refuses to play ANY BD disc. No explanation given, and all the HDCP tests come out fine. This started happening with their 3.0.1-170 release and has continued in subsequent releases. The only fix I’ve found is to install Slysoft AnyDVD. Unfortunately, the problem wasn’t the ATI card in this case, and I still need AnyDVD to watch BDs. Not the end of the world since I already own it, but a little disappointing since the software has questionable DMCA legal status in the US. (BTW: If you haven’t figured it out by now, do NOT use your HTPC as your sole Blu-Ray player, unless you want to spend twice the money for twice the headaches.)

The second problem I’ve encountered with TMT is that during BD playback (with bitstreaming) the audio will get out of sync if I pause, rewind or fast-forward. Fairly irritating. Luckily, Arcsoft has created a hotfix for ATI cards if you encounter this problem, and that seemed to work, though I needed to re-apply it on the latest version. Now for the NVidia… Change the refresh rate to 23.976, pop in the Inception BD, press play and wait for the DTS-HD MSTR display on the SC-07… DTS!?!?! WTF!!!!! Arrrggghhh!!! After getting this far, I’m no longer bitstreaming the uncompressed HD audio track!

OK.  Off to Arcsoft Forums to see if anyone else is experiencing this.     Found one guy from back in Dec, but it’s not clear he knows what he’s doing….    Post my problem…. Next day check the forum (no email subscription!?).  Hmm… it seems Arcsoft has only certified the 260.99 driver, while I had downloaded 266.58.    Back to the NVidia site, archived drivers, 260.99, download.    Remove 266, install 260 (with the clean install option), reboot, play BD…. WHOOO HOOO!!! DTS-HD MSTR is back!    AND no problem with pause, ff, rew, etc all at true 1080p24!

So one thing I noticed is the 260 “Clean Install” check box didn’t do such a great job. Even though I had removed the old driver and rebooted before installing, I was still prompted by numerous “Newer File Exists” messages during the install. Furthermore, the nice list of resolutions you see above all showed up blank with the older driver, but Windows Monitor properties still said I had 59Hz and 23Hz available. I should probably go back to a restore point prior to installing 266 and then install the 260 version again, but it’s working the way I want, so I’m not sweating it for now. I may just wait for Arcsoft to support the 266 drivers, and then upgrade again.

So while not perfect, the NVidia still wins the day.     Time to not touch it if it ain’t broke.   We’ll see how long that lasts!  🙂

Hulu update and quick Media Center Studio fix

I just received a comment on an old Hulu/AutoHotKey fix I had done some time ago to get a better resolution for Hulu streaming. Seeing this, it occurred to me that Microsoft has since made changes that broke Media Center Studio. Searching the Australian Media Center Community (which has a couple of interesting projects you won’t find on TGB) turns up a number of suggested work-arounds, but I found this one the easiest to implement:

Edit C:\ProgramData\Microsoft\eHome\Packages\MCEClientUX\dSM\StartResources.dll with a binary editor (gvim) and replace these two references to dSM:

xmlns:Movies = “data://dSM!SM.Movies.xml”
xmlns:TV = “data://dSM!SM.TV.xml”

with:

xmlns:Movies = “data://ehres!SM.Movies.xml”
xmlns:TV = “data://ehres!SM.TV.xml”

This should work fine until a Microsoft update replaces the DLL; then you just need to make the change again. If editing a binary file is a little too much for you, you’re welcome to try my modified version, though your mileage may vary. If Windows Update changes the file, shoot me a note and I’ll update the DLL on my site.
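If you want an easy way to tell whether Windows Update has quietly reverted the patch, here’s a small Python check. It only reads the file; the dSM/ehres strings come straight from the fix above, and checking UTF-16 as well as plain bytes is just a guess about how the strings might be stored inside the DLL:

```python
import os

# Report whether StartResources.dll still contains the broken dSM references.
DLL = r"C:\ProgramData\Microsoft\eHome\Packages\MCEClientUX\dSM\StartResources.dll"

def patch_status(data: bytes) -> str:
    """Return 'patched', 'unpatched', or 'unknown' for the DLL contents."""
    # The strings may be stored as plain bytes or UTF-16, so check both encodings.
    for enc in ("ascii", "utf-16-le"):
        if "data://ehres!".encode(enc) in data:
            return "patched"
        if "data://dSM!".encode(enc) in data:
            return "unpatched"
    return "unknown"

if __name__ == "__main__" and os.path.exists(DLL):
    with open(DLL, "rb") as f:
        print(patch_status(f.read()))
```

Run it after each round of Windows updates; if it prints “unpatched”, it’s time to re-apply the edit.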

Finally, the reader was also nice enough to include two Hulu images to use for creating the icons.