

Doing the right thing

August 24th, 2011 by Sean Treadway

The recent outage of SoundCloud was the result of everybody doing the right thing. This jibes with John Allspaw’s message that looking for a root cause will lead you to find smart people simply doing their jobs.

This is what happened.

The technologies at play

The combination of interactions that escalated into the outage should be interesting to other Rails/MySQL shops out there. We’re using many idiomatic Rails patterns that ended up having devastating consequences.

Cache keys that avoid invalidation

It’s best practice to form your cache keys in a way that never requires you to issue explicit cache key deletions, and instead let normal evictions reclaim garbage keys. To do this we include the ‘updated_at’ column of some of our models in the cache key, so that whenever the model updates we know we’ll get a new key.
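The scheme can be sketched in plain Ruby (the Group struct and key format here are hypothetical stand-ins for a real ActiveRecord model):

```ruby
# Hypothetical Group standing in for an ActiveRecord model.
Group = Struct.new(:id, :updated_at)

# A cache key that changes whenever the record changes, so stale
# entries are never read again and simply age out of the cache.
def cache_key(record)
  "group/#{record.id}-#{record.updated_at.to_i}"
end

group = Group.new(42, Time.utc(2011, 8, 24, 12, 0, 0))
old_key = cache_key(group)

group.updated_at = Time.utc(2011, 8, 24, 12, 5, 0)  # the record was updated
new_key = cache_key(group)
# new_key differs from old_key, so no explicit deletion is ever needed
```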

ActiveRecord::Timestamp#touch

There is an innocuous method called ‘touch’ that simply bumps ‘updated_at’ and saves the record. This is quite convenient to call on containers of has_many things, as a forum topic is to its posts. With a recent ‘updated_at’, kept recent via ‘touch’, there is a clean decoupling of cache strategy from business modeling when you want to communicate that the container’s version is as recent as its most recent addition.
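A minimal plain-Ruby sketch of what ‘touch’ does, with hypothetical Topic/Post classes standing in for real ActiveRecord models:

```ruby
class Topic
  attr_accessor :updated_at

  def initialize
    @updated_at = Time.utc(2011, 1, 1)
  end

  # Bump updated_at; in Rails, touch also saves the record.
  def touch
    @updated_at = Time.now.utc
  end
end

class Post
  def initialize(topic)
    @topic = topic
  end

  # Saving a post touches its container, so any cache key built from
  # the topic's updated_at changes along with its newest child.
  def save
    # ... persist the post itself ...
    @topic.touch
  end
end

topic = Topic.new
Post.new(topic).save
# topic.updated_at is now current
```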

ActiveRecord::Associations::BelongsToAssociation :counter_cache

In the absence of the indexed expressions and multi-index merges that some databases have, MySQL with InnoDB leaves it to the application to keep count lookups efficient. When dealing with tables of many millions of rows, a simple query like:

select count(*) from things

could take tens of seconds, as InnoDB actually needs to traverse all primary keys and literally count the rows that are visible in the current transaction.

class Foo < ActiveRecord::Base
  belongs_to :thing, :counter_cache => true
end

is a simple and convenient workaround for the ‘count(*)’ overhead: when a new Foo is created on a thing, that thing’s ‘foos_count’ gets a database increment of one; when it’s removed, ‘foos_count’ is decremented by one.
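The bookkeeping can be sketched in plain Ruby (the Thing container and helper functions are hypothetical; in Rails each step becomes a single UPDATE against the thing’s row):

```ruby
# Hypothetical container holding a foos_count column.
class Thing
  attr_accessor :foos_count

  def initialize
    @foos_count = 0
  end
end

# In Rails these become UPDATE thing SET foos_count = foos_count +/- 1.
def create_foo(thing)
  thing.foos_count += 1
end

def destroy_foo(thing)
  thing.foos_count -= 1
end

thing = Thing.new
3.times { create_foo(thing) }
destroy_foo(thing)
# thing.foos_count is now a cheap column read instead of a count(*) scan
```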

ActiveRecord::Associations::Association :dependent => :destroy

What better way to maintain all the business rules on deletion than to make sure your model’s callbacks fire when a container is destroyed? When business consistency is maintained in the ORM, this is also the best place to enforce the rules for removal.

This simply does the following for every association:

thing.foos.each(&:destroy)

Before:

thing.destroy

It’s best to have the behavior declared where the association is declared and let the framework make sure it’s not forgotten when actually performing the destruction.
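A plain-Ruby sketch of what :dependent => :destroy arranges, with hypothetical Thing/Foo classes in place of real ActiveRecord models:

```ruby
class Foo
  attr_reader :destroyed

  # Destroying a child runs its own callbacks (counters, caches, ...).
  def destroy
    @destroyed = true
  end
end

class Thing
  attr_reader :foos, :destroyed

  def initialize(foos)
    @foos = foos
  end

  # has_many :foos, :dependent => :destroy means: destroy each child
  # through its own callbacks before destroying the container itself.
  def destroy
    foos.each(&:destroy)
    @destroyed = true
  end
end

thing = Thing.new([Foo.new, Foo.new, Foo.new])
thing.destroy
```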

More than 140 characters

Some of the features of SoundCloud deserve more than 140 characters to do them justice. Even more than 255 characters. The tools that Rails gives you out of the box when you need more than 255 characters on a string in your data layer are limited; you’re left with the ‘text’ type in your schema definition. In MySQL this translates to a “TEXT” column.

A TEXT column with a maximum length of 65,535 (2^16 - 1) characters.

Well, in most cases we wouldn’t need more than 1000 characters for much on our site, but in the spirit of “deployed or it didn’t happen”, our early database schemas were mostly driven by the simplest option that ActiveRecord::Schema offered. (We also have far too many NULLable columns.)
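In schema terms the choice looks something like this (a hypothetical migration sketch, not our actual schema):

```ruby
create_table :groups do |t|
  t.string :name           # VARCHAR(255) in MySQL
  t.text   :description    # TEXT: the only built-in option past 255
end
```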

Trade space for time or time for space

In the early days, SoundCloud ran on 3 servers. CPU was precious, so for some of the HTML conversion tasks we traded space for time and stored a cached version of the HTML for some of the longer text fields in the same row as the record. This was the right choice at the time for the request load and available CPU.

(time is usually easier to scale than space)
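The trade can be sketched in plain Ruby (the Description class and trivial renderer are hypothetical; the real conversion was the expensive part):

```ruby
class Description
  attr_accessor :body
  attr_reader :body_html

  def initialize(body)
    @body = body
  end

  # Stand-in renderer; the real one did costly HTML conversion.
  def render(text)
    "<p>#{text}</p>"
  end

  # Spend CPU once on save, store the result in the same row,
  # then serve the cached column on every read.
  def save
    @body_html = render(@body)
  end
end

desc = Description.new("hello")
desc.save
```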

Tune your DBs for your workload

We separate reads from writes in our data layer, and we also have slightly different tunings on the DB slaves that accept reads. We have also experienced statement-based asynchronous replication breaking the replication thread due to these different tunings on different hardware.

We use row-based (actually mixed) replication between our masters and slaves because it’s as close as you’ll get to the storage engines speaking directly to each other, minimizing the risk of differences in hardware/tuning interfering with the replication thread.

Alert on disk space thresholds

We have a massive amount of Nagios checks on all our systems, including host-based partition %free checks. When any partition on a host reaches a threshold of free space, an alert is sent.
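Such a check boils down to an invocation of the standard Nagios check_disk plugin (thresholds and path here are hypothetical):

```
check_disk -w 15% -c 5% -p /var/lib/mysql-binlog
```

This warns at 15% free and goes critical at 5% free on the given partition.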

Separate data and write ahead log physical devices

Most OLTP databases have data that is bound by random read/writes, whereas binary logs are fundamentally sequential writes. You want these workloads on different spindles when using rotating disks because a transaction cannot complete without first being written to the binary log. If you need to move the drive head for your binlog, you’ve just added milliseconds to all your transactions.
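In MySQL terms, that separation is a matter of pointing the data directory and the binary log at different devices (paths here are hypothetical):

```
[mysqld]
# random I/O: tables and indexes on one device
datadir = /data/mysql
# sequential I/O: binary log on its own spindle
log-bin = /binlog/mysql-bin
```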

Clean up after yourself

Periodically there are administrative tasks that need to be performed on the site like mass takedowns of inappropriate content. The Rails console is amazing when you need to work directly with your business domain. Fire it up, get it done. For one-off maintenance tasks this is a life saver.

Add Spam, Mix, Bake and we’ve been Served

This all adds up, reviewing the good parts:

  • Abstract away bookkeeping in your domain model
  • Leverage existing patterns to get the job done quickly
  • Tune and monitor your DBs
  • Hand administer your site via your domain model

If you haven’t noticed yet, there have been some incorrigible entrepreneurs using some groups to advertise their pharmaceuticals distribution businesses. They have very thorough descriptions (5-50KB worth), and unprecedented activity with tens to thousands of posts in their own groups.

At 1:00pm yesterday, we were working through our cleanups, and cranked open a console with a large list of confirmed groups to remove. With Rails this is as simple as:

Group.find(ids).each(&:destroy)

Looks innocent enough.

What the database sees

From the database’s perspective, all of the automated bookkeeping behind those individual destroys turns into statements. It ended up looking something like this:

DELETE post
UPDATE group SET posts_count = posts_count - 1
UPDATE group SET updated_at = now()
... x 1-5000
DELETE track_group_contribution
UPDATE group SET tracks_count = tracks_count - 1
UPDATE group SET updated_at = now()
... x 1-5000
DELETE user_group_membership
UPDATE group SET members_count = members_count - 1
UPDATE group SET updated_at = now()
... x 1-5000

So we’re seeing 2N + 1 updates on a group, where N is the total number of associated objects.
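Working that figure through, assuming (hypothetically) the worst case of 5000 rows in each of the three associations above:

```ruby
posts   = 5_000
tracks  = 5_000
members = 5_000
n = posts + tracks + members

deletes       = n          # one DELETE per associated row
group_updates = 2 * n + 1  # one counter + one touch per row, plus one more
# 30,001 updates hammering a single group row
```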

What replication sees

When using row-based replication, any change to a row writes that entire row to the binary log. Some of these groups had over 100KB worth of text columns and hundreds of associated posts. Parsing a given binlog showed these group updates accounting for over 90% of the replication events being sent to the slaves.
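A back-of-the-envelope sketch of the volume, under assumed figures (row size and update count here are illustrative, not measured):

```ruby
row_image_bytes   = 100 * 1024  # a group row carrying ~100KB of text,
                                # logged in full on every UPDATE
updates_per_group = 1_000       # hypothetical: 500 associated rows x 2 updates

binlog_bytes = row_image_bytes * updates_per_group
binlog_mb    = binlog_bytes / (1024 * 1024)
# roughly 97MB of binlog just to destroy one group
```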

What the binlog partition sees

This is what finally brought us down. These group updates were producing over 3GB/min of binlogs to be replicated to our slaves. Our binlog partition went from 10GB used to its 100GB capacity in a matter of 30 minutes.
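The fill-rate arithmetic from those figures:

```ruby
binlog_rate_gb_per_min = 3.0    # what the mass destroy was producing
partition_gb           = 100.0  # binlog partition capacity
already_used_gb        = 10.0   # where it started

minutes_to_full = (partition_gb - already_used_gb) / binlog_rate_gb_per_min
# 30.0 minutes until the partition is full
```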

[...]

