
Most Ruby developers use Active Record for PostgreSQL database interactions. It provides a lot of magic and is simple to start with. However, data integrity problems can start manifesting once the codebase matures. In this tutorial, I will describe techniques for improving data integrity and validations in Ruby on Rails web apps. We’ll cover foreign key constraints, database-level validations, and more.
Rails app data integrity can get out of sync for various reasons. The Active Record approach to database modeling encourages developers to keep most of the logic in the application layer's Ruby code. Let's list some example actions that could corrupt the data state:
- update an object's attribute using `update_column`
- delete a parent object of an association with `delete`
- persist an object with `save(validate: false)`
- change model validations without updating the existing data
- concurrent data updates bypassing uniqueness validations
Rails makes it just too easy to corrupt the data. Data in an invalid state could be problematic, e.g., when running a migration that raises an exception in case of a validation error.
How to “Brute-force” detect validation errors
I want to share a dirty hack that I’ve recently come up with. Resorting to it may look like a signal of a poor codebase quality, but please bear with me.
It’s a simple Sidekiq worker:
```ruby
class ModelDataIntegrityCheckerJob
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(klass_name)
    klass = klass_name.constantize
    invalid_objects = []

    klass.find_each do |object|
      unless object.valid?
        invalid_objects << [object.id, object.errors.full_messages]
      end
    end

    if invalid_objects.present?
      raise "Invalid state for #{klass} objects: #{invalid_objects}"
    end

    invalid_objects
  end
end
```
You can use it like this:

```ruby
ModelDataIntegrityCheckerJob.perform_async(User.name)
ModelDataIntegrityCheckerJob.perform_async(Course.name)
```
It will raise a runtime error listing the IDs of the invalid objects together with their validation errors. If your dataset is big, you might want to run those jobs at night or scope the range of objects validated.
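The same brute-force idea can be expressed without any framework: anything that responds to `valid?` and `errors` can be scanned, which also makes it easy to scope the work by passing in a narrowed collection (e.g. a `where` relation instead of the whole table). A minimal, framework-free sketch — the `FakeRecord` class below is purely illustrative, not a real model:

```ruby
# Collects [id, errors] pairs for every invalid record in the given collection.
# Accepts any enumerable, so callers can pass a scoped relation to limit the work.
def collect_invalid(records)
  records.reject(&:valid?).map { |r| [r.id, r.errors] }
end

# Stand-in for an Active Record model, for illustration only.
FakeRecord = Struct.new(:id, :errors) do
  def valid?
    errors.empty?
  end
end

records = [FakeRecord.new(1, []), FakeRecord.new(2, ["Email can't be blank"])]
collect_invalid(records) # => [[2, ["Email can't be blank"]]]
```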
Brute force may not seem like an elegant solution to validating data integrity. But can you say with 100% certainty that every single one of your Active Record objects is currently valid?
I sometimes run those brute force checks before doing more complex database migrations to ensure that some sneaky invalid objects won't blow up during the process. I like to think of them as periodic data validation health checks.
Let's list a couple of more elegant techniques you can use to keep your app's data in a consistent and valid state.
NOT NULL constraints
“Non-binary logic” is a frequent guest in Ruby on Rails apps. I’ve seen many nasty bugs caused by the simple fact that:

```ruby
false != nil
```

Sometimes the business logic itself depended on `nil` in a database meaning something other than `false`… (╯°□°)╯︵ ┻━┻
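A few lines of plain Ruby show the trap: `nil` and `false` are both falsey, so conditionals mask the difference until an equality check (or a database query) exposes it:

```ruby
flag = nil

flag == false          # => false — nil is NOT equal to false
!flag                  # => true  — yet it behaves like false in conditionals
[nil, false].uniq.size # => 2 — a boolean column without NOT NULL really has three states
```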
Adding a boolean field without a NOT NULL constraint is a ticking time bomb. Make sure to update all the boolean fields in your database using the following migration:

```ruby
change_column_default :users, :boolean_attribute, false
change_column_null :users, :boolean_attribute, false
```
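Note that `change_column_null` will fail if existing rows still contain `NULL`, so those rows must be backfilled first — e.g. with `User.where(boolean_attribute: nil).update_all(boolean_attribute: false)` inside the migration. The idea, simulated here with plain hashes standing in for user rows (the `:admin` key is a hypothetical boolean column):

```ruby
# Rows as hashes, purely for illustration.
rows = [
  { id: 1, admin: nil },  # legacy row created before the default existed
  { id: 2, admin: true }
]

# Backfill NULLs to the new default before the NOT NULL constraint is added.
rows.each { |row| row[:admin] = false if row[:admin].nil? }

rows.none? { |row| row[:admin].nil? } # => true — the constraint can now be applied safely
```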
Database foreign key constraints
Database level constraints are not a first-class citizen in Rails apps. You can create a database relation without adding a corresponding foreign key, and Active Record will work correctly. Building a database schema without enforcing it using built-in PostgreSQL mechanisms can lead to various issues down the road. It is usually a good idea to add a foreign key constraint when creating a `has_many`/`belongs_to` relation.
I always prefer to explicitly add the foreign key, column, and indexes, instead of using the `add_reference` helper method. There's enough magic in Active Record already.
Adding a `posts` relation to the `User` model can be done using the following migration:
```ruby
add_column :posts, :user_id, :integer, null: false
add_index :posts, :user_id
add_foreign_key :posts, :users, column: :user_id
```
With this database structure, it will be impossible to leave orphaned post records in the database. Trying to remove a user who still has some posts will result in an `ActiveRecord::InvalidForeignKey` error.
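One way to react to that error is a simple rescue. Here is a standalone sketch — the exception class is stubbed so the example runs without Rails (in a real app you would rescue the real `ActiveRecord::InvalidForeignKey`), and `remove_user` and `BlockedUser` are hypothetical names:

```ruby
# Stub of the real exception class, only so this sketch runs standalone.
module ActiveRecord
  class InvalidForeignKey < StandardError; end
end

# Hypothetical service wrapping user removal; user is assumed to respond to #destroy!.
def remove_user(user)
  user.destroy!
  :removed
rescue ActiveRecord::InvalidForeignKey
  :has_dependent_records
end

# Fake user standing in for a record that still has posts referencing it.
BlockedUser = Struct.new(:id) do
  def destroy!
    raise ActiveRecord::InvalidForeignKey, "posts still reference this user"
  end
end

remove_user(BlockedUser.new(1)) # => :has_dependent_records
```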
To `on_delete: :cascade` or `dependent: :destroy`?
We can ensure that child records are removed when the parent is destroyed either with an Active Record callback (the `dependent: :destroy` option on the `has_many` method) or directly in the database. Adding `on_delete: :cascade` to the `add_foreign_key` method removes children using a database-level `ON DELETE CASCADE` referential action. It is much better in terms of performance but does not execute Active Record callbacks for the removed posts.
Despite all the hate those callbacks get, they are still useful in certain cases. One example is the well-known CarrierWave gem. It uses callbacks to remove associated files when an object is destroyed. Removing objects with the `on_delete: :cascade` approach would leave orphaned files stuck forever, without any reference, on S3 or wherever you keep them.
My approach would be to use foreign keys to ensure integrity, but stick with `dependent: :destroy` for removing objects. Skipping callbacks could lead to surprising behavior elsewhere in your Active Record code.
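The trade-off can be illustrated with a toy model: `dependent: :destroy` goes through each child's `destroy`, so cleanup hooks (like CarrierWave's file removal) run, while a database-level cascade just deletes the rows. The class below is a stand-in, not a real Active Record model:

```ruby
# Toy child record whose destroy runs a cleanup hook,
# standing in for CarrierWave removing an attached file.
class ToyPost
  attr_reader :file_removed

  def initialize
    @file_removed = false
  end

  def destroy
    @file_removed = true # the callback-style cleanup
  end
end

posts = [ToyPost.new, ToyPost.new]

# dependent: :destroy — each child's destroy (and its callbacks) runs:
posts.each(&:destroy)
posts.all?(&:file_removed) # => true

# on_delete: :cascade — the database would drop the rows directly; no Ruby
# code runs, so the cleanup above never happens and the files are orphaned.
```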
Summary
This post just scratches the surface of data integrity and validation in Ruby on Rails web apps. Active Record is a powerful tool that makes creating complex database schemas just too easy, without encouraging developers to think about the underlying mechanisms. Moving at least some of the rules to the database layer is a good starting point for keeping your Rails apps healthy in the long run.