

Some suggestions for writing good unit tests in RSpec

- 1. Isolate Tests to Avoid Shared State
- 2. Ensure Proper Test Setup and Teardown
- 3. Avoid Timing Issues (Race Conditions)
- 4. Stabilize External Dependencies
- 5. Use Time-Dependent Tests Carefully
- 6. Ensure Proper Mocking and Stubbing
- 7. Run Tests in Isolation
- 8. Handle Database Transactions Properly
- 9. Improve Test Stability with CI/CD
- Conclusion
1. Isolate Tests to Avoid Shared State
Flaky tests often arise when tests rely on shared state, meaning the state of one test can affect the outcome of another. This can happen when objects or variables are not reset between tests.
Solution:
- Use let and let! for lazy and eager evaluation, so fresh objects are created for each test.
- Use before(:each) and after(:each) to reset shared state and prepare the environment for each test.
describe User do
  let(:user) { create(:user) }

  it 'is valid with a name and email' do
    expect(user).to be_valid
  end
end
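The difference between the two helpers matters when a record must already exist before the example body runs: let is lazy (the block runs on first reference), while let! evaluates eagerly in an implicit before hook. A minimal sketch, assuming a FactoryBot :user factory like the one above and a clean database per example:

describe User do
  # `let` is lazy: no user is created until `user` is first referenced.
  let(:user) { create(:user) }

  # `let!` is eager: this record exists before each example runs.
  let!(:existing_user) { create(:user) }

  it 'creates the eager record up front' do
    expect(User.count).to eq(1) # only `existing_user` exists so far
    user                        # first reference triggers creation
    expect(User.count).to eq(2)
  end
end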
Avoid using global variables or objects that persist between tests. Each test should be independent of others to avoid flaky behavior.
2. Ensure Proper Test Setup and Teardown
Failing to clean up after tests can result in issues, especially with database state or mock objects. You can use RSpec’s hooks to set up and clean up resources before and after tests.
Solution:
- Use before(:each) to set up state for tests.
- Use after(:each) or after(:all) to clean up any resources or mock objects.
describe 'User creation' do
  before(:each) do
    @user = User.create!(name: 'John Doe', email: 'john@example.com')
  end

  after(:each) do
    # Cleanup
    @user.destroy
  end

  it 'should be valid' do
    expect(@user).to be_valid
  end
end
3. Avoid Timing Issues (Race Conditions)
Some flaky tests happen due to timing issues, especially when interacting with asynchronous code or external systems (e.g., web requests, file operations). For example, tests might fail because a background job hasn't finished executing by the time the test asserts the result.
Solution:
- Use Capybara's waiting matchers (for example, have_content with a wait: option) so assertions wait for asynchronous processes to complete before failing.
- For tests that depend on background jobs or external services, use tools like Sidekiq's test helpers or VCR to avoid real-time delays; see the Sidekiq sketch after the Capybara example below.
Example with Capybara:
it 'loads the page after the background job completes' do
  visit some_path
  # Wait for a background job to finish and update the page
  expect(page).to have_content('Job complete', wait: 10)
end
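For Sidekiq-backed jobs, requiring sidekiq/testing lets examples run jobs synchronously instead of waiting on a real worker. A minimal sketch, assuming a hypothetical ReportJob worker and a user record:

require 'sidekiq/testing'

it 'generates the report without waiting on a real worker' do
  Sidekiq::Testing.inline! do
    # Jobs enqueued inside this block are executed immediately, in-process.
    ReportJob.perform_async(user.id)
  end
  expect(user.reload.report).to be_present
end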
4. Stabilize External Dependencies
Flaky tests are often caused by external dependencies like APIs, databases, or services that are slow or unreliable.
Solution:
- Stub or mock external dependencies to avoid relying on live services during testing. Tools like WebMock, VCR, or RSpec mocks can intercept requests and simulate responses.
# Example of stubbing an external API call using WebMock
before do
  stub_request(:get, "https://api.example.com/data").
    to_return(status: 200, body: '{"key": "value"}', headers: {})
end

it 'fetches data from the API' do
  result = MyClass.new.fetch_data
  expect(result).to eq('{"key": "value"}')
end
- Use database cleaning tools like DatabaseCleaner to ensure a clean database between tests. This prevents tests from affecting one another by leaving behind stale data or state.
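A common place to wire this up is rails_helper.rb (or spec_helper.rb). A minimal sketch using the transaction strategy, assuming the database_cleaner-active_record gem is installed:

# spec/rails_helper.rb (sketch)
require 'database_cleaner/active_record'

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation) # start the suite from an empty database
  end

  config.around(:each) do |example|
    # Wrap every example in a transaction that is rolled back afterwards.
    DatabaseCleaner.cleaning { example.run }
  end
end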
5. Use Time-Dependent Tests Carefully
Tests that rely on time (e.g., time-sensitive data, time-based expiration, or scheduled tasks) can be flaky because the time at which the test is run may vary.
Solution:
- Use time helpers or libraries like Timecop to mock and freeze time for tests involving time-dependent logic.
require 'timecop'

it 'tests time-dependent behavior' do
  Timecop.freeze(Time.local(2025, 3, 2)) do
    expect(Date.today).to eq(Date.new(2025, 3, 2))
  end
end
- This approach ensures your tests are always using the same "frozen" time, making them predictable and not dependent on the actual time of day when they are executed.
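If the application is on Rails, ActiveSupport's built-in time helpers offer the same behavior without an extra gem. A minimal sketch:

require 'active_support/testing/time_helpers'

RSpec.configure do |config|
  config.include ActiveSupport::Testing::TimeHelpers
end

it 'tests time-dependent behavior with travel_to' do
  # Time is restored automatically when the block exits.
  travel_to Time.zone.local(2025, 3, 2) do
    expect(Date.current).to eq(Date.new(2025, 3, 2))
  end
end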
6. Ensure Proper Mocking and Stubbing
When you use mocks and stubs, improper configurations can lead to flaky tests. For example, if you expect an object to return a value that isn’t properly mocked, your test may fail randomly.
Solution:
- Always make sure that the methods you are mocking or stubbing actually exist on the object you're mocking. Consider using Verifying Doubles to avoid incorrect mocks.
# Example with verifying doubles
describe User do
  it 'returns the correct user greeting' do
    user_double = instance_double(User, greet: 'Hello, Test User!')
    expect(user_double.greet).to eq('Hello, Test User!')
  end
end
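The benefit of instance_double over a plain double is that it fails loudly when a stubbed method does not exist on the real class, rather than passing silently. A small sketch, assuming User does not define a method called gret:

it 'rejects stubs for methods User does not define' do
  expect {
    instance_double(User, gret: 'typo!') # misspelled method name
  }.to raise_error(RSpec::Mocks::MockExpectationError)
end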
Tip: Avoid using global mocks or stubs that affect multiple tests at once. Each test should control its own dependencies.
7. Run Tests in Isolation
Ensure that tests run in isolation and that previous tests do not affect the next ones. Avoid relying on global state or previous test setups that might carry over.
Solution:
- Use before(:each) and after(:each) hooks to ensure the environment is reset between tests.
- Avoid using shared state or modifying global variables across tests.
- Use RSpec’s --seed option to randomize test execution and catch order-dependent failures.
rspec --seed 12345
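To make random ordering the default rather than a one-off flag, it can be set in spec_helper.rb. A minimal sketch:

# spec/spec_helper.rb (sketch)
RSpec.configure do |config|
  # Run specs in random order to surface order-dependent failures.
  config.order = :random

  # Seed Ruby's RNG with RSpec's seed so a failing order can be reproduced with `rspec --seed <n>`.
  Kernel.srand config.seed
end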
8. Handle Database Transactions Properly
If you're using a database in your tests, improper transaction handling can lead to flaky tests. For example, if database records aren't cleaned up correctly, the state of your database can affect subsequent tests.
Solution:
- Use DatabaseCleaner or the built-in transactional fixtures in Rails to ensure a clean state between tests.
- Be sure to clean up any external resources (e.g., file uploads, caches, etc.) that might persist after the test.
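With rspec-rails, wrapping each example in a rolled-back transaction is a single setting. A minimal sketch:

# spec/rails_helper.rb (sketch)
RSpec.configure do |config|
  # Wrap each example in a database transaction that is rolled back afterwards.
  config.use_transactional_fixtures = true
end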
9. Improve Test Stability with CI/CD
Flaky tests can also arise from environment inconsistencies in continuous integration (CI) systems. Ensure that your CI environment is configured consistently (e.g., same Ruby version, dependencies, environment variables, and network access).
Solution:
- Use containerization (e.g., Docker) to ensure that the test environment is consistent across all runs.
- Run your tests on a clean environment each time (e.g., use Docker or a fresh VM for each CI run).
- If your tests are still flaky in CI, consider adding retry logic (e.g., retrying failed tests in CI to minimize noise).
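One option for retry logic is the rspec-retry gem, which re-runs only the examples you tag as flaky. A minimal sketch, assuming the gem is in the test group of the Gemfile:

# spec/spec_helper.rb (sketch)
require 'rspec/retry'

RSpec.configure do |config|
  config.verbose_retry = true                  # log each retry attempt
  config.display_try_failure_messages = true   # show why earlier tries failed

  # Retry examples tagged with `:flaky` up to 3 times.
  config.around(:each, :flaky) do |example|
    example.run_with_retry retry: 3
  end
end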
Conclusion
By carefully managing state, time, external dependencies, and concurrency, you can greatly reduce flaky tests in RSpec and make your test suite more reliable.
For more information, like and follow the MFV pages for blog updates, best practices, and career stories of Forwardians at:
Facebook: https://www.facebook.com/moneyforward.vn
Linkedin: https://www.linkedin.com/company/money-forward-vietnam/
Youtube: https://www.youtube.com/channel/UCtIsKEVyMceskd0YjCcfvPg


