Microservices Minus the Macrocost
Lessons Learned While Building Microservices @ REA
@brentsnook
github.com/brentsnook
MICROSERVICES
• digestible
• disposable
• demarcated
• decoupled
• defenestrable
- has anyone heard of them before?
- some words that start with d
- disposable - if they interact using a common interface (like REST) you are free to implement them how you like. Use them to experiment with different languages. If they get too complex, scrap and rewrite them
- digestible - James Lewis has an answer to “how big should a microservice be”: “as big as my head”. You should be able to load one into your head and understand that part of the bigger system. Like zooming in.
- demarcated - use them to define bounded contexts around particular entities or concepts in a larger system.
- decoupled - only expose to clients exactly what they need, to reduce coupling.
THE GOD DATABASE
- hands up who hasn’t seen this before
- multiple clients coupled to a single database schema (a small illustration follows these notes)
- data warehousing, web-based systems, invoicing and reconciliation systems - tens of applications
- all depending on one or more chunks of data
- changes to the schema are hard to manage and prone to error
- apps talking directly to the database make schema changes hard to coordinate
- data in one lump == tragedy of the commons; nobody really owns the data, things can start to decay
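To make the coupling concrete, here is a hypothetical sketch - table, column and app names are invented, not REA's - of two unrelated applications issuing SQL against the same schema. Renaming or splitting the listings table now means coordinating a change across every one of them.

# hypothetical_clients.rb - illustrative only
require 'sequel'

DB = Sequel.connect(ENV.fetch('DATABASE_URL', 'sqlite://god.db'))

# the invoicing system reads billing amounts straight off the shared table
def amounts_to_invoice
  DB[:listings].where(status: 'active').select_map(:monthly_fee)
end

# the data warehouse export reads a different slice of the very same table
def listings_snapshot
  DB[:listings].select(:id, :address, :price, :status).all
end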
INTRODUCE MICROSERVICES
- a long path to get to where we want to be
- first, give the data a definitive owner
- wedge microservices in between clients and the data they need
- encapsulate the data and only expose what is required (see the sketch after these notes)
- standardise communication on REST + JSON
- clients become adept at speaking REST and manipulating data
- data may not always live with a particular service but standardising this makes it easier to switch
- eventually clients become coupled to the service owning the data instead of the schema
- to make things better we had to first make them worse
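As a rough illustration of that middle step - not REA's actual code, and the listing resource and field names are assumptions - a minimal Rack service can sit between clients and the schema and expose only the fields consumers need, as JSON:

# listing_service.ru - a minimal, hypothetical sketch; run with `rackup listing_service.ru`
require 'json'
require 'sequel' # assumes a Sequel-compatible database is available

DB = Sequel.connect(ENV.fetch('DATABASE_URL', 'sqlite://listings.db'))

run lambda { |env|
  id = env['PATH_INFO'][%r{\A/listings/(\d+)\z}, 1]
  # expose only what clients need; the rest of the schema stays hidden
  row = id && DB[:listings].where(id: id.to_i).select(:id, :address, :price).first

  if row
    [200, { 'Content-Type' => 'application/json' }, [row.to_json]]
  else
    [404, { 'Content-Type' => 'application/json' }, [{ error: 'not found' }.to_json]]
  end
}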
PHASE OUT SCHEMA ACCESS
SPLIT DATA
- data can be partitioned into individual databases owned by the services
- data has a clear owner!
- because data is exchanged using HTTP and JSON, clients don’t actually know or care how the data is stored
- the god database can eventually be taken to the farm
- this was the journey we embarked on around a year ago
- I left in early July and, from all reports, this is progressing well, though it is still nowhere near this picture
LINEAR COSTS
•creating a new service
•provisioning a new set of environments
•configuring a new set of environments
•manual deployment
- what did we learn along the way?
- microservices by definition are fine grained - they are small things that perform one job well
- building microservices using our default approach was going to be costly as the number of services grew
- we had a bunch of linear costs, the types of cost where the total cost increases steadily as more services are added
- created new services by ripping the parts we needed out of a similar service - time consuming to strip out the cruft from the old project
- provisioned the environments manually via the existing ops department - lead times and coordination costs
- deployments were not entirely manual - we automated a certain amount to begin with and automated more as it made sense. You only start to feel the pain around this as you add more services
EXPONENTIAL COSTS
•integration testing new builds
- integration testing components became an exponential cost
- we started by building our test environments on AWS
- instances were cheap so we tried automated certification builds with a set of environments per component
- this quickly became unwieldy as the number of components grew - a copy of the certification environment per component means every new service multiplies both the environments to run and the end to end suites to keep green
- end to end testing on that scale quickly became unmanageable
COST OPPOSES
GRANULARITY
- these costs were affecting how we designed our services
- when we looked at certain responsibilities in some services, they clearly belonged in their own service
- we were piggybacking endpoints and capabilities onto existing services to reduce deployment and provisioning costs
- we understood why microservices should be fine-grained but couldn’t wear the cost to achieve that goal
- the answer was to reduce the pull of those forces
ECONOMIES OF SCALE
- we decided to chase economies of scale with the creation and management of our microservices
- involves early and ongoing investment in reducing the costs I mentioned earlier
- growing our capability to spawn and manage new services more cheaply
- set up a team dedicated to building this capability through tooling and other means
- the team formally met with other teams several times a week, and often informally several times a day
CHEAPER ENVIRONMENTS
•moved production environments to EC2
•took on more ops responsibility
•built tools for:
• provisioning
• deployment
• network configuration
- moved to EC2
- built command line tools to make provisioning and deployment trivial - setting up a new environment took a matter of minutes (a rough sketch of the idea follows)
- kept chipping away
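The tooling itself was internal, but the shape of it was roughly "one command, one new environment". A hedged sketch of that idea using today's aws-sdk-ec2 gem - the AMI, region, instance size and tag names are all placeholders, and this is not REA's actual tool:

#!/usr/bin/env ruby
# provision - illustrative sketch only
require 'aws-sdk-ec2' # assumes the aws-sdk-ec2 gem and AWS credentials are configured

service     = ARGV.fetch(0) { abort 'usage: provision SERVICE_NAME [ENVIRONMENT]' }
environment = ARGV.fetch(1, 'staging')

ec2 = Aws::EC2::Client.new(region: ENV.fetch('AWS_REGION', 'ap-southeast-2'))

response = ec2.run_instances(
  image_id: ENV.fetch('BASE_AMI'),  # a pre-baked image carrying the standard stack
  instance_type: 't3.micro',
  min_count: 1,
  max_count: 1,
  tag_specifications: [{
    resource_type: 'instance',
    tags: [
      { key: 'Name',        value: "#{service}-#{environment}" },
      { key: 'Service',     value: service },
      { key: 'Environment', value: environment }
    ]
  }]
)

puts "launched #{response.instances.first.instance_id} for #{service} (#{environment})"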
MICROSERVICE STENCIL
- a git project with a vanilla skeleton for a service
- standard stack (Rack, Webmachine, Nagios, Nginx, Splunk)
- standard packaging and deployment (RPM)
- encoded best practice for project setup
- spawning a new service took seconds - just clone the project (a skeleton sketch follows)
- continually improved
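A minimal sketch of what such a stencil's entry point might look like on that stack - resource names are hypothetical, and this is not the actual REA template:

# app.rb - hypothetical skeleton entry point for a stencil-based service
require 'webmachine'
require 'json'

# every service gets a basic health-check resource out of the box
class HealthResource < Webmachine::Resource
  def content_types_provided
    [['application/json', :to_json]]
  end

  def to_json
    { status: 'ok' }.to_json
  end
end

App = Webmachine::Application.new do |app|
  app.routes do
    add ['health'], HealthResource
  end
end

App.run if $PROGRAM_NAME == __FILE__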
CONSUMER DRIVEN
CONTRACTS
- wanted to focus more on the point to point relationships between components rather than end to end testing
- consumer driven contracts
- consumer of a service specifying what it requires from the provider
- provider will offer a superset of that
- encode contracts as tests, run them as part of the producer build
- specific tests failing should tell you that you have broken specific consumers
- first implemented as unit tests that lived with the producer (sketched after these notes)
- these fell out of step with the reality of what the consumer was really interested in
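Roughly, those early contracts looked like ordinary provider-side specs asserting the fields a named consumer relied on. A hypothetical sketch - the file path and the APP constant are assumptions:

# spec/contracts/time_consumer_contract_spec.rb - sketch of the earlier
# hand-written approach, living in the provider's own build
require 'json'
require 'rack/test'
require_relative '../../app' # assumes the provider exposes a Rack app as APP

describe 'contract with TimeConsumer' do
  include Rack::Test::Methods

  def app
    APP
  end

  it 'provides the hour and minute fields TimeConsumer relies on' do
    get '/time'

    expect(last_response.status).to eq(200)
    expect(JSON.parse(last_response.body)).to include('hour', 'minute')
  end
end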
github.com/uglyog/pact
- why not just have contracts driven from real examples?
- stub out the producer and record what the consumer expected
- serialise contract, play it back in the producer build to ensure it is doing the right thing
- hacked this into the consumer project we had at the time
- then pulled it out into an internal gem, and eventually released it as an external gem
PACT FILES
- consumer build, talks to stubbed out producer
- declare interactions, what we expect when we ask the producer for certain things - record mode
- record a pact file, a JSON serialisation of the interactions (an example follows)
- copy this to the producer build and run producer tests to ensure it honours the pact - like playback mode
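A pact file is plain JSON describing each interaction. For the time example on the following slides it looks roughly like this - the exact field layout varies between pact versions:

{
  "consumer": { "name": "TimeConsumer" },
  "provider": { "name": "TimeProvider" },
  "interactions": [
    {
      "description": "a request for the time",
      "request": { "method": "get", "path": "/time" },
      "response": { "status": 200, "body": { "hour": 10, "minute": 45 } }
    }
  ]
}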
CONSUMER - RECORD PACT
# spec/time_consumer_spec.rb
require 'pact/consumer/rspec'
require 'httparty'
require 'json'

# The consumer under test - a tiny client that asks the provider for the time.
class TimeConsumer
  include HTTParty
  base_uri 'localhost:1234'

  def get_time
    time = JSON.parse(self.class.get('/time').body)
    "the time is #{time['hour']}:#{time['minute']} ..."
  end
end

# Wire up a mock provider on the port the consumer points at.
Pact.service_consumer 'TimeConsumer' do
  has_pact_with 'TimeProvider' do
    mock_service :time_provider do
      port 1234
    end
  end
end

describe TimeConsumer do
  context 'when telling the time', :pact => true do
    it 'formats the time with hours and minutes' do
      # Declare the expected interaction; it is recorded into the pact file.
      time_provider.
        upon_receiving('a request for the time').
        with({ method: :get, path: '/time' }).
        will_respond_with({ status: 200, body: { 'hour' => 10, 'minute' => 45 } })

      expect(TimeConsumer.new.get_time).to eql('the time is 10:45 ...')
    end
  end
end
https://github.com/brentsnook/pact_examples
PRODUCER - PLAY BACK PACT

# spec/service_providers/pact_helper.rb
require 'pact/provider/rspec'
require 'json'

# A minimal Rack app standing in for the real provider.
class TimeProvider
  def call(env)
    [
      200,
      { "Content-Type" => "application/json" },
      [{ hour: 10, minute: 45, second: 22 }.to_json]
    ]
  end
end

# Replay the recorded pact against the provider to check it honours the contract.
Pact.service_provider 'TimeProvider' do
  app { TimeProvider.new }

  honours_pact_with 'TimeConsumer' do
    pact_uri File.dirname(__FILE__) + '/../pacts/timeconsumer-timeprovider.json'
  end
end

https://github.com/brentsnook/pact_examples
WEB OF PACTS/CONTRACTS
- web of contracts joining different consumers and producers
- can use a separate build to publish pact files between consumer and producer builds (a sketch follows these notes)
- pretty fast feedback when a consumer expectation is unrealistic or the producer has a regression
- can replace a lot of automated end to end testing but we also supplement with manual exploratory end to end testing
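One simple way to do that hand-off, sketched here as a hypothetical Rake task - the artifact directory is an assumption; anything both builds can reach works:

# Rakefile (consumer project) - hypothetical sketch of publishing the pact
require 'fileutils'

PACT_FILE    = 'spec/pacts/timeconsumer-timeprovider.json'
ARTIFACT_DIR = ENV.fetch('PACT_ARTIFACT_DIR', '/var/lib/ci/pacts') # assumed location

desc 'Make the generated pact available to the provider build for verification'
task :publish_pact do
  FileUtils.mkdir_p(ARTIFACT_DIR)
  FileUtils.cp(PACT_FILE, ARTIFACT_DIR)
  puts "published #{PACT_FILE} to #{ARTIFACT_DIR}"
end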
SWITCH FROM PREVENTION TO DETECTION
Fred George advocates replacing unit tests with monitoring production transactions and responding when something breaks.
This still makes me uncomfortable; I’d do both.
We didn’t get this far.
SO...
•invest in building economies of scale
•automating the crap out of things is generally a good way to reduce costs
•standardise service architecture to save on creation and maintenance costs
BUT
•don’t forget to use new services/rewrites to experiment with different technologies and approaches
WOULD YOU LIKE TO KNOW MORE?
Microservice Architecture (Fred George)
http://www.youtube.com/watch?v=2rKEveL55TY
Microservices - Java the Unix Way (James Lewis)
http://www.infoq.com/presentations/Micro-Services
How Big Should a Micro-Service Be? (James Lewis)
http://bovon.org/index.php/archives/350
Consumer Driven Contracts (Ian Robinson)
http://martinfowler.com/articles/consumerDrivenContracts.html
github.com/uglyog/pact
