PAGE
1
DEVOPS INDONESIA
Eriawan Kusumawardhono
DevOps Community in Indonesia
Jakarta, 5 May 2021
Feature Scoring in Green Field Application Development and DevOps
Feature Scoring in Application Development and DevOps
Presented by Eriawan Kusumawardhono
About Eriawan
• Based in Indonesia
• MVP since 2012, focusing on Developer Technologies (F#/C#/VB, .NET Core, Azure DevOps, open source)
• LinkedIn: https://www.linkedin.com/in/eriawan-kusumawardhono/
• GitHub: eriawan
• Member of the .NET Foundation's OSS Project onboarding committee. Yes, please ping me for support with OSS .NET projects on GitHub
Main course today
1. Introduction to feature scoring
2. Elements of feature scoring
3. Best practices
Introduction to Feature scoring
…and its relation to green field software development
What is feature scoring?
• A metric that measures the relevance, usability, and perception of an application's features, from development through to the operational stages of the application
• Each feature must be measurable, in the sense that it is easily understood and leaves no ambiguity for the developers and the rest of the stakeholders (users, operations/infrastructure departments, and optional but potentially decisive parties such as project owners)
• The measurement spans more than one point in time, since we track how the metric performs over the feature's life
“Start with a brand new language and you essentially start
with minus 1,000 points. And now, you’ve got to win back your
1,000 points before we’re even talking. Lots of languages
never get to more than minus 500. Yeah, they add value but
they didn’t add enough value over what was there before.”
- Anders Hejlsberg, Microsoft Technical Fellow
What makes a feature suitable for feature scoring
• A feature must be general yet quickly understandable; this is one of the requirements
• This means a feature must be drilled down from a business use case to at least a technical use case, and both the development team and the other stakeholders must be informed
• For development, this means all features of the application under development
• For the infrastructure or operations department, the focus can be on how the feature is turned into metrics
Why this suits green field software development
1. Green field development starts from "0"; an everyday analogy is opening a new field for planting
2. All defined features start from "0" or below, depending on your actual needs (explained next)
3. All features have a global (general) overview, so a business use case and a technical use case must be defined first
What feature scoring is used for (from the start of development)
• Measuring how a feature performs against the matrix of requirements (business use case and technical use case), test results (e.g. SIT and UAT), and actual feedback and performance after deployment to production
• The measurement starts early in development. If a feature was marked done in requirements, development, and SIT/UAT testing but still has a bad user experience and bugs, its score goes negative, but this should not be the main concern
• Feature performance should be checked against the feature's initial MoSCoW category. For example: if a feature was categorized as "Should have" but performs badly because users rarely use it, it accrues minus points
• Therefore, a prioritization of feature score weights must be defined
MoSCoW method
https://www.productplan.com/glossary/moscow-prioritization/
Elements of Feature scoring
Fundamental elements of feature scoring
• A feature's performance score combines the matrix of requirements (met or not), SIT/UAT test results, bug reports from users, and usability in production
• Usability in production is gathered from actual users' reports and by examining telemetry, for example application logs
• Prioritizations
• Weight scale factor
Feature weight prioritization
• The general software architecture must come first among the features
• Categorize each feature using the MoSCoW method, so that "Must have" features are measured first
• For bugs related to any feature, the minus points grow larger the lower the feature sits in MoSCoW priority, especially for features categorized as "Could have" and below.
• Solved bugs on Must have and Should have features count as positive points equal to the related bug's penalty, so the total nets out to "0".
• Further weighting is open to customization
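As an illustrative sketch of the two rules above (measure "Must have" first, and solved bugs offset open bugs to a net "0"): the names MOSCOW_PRIORITY, measurement_order, and net_bug_score are hypothetical, not from any tool or framework.

```python
# Hypothetical sketch only; names are illustrative.
MOSCOW_PRIORITY = ["Must Have", "Should Have", "Could Have", "Will Not Have"]

def measurement_order(features):
    """Sort features so "Must Have" items are measured first."""
    return sorted(features, key=lambda f: MOSCOW_PRIORITY.index(f["moscow"]))

def net_bug_score(opened, solved, penalty):
    """Each open bug scores -penalty, each solved bug +penalty,
    so a feature whose bugs are all solved nets out to 0."""
    return penalty * (solved - opened)

features = [
    {"name": "csv export", "moscow": "Could Have"},
    {"name": "login", "moscow": "Must Have"},
]
print([f["name"] for f in measurement_order(features)])  # ['login', 'csv export']
print(net_bug_score(opened=3, solved=3, penalty=10))     # 0
```

Customizing the weighting then amounts to changing the penalty per MoSCoW category rather than the offsetting rule.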
Standard weighting: basic considerations
1. All bugs must carry a basic MoSCoW category: at least Must Have, Should Have, or Could Have.
2. In addition to MoSCoW, it is recommended to tag every bug with one of two criticality categories: "Showstopper" (or Critical) or Functional. Showstopper/critical means the feature does not work at all: it errors immediately or the app hangs. Showstopper bugs must get the highest priority and a large negative score.
3. Bugs that come from telemetry/app logging must also be categorized as Showstopper or Functional, with an additional External category. An example: abrupt timeout logs when the running app tries to communicate with a third-party server.
4. Bugs that come from automated UI tests must be categorized as functional bugs first, because the tests run unattended. Automated UI test bugs always carry the lowest score.
Sample basic weighting (MoSCoW and criticality/severity)
MoSCoW method                                         Score
Must Have                                             -10
Should Have                                           -15
Could Have                                            -25
Telemetry (log of error)                              -20
Solved Must Have                                       10
Solved Should Have                                     15
Solved Could Have                                      15
Telemetry of a Could Have feature shows
consistent, frequent usage                             10
Solved telemetry error                                 20

Criticality                                           Score
Showstopper                                           -30
Functional                                            -10
Solved Showstopper                                     30
Solved Functional                                      10
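The sample weights above can be encoded directly. A minimal sketch, treating each table row as an event type logged against one feature; the dictionaries and the score_feature helper are hypothetical names, not part of any tool.

```python
# Weights copied from the sample table above; event names are illustrative.
MOSCOW_SCORES = {
    "Must Have bug": -10,
    "Should Have bug": -15,
    "Could Have bug": -25,
    "Telemetry error": -20,
    "Solved Must Have": 10,
    "Solved Should Have": 15,
    "Solved Could Have": 15,
    "Could Have frequent usage": 10,
    "Solved Telemetry error": 20,
}
CRITICALITY_SCORES = {
    "Showstopper": -30,
    "Functional": -10,
    "Solved Showstopper": 30,
    "Solved Functional": 10,
}

def score_feature(moscow_events, criticality_events):
    """Net score for one feature: sum the weight of every logged event."""
    return (sum(MOSCOW_SCORES[e] for e in moscow_events)
            + sum(CRITICALITY_SCORES[e] for e in criticality_events))

# A Must Have feature hits a showstopper bug that is later fixed:
print(score_feature(["Must Have bug", "Solved Must Have"],
                    ["Showstopper", "Solved Showstopper"]))  # 0
```

An unsolved Could Have bug tagged Functional would net -35, matching the rule that lower MoSCoW priorities are punished harder.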
Demo using Azure DevOps
Relevance in software development
• Is feature scoring important? For product-based application development, it is, because a software product has defined features, and each feature must have a history: bugs, test results, and actual feedback from production
Best practices of Feature scoring
Known best practices 1/2
• Feature scoring is highly useful in product-based software development, especially in DevOps, where the software must improve continuously while being tracked not just for functionality but for the total value of its features, from development through deployment to usage in production
• Scoring must have at least the MoSCoW method in place. This helps prioritize work (especially bug fixing) and gauges the real value of each feature.
• As a result, a "should have" feature can be promoted to "must have", especially when it proves high value through a combination of high usage, ease of use, and positive feedback from actual users after deployment to production
Known best practices 2/2
• Why are lower MoSCoW priorities punished more than must haves? Because priority reflects the work that should be done first, and speed of delivery means delivering the value that matters most! This is also why some teams, such as the .NET development team at Microsoft, start each feature in new software development at -1000 points instead of 0: new development always involves workforce allocation, prioritization, schedule, and allocated budget.
Cultural best practices for Feature Scoring
1. Always say no to any feature in "will not have" when prioritizing work for the current sprint/phase.
2. Any exclusion of a feature caused by a scoring-based demotion from "could have" to "will not have" (based on the MoSCoW method) must be treated as a documented change request, because any promotion/demotion always changes development priorities
3. The software's architecture must always be included among the features. For example: supporting an OS that its vendor still supports is part of "must have", whereas supporting an OS that is almost out of support should not be considered "must have". In 2021, supporting Windows 7 adds burden and risk because Microsoft no longer supports it, while supporting Windows Server 2012 and later is a must have.
4. Feature scoring can also be used for brown field development, but every existing feature must be inventoried from 0, including its documentation. Software requirement gathering must therefore be restarted and adjusted to support feature scoring; otherwise, feature scoring is no longer usable even after the MoSCoW categories are defined.
Thank You! ☺
Stay Connected
@devopsindonesia
http://www.devopsindonesia.com
@IDDevOps
@DevOpsIndonesia
@IDDevOps DevOps Indonesia
Alone we are smart, together we are brilliant
THANK YOU !
Quote by Steve Anderson