Sympathy for the Developer
Sarah Gibson
Allow me to introduce myself
So, I work at Veracode.
We scan a lot of applications.
I’ve been working with scan results, of one kind or another, for the past five years.
Also about a year ago
I needed a research topic for Ming Chow’s security class at
Tufts University.
Thought about “fixing” WebGoat.
He asked me to think about something else.
State of Software Security
Veracode has a lot of data
Mostly an annual report on trends across the applications we scan.
Some findings are surprisingly consistent over time.
https://www.veracode.com/sites/default/files/Resources/Reports/state-of-software-security-volume-7-veracode-report.pdf
SQL Injection
Flaw type I wanted to fix in WebGoat.
Fairly prevalent.
Veracode finds it in static scans, and can detect fixes.
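To make the flaw concrete, here is a minimal Python sketch (illustrative only; the table and queries are hypothetical and not from the talk) showing the injectable pattern static scans flag, next to the parameterized fix:

```python
import sqlite3

# A throwaway in-memory database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

def find_user_vulnerable(name):
    # UNSAFE: user input is spliced into the SQL text, so a value like
    # "' OR '1'='1" rewrites the query. This is the pattern scanners flag.
    query = "SELECT id, name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(name):
    # SAFE: a parameterized query binds the input as data, not as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```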
I had made some assumptions
Looked at flaw prevalence data over previous SOSS reports.
The number didn’t change very much.
Why was it so flat? For years?
Measuring the prevalence of SQLi
The measurement refers to the presence of at least one SQL
injection flaw on first static scan.
It is one of the standard measurements that gets reported in
the SOSS.
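As a rough sketch of what that measurement boils down to (the record layout here is hypothetical, for illustration only, not Veracode's actual schema): prevalence is the share of first static scans that report at least one SQL injection finding.

```python
def sqli_prevalence(first_scans):
    """Share of first static scans with at least one SQLi finding.

    `first_scans` is assumed to be a list of dicts with a 'sqli_flaw_count'
    key; the layout is hypothetical, for illustration only.
    """
    if not first_scans:
        return 0.0
    flagged = sum(1 for scan in first_scans if scan["sqli_flaw_count"] > 0)
    return flagged / len(first_scans)

# Example: 2 of 3 first scans have at least one SQLi flaw -> ~0.667
print(sqli_prevalence([{"sqli_flaw_count": 3},
                       {"sqli_flaw_count": 0},
                       {"sqli_flaw_count": 1}]))
```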
Everyone Poops
Flaws happen.
We know bugs happen.
Why pretend that security flaws are different?
SQLi prevalence on first scan
All first static scans between
2013 and first half of 2017.
Mean: 31.9%
SD: 0.36%
[Chart: share of applications with vs. without SQLi on first scan, by year, 2013–2017]
SQLi prevalence rate across orgs
Same data, but looking across
17 randomly selected
organizations.
Fix rate by application on 3rd scan
App fix rate
Some/all fixed flaws: 44%
No net-fixed flaws: 56%
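One way to read that split, as a sketch only: assume "some/all fixed" means a net decrease in SQLi flaw count between the first and third scan (per the speaker notes at the end), and that per-application scan histories are available; the data layout below is hypothetical.

```python
def app_fix_rate(apps):
    """Share of applications whose SQLi flaw count dropped by the third scan.

    `apps` is assumed to map an app id to its SQLi flaw counts in scan order;
    the layout is hypothetical, for illustration only.
    """
    eligible = {app: counts for app, counts in apps.items() if len(counts) >= 3}
    if not eligible:
        return 0.0
    improved = sum(1 for counts in eligible.values() if counts[2] < counts[0])
    return improved / len(eligible)

# Example: one app improves (5 -> 2), one does not (4 -> 6) -> 0.5
print(app_fix_rate({"app-a": [5, 4, 2], "app-b": [4, 4, 6]}))
```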
Good News
The overall flaw fix rate is
greater than the introduction
rate.
Room to Play
As scans move to sandboxes,
the prevalence of SQLi in
policy scans on first scan goes
down.
Prevalence for all first scans
remains the same.
Fix rates in context
Study published by O’Reilly and SIG.
Survey questions about use of code quality tools.
~32% of respondents reported fixing 80%+ of issues found
using code quality tools.
Only 11% of respondents reported fixing less than 20% of the
issues.
https://www.sig.eu/insight/improving-code-quality/
Does anything affect fix rates?
The 2016 State of Software Security reported on two factors that appear to influence fix rates.
1. Remediation Coaching
2. eLearning subscriptions
https://www.veracode.com/sites/default/files/Resources/Reports/state-of-software-security-volume-7-veracode-report.pdf
It's like we're helping.
Developers fix flaws when they’re found.
When developers have access to consequence-free scans, learning tools, and help, fix rates get even better.
Security working in conjunction with
development can allow both teams to succeed.
Conclusions
Flaws happen.
Devs will work to fix findings, and they do even better with friendly assistance.
Look for flaws. Please.
Thanks! Any Questions?

Editor's Notes

  • #3: Hi, I’m Sarah. I work at Veracode; if you’re not familiar with what we do, we offer application security testing and services to businesses. Our main testing solutions are static binary analysis and dynamic web application scanning. We scan a lot, doing hundreds of thousands of scans in the last year alone (get number). I’ve done a couple of different things at Veracode over the past five years, bouncing back and forth between services, operations, and engineering, getting different perspectives on how people scan and what we find. This story starts about a year ago, the last time I changed jobs, when I moved back to a customer-facing role, helping developers understand and fix the flaws we find.
  • #4: New job and last class of my program at Tufts. I knew Ming “assigned” a research paper each semester, and you just needed to pick an interesting topic. I was starting to help people fix what they find, but didn’t have a lot of experience actually doing that myself. I thought about fixing a known vulnerable web app as a way to get into the mind of a developer having to fix a legacy application. Ming’s review of my proposal asked if I could instead take a look at Veracode’s SOSS reports over the past couple of years and see if there was anything interesting in the data over time. (I guess he wasn’t into an auto-ethnographic study of developers receiving security reports.)
  • #5: What is the SOSS? Turns out there was something in there. I went looking for changes in fix rates; what I found was a lack of change in flaw introduction rates.
  • #6: Quick detour.
  • #7: I was still interested in fix rates. If I wasn’t going to better understand developers by fixing flaws myself, I wanted to see if I could get an understanding through the data. In order to look at fix rates I needed to get an idea of what flaw introduction rates looked like. The flaw prevalence number was really steady, and I wasn’t sure how it was measured. At first I assumed it was across all scans in a given year. My first hypothesis was that while flaw introduction rates were going down, flaw fix rates were non-existent. This would suck. I was wrong. Flaw introduction rates were not going down.
  • #8: So this is why. The prevalence metric is only looking at the first scan. It does not look at subsequent scans, and contains no fix data. At all. If it was flat, then developers weren’t reducing flaw introduction rates over time. We spend a lot of time thinking about prevention, and there are organizations that are very proactive in working with their developers on security; was it really flat?
  • #9: Off-by-one errors, etc.
  • #10: I expected that all of what has happened in the last five years would have affected the rate of flaw introduction, and it hasn’t. It’s the same. 150,000 scans. SQLi prevalence on first scan is the same as it was in 2013. Kind of huge. Avg flaw count: 44.10363; SD: 456.2882; median: 5.
  • #11: Orgs are kept anonymous in the data. Pulled the first 50 scan entries, removed duplicates, and removed those orgs with fewer than 100 applications. Trend line data? Mean? Clustering around 30%, which is what we would expect if the previous graph is representative of the greater population and not just an artifact of something.
  • #12: Sample description (three scans or more of similar analysis size)*. *“Similar” defined as either a 90% similar set, or those values within one SD of the mean.
  • #13: Individual flaw fix rate: 48%. Of applications in the sample, 44% had a decrease in SQLi flaw count after 3 scans. When including customer mitigations, 51% had a decrease in SQLi flaw count after 3 scans.
  • #14: “Consequence free.” What is the Sandbox feature? What does it allow devs and security to do? This looks like it is affecting policy scan SQLi prevalence; why? Room to experiment, to scan whatever, to learn. To do better.
  • #15: What is my actual 80%-fixed-by-third-scan number? 23% fixed all; 34% of the sample had 80% or more of their SQLi flaws fixed by the third scan of similar size. That lines up really nicely. That 11% compares to more than a 50% slice of the sample that had less than 20% fixed, no change, or new flaws introduced.
  • #16: This will change.
  • #17: A recent survey by ESG and Veracode: 53% of respondents said that they felt that security and dev were working collaboratively in their org.
  • #18: “Have some sympathy,” or some other Rolling Stones pun. None of this stuff changes the rate at which we make mistakes; however, giving developers access to these tools and resources makes all the difference in the world. Takeaways: give developers automated tools to find their own stuff, and they will use them and fix things! If you give them coaching, they will fix them faster because they know what they’re doing. Neither of these things stops mistakes from being written in the first place.