From the course: Tech on the Go: Building a Software Test Department
A practical case for quality
- A practical case for quality. Most people don't get up in the morning and think to themselves, "Today, I'm going to make a mistake at work." And yet, despite our best intentions, the inevitable will happen. In software engineering, mistakes take many forms: anything from a misplaced form element, overlapping images, or stray debugging output, to crashes and lost data. So with this in mind, what is quality? Software quality in its purest form describes the desirable attributes of the software, what we want: a system that does what we want it to. The Merriam-Webster definition of quality is a degree of excellence. Excellence isn't perfection. It's being outstanding or very good. Humans are imperfect by nature, and we've adapted to that. We make inferences based on incomplete information and fill in the blanks. If you see a misspelled word, you'll probably still understand the intent based on the context, unless the misspelling changes the semantic meaning. When visiting a website, you've probably encountered small mistakes that didn't directly impact you, to the point that you didn't even notice them. Your brain cheerfully fixed them for you, and you blissfully completed your task. But then you tried to buy something and got a weird warning. You tried again, and that time it worked. Awkward, but whatever. Then when you check out, you get an error and you cannot complete the transaction. If an experience is bad enough, the problems exceed the pain tolerance of the user. At this point, would you trust this site? Through repeated failures, trust and confidence have eroded, leaving the user frustrated. With web and mobile applications, sometimes a single error will cause people to leave immediately and never return. Those first impressions do matter. There are times when perfection is required. A compromise can have devastating consequences if the system involves finances, life and death, and so forth. A rounding error in a transaction calculation, repeated millions of times, could cost an enormous amount of money. What if a buffer overflow in a car's braking system causes it to fail after driving for a certain amount of time, or there's an error in a medication dosage calculation? Regardless of the consequences, there should be a degree of excellence in the software we build. There are some incredibly formal ways to measure software quality with metrics, and there are international standards for quality and frameworks for representing it. This course won't cover them in any depth, as I intend to focus on practical, less formal approaches. I'm going to start with the basics, and I would recommend that you do so as well. Add more formalization and process as you go along, when more structure is necessary to scale. How can we determine the quality of our system?
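As a side note, the rounding-error point is easy to see concretely. The following is a minimal sketch in Python (not from the course) showing how a one-cent rounding difference, invisible in a single transaction, becomes real money when it repeats across millions of transactions; the transaction count and prices are illustrative assumptions.

```python
# Minimal sketch: binary floats cannot represent 2.675 exactly,
# so rounding to cents quietly drops a cent.
from decimal import Decimal, ROUND_HALF_UP

price = 2.675
print(round(price, 2))   # 2.67 -- the float is stored as 2.67499999..., so it rounds down

exact = Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(exact)             # 2.68 -- the result the business expects

# One lost cent per transaction, repeated across millions of transactions (illustrative figure):
transactions = 5_000_000
print(transactions * Decimal("0.01"))  # 50000.00
```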
Contents
- A practical case for quality (3m 5s)
- Why should you test software? (3m 52s)
- Does the difference between bugs and defects matter? (2m 19s)
- What is a test case? (3m 4s)
- Measuring code and test coverage (3m 44s)
- What kinds of tests are there? (5m 31s)
- Manual testing for correctness (3m 21s)
- Automated software testing for rapid feedback (4m 36s)
- Regression testing for confidence (3m 10s)
- The practice of exploratory testing (4m 35s)
- Session-based testing with a group (3m 53s)
- Linting and detecting bad code smells (5m 32s)
- The role of security testing (3m 44s)
- Effective bug reporting (5m 14s)
- Building a partnership with engineering and product (5m 51s)
- The first test engineer (4m 7s)
- Testing as part of a CI/CD pipeline (3m 26s)
- Scaling a test department (2m 28s)