You Can't Evaluate a Test Tool by Reading a Data Sheet 
All data sheets look virtually alike. The buzzwords are identical: "Industry Leader", "Unique 
Technology", "Automated Testing", and "Advanced Techniques". The screen shots are similar: "Bar 
Charts", "Flow Charts", "HTML reports" and "Status percentages". It is mind numbing. 
What is Software Testing? 
All of us who have done software testing recognize that testing comes in many flavors. For 
simplicity, we'll use three terms in this paper: 
System Testing 
Integration Testing 
Unit Testing 
Everyone does some amount of system testing, where they do some of the same things with the 
application that the end users will do with it. Notice that we said "some" and not "all." One of the 
most common reasons applications are fielded with bugs is that unexpected, and therefore 
untested, combinations of inputs are encountered by the application in the field. 
Not as many folks do integration testing, and even fewer do unit testing. If you have done 
integration or unit testing, you are probably painfully aware of the amount of test code that must 
be written to isolate a single file or group of files from the rest of the application. At the 
most stringent levels of testing, it is not at all uncommon for the amount of test code written to be 
larger than the amount of application code being tested. As a result, these levels of testing 
are generally applied to mission- and safety-critical applications in markets such as aviation, 
medical devices, and railway. 
What Does "Automated Testing" Mean? 
It is well known that the process of unit and integration testing manually is very expensive and time-consuming; 
therefore every tool being sold into this market will trumpet "Automated 
Testing" as a benefit. But what exactly is "automated testing"? Automation means different things 
to different people. To many engineers the promise of "automated testing" means that they can 
press a button and either get a "green check" indicating that their code is correct, or 
a "red x" indicating failure. 
Unfortunately this tool does not exist. More importantly, if the tool did exist, would you want to 
use it? Think about it. What would it mean for a tool to tell you that your code is "Ok"? 
Would it mean the code is formatted nicely? Maybe. Would it mean that it conforms to your coding 
standards? Maybe. Would it mean that your code is correct? Emphatically No! 
Completely automated testing is not attainable, nor is it desirable. Automation should address
those parts of the testing process that are algorithmic in nature and labor intensive. This frees 
the software engineer to do higher-value testing work such as designing better and more 
complete tests. 
The logical question to ask when looking at tools is: "How much automation does this 
tool provide?" This is the large gray area, and the primary area of uncertainty, 
when a company attempts to calculate an ROI for a tool investment. 
Anatomy of Test Tools 
Test tools generally provide a variety of functionality. The names vendors use vary between 
tools, and some functionality may be missing from some tools. For a common frame of reference, 
we have chosen the following names for the "modules" that could exist in the test tools you are 
evaluating: 
Parser: The parser module allows the tool to understand your code. It reads the code and creates 
an intermediate representation of it (usually in a tree structure), essentially the same thing 
a compiler does. The output, or "parse data", is generally saved in an intermediate language (IL) 
file. 
CodeGen: The code generator module uses the "parse data" to construct the test harness source 
code. 
Test Harness: While the test harness is not strictly part of the tool, the decisions made in 
the test harness architecture affect other features of the tool. So the harness architecture is very 
important when evaluating a tool. 
Compiler: The compiler module allows the test tool to invoke the compiler to compile and link test 
harness components. 
Target: The target module allows tests to be easily run in a variety of runtime environments, including 
support for emulators, simulators, embedded debuggers, and commercial RTOSs. 
Test Editor: The test editor allows the user to use either a scripting language or a sophisticated 
graphical user interface (GUI) to set up preconditions and expected values (pass/fail criteria) for test 
cases. 
Coverage: The coverage module allows the user to get reports on which parts of the code are 
executed by each test. 
Reporting: The reporting module allows the various captured data to be compiled into project 
documentation. 
CLI: A command line interface (CLI) allows further automation of the use of the tool, allowing the 
tool to be invoked from scripts, make, etc. 
Regression: The regression module allows tests that are created against one version of the 
application to be re-run against new versions. 
Integrations: Integrations with third-party tools can be an interesting way to leverage your 
investment in a test tool. Common integrations are with configuration management, requirements
management tools, and static analysis tools. 
Later sections will elaborate on how you should evaluate each of these modules in your candidate tools. 
Classes of Test Tools / Levels of Automation 
Since not all tools include all of the functionality or modules described above, and because there is 
a broad difference between tools in the level of automation provided, we have created the 
following broad classes of test tools. Candidate test tools will fall into one of these categories. 
"Manual" tools generally create a clear chair framework to the test harness, and require you to 
hand-code the exam data and logic needed to implement the exam cases. Often, they will give you a 
scripting language and/or a set of library functions that might be used to do common things like test 
assertions or create formatted reports for test documentation. 
"Semi-Automated" tools may put a graphical interface on some Automated functionality furnished by 
a "manual" tool, but will still require hand-coding and/or scripting in-order to test more complex 
constructs. Additionally, a "semi-automated" tool might be missing some of the modules an 
"automated" tool has. Built in support for target deployment as an example. 
"Automated" tools will address each in the functional areas or modules listed inside previous section. 
Tools with this class is not going to require manual hand coding and will support all language 
constructs as well a variety of target deployments. 
Subtle Tool Differences 
In addition to comparing tool features and automation levels, it is also important to evaluate and 
compare the test approach used. Subtle differences in approach may hide latent defects in the tool, 
so it is important not just to load your code into the tool, but also to try to build some simple 
test cases for each method in the class you are testing. Does the tool build a complete test harness? 
Are all stubs created automatically? Can you use the GUI to define parameters and global data for 
the test cases, or are you required to write code as you would if you were testing manually? 
In a similar way, target support differs between tools. Be wary if a vendor says: "We support 
all compilers and all targets out of the box". These are code words for: "You do all of 
the work to make our tool work in your environment". 
How to Evaluate Test Tools 
The following few sections will describe, in more detail, information that you should investigate 
during the evaluation of a software testing tool. Ideally you should confirm this information with 
hands-on testing of each tool being considered. 
Since the rest of this paper is rather technical, we would like to explain a few of the conventions 
used. Each section has a title that describes an issue to be considered, an explanation of 
why the issue is important, and a "Key Points" section that summarizes concrete items to be considered. 
Also, while we are talking about conventions, we should make note of terminology. The term 
"function" refers to either a C function or a C++ class method, and "unit" refers to a C file 
or a C++ class. Finally, please remember that virtually every tool can somehow support the items 
mentioned in the "Key Points" sections; your task is to evaluate how automated, easy to use,
and complete the support is. 
Parser and Code Generator 
It is comparatively easy to build a parser for C; however, it is very difficult to build a 
complete parser for C++. One of the questions to be answered during tool evaluation should be: 
"How robust and mature is the parser technology?" Some tool vendors use commercial parser 
technology that they license from parser technology companies, and some have homegrown parsers 
that they have built themselves. The robustness of the parser and code generator can be verified 
by evaluating the tool with complex code constructs that are representative of the code 
to be used for your project, such as the sketch below. 
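For illustration, here is the kind of construct that tends to stress a parser. This is a 
hypothetical snippet (all names are invented), not taken from any particular test suite: 

// Parser stress test: templates, operator overloading, default
// arguments, and explicit instantiation in one unit. A robust C++
// parser must build a correct intermediate representation for all of
// these before any harness code can be generated.
#include <cstddef>

namespace sensors {

template <typename T, std::size_t N>
class RingBuffer {
public:
    RingBuffer() : head_(0) {}
    // Operator overload: the parser must resolve this correctly.
    T& operator[](std::size_t i) { return data_[(head_ + i) % N]; }
    // Default argument: harness generation must preserve it.
    void push(const T& value, bool advance = true) {
        data_[head_] = value;
        if (advance) head_ = (head_ + 1) % N;
    }
private:
    T data_[N];
    std::size_t head_;
};

} // namespace sensors

// Explicit instantiation: the code generator must emit a harness for it.
template class sensors::RingBuffer<float, 16>;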
Key Points: 
- Is the parser technology commercial or homegrown? 
- What languages are supported? 
- Are the tool versions for C and C++ the same tool or different tools? 
- Is the complete C++ language implemented, or are there restrictions? 
- Does the tool handle our most complicated code? 
The Test Driver 
The Test Driver is the "main program" that controls the test. Here is a simple example of 
a driver that will test the sine function from the standard C library: 
#include <math.h> 
#include <stdio.h> 

int main () 
{ 
    float local; 
    local = sin (90.0); 
    if (local == 1.0) printf ("My Test Passed!\n"); 
    else printf ("My Test Failed!\n"); 
    return 0; 
} 
Although this is a pretty simple example, a "manual" tool might require you to type (and debug) this 
little snippet of code by hand; a "semi-automated" tool might provide some sort of scripting 
language or simple GUI to enter the stimulus value for sine. An "automated" tool would have a full-featured 
GUI for building test cases, integrated code coverage analysis, an integrated debugger, and a
built-in target deployment. 
I wonder if you noticed that this driver has a bug. The bug is that the sin function actually 
uses radians, not degrees, for the input angle. A corrected sketch is shown below. 
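For illustration, here is one way the corrected driver might look. This sketch assumes the POSIX 
M_PI constant from math.h is available; it also replaces the exact floating point comparison with 
a small tolerance, since sin() returns an approximation: 

#include <math.h> 
#include <stdio.h> 

int main () 
{ 
    /* sin() expects radians, so pass pi/2 rather than 90.0 */ 
    double local = sin (M_PI / 2.0); 
    /* compare with a tolerance; exact equality on floats is fragile */ 
    if (fabs (local - 1.0) < 1e-9) printf ("My Test Passed!\n"); 
    else printf ("My Test Failed!\n"); 
    return 0; 
} 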
Key Points 
- Is the driver automatically generated or do I write the code? 
- Can I test the following without writing any code: 
- Testing over a range of values 
- Combinatorial Testing 
- Data Partition Testing (Equivalence Sets) 
- Lists of input values 
- Lists of expected values 
- Exceptions as expected values 
- Signal handling 
- Can I create a sequence of calls to several methods within the same test? 
Stubbing Dependent Functions 
Building replacements for dependent functions is essential when you want to control the values 
which a dependent function returns within a test. Stubbing is often a really important section of 
integration and unit testing, because it allows one to isolate the code under test from other elements 
of your application, plus much more easily stimulate the execution with the unit or sub-system of 
curiosity. 
Many tools require the manual generation of the exam code to make a stub do anything more than 
return a static scalar value (return 0;) 
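As a hypothetical illustration (the function name read_sensor is invented), this is the kind of 
bookkeeping an automated tool might generate for a stub: it records the call count and the input 
parameters, and returns a different value on each call: 

/* Hand-written stand-in for what a tool would generate. */
static int stub_call_count = 0;               /* times the stub ran       */
static int stub_inputs[16];                   /* inputs captured per call */
static int stub_returns[] = { 10, 20, 30 };   /* per-call return values   */

int read_sensor (int channel)
{
    if (stub_call_count < 16)
        stub_inputs[stub_call_count] = channel;    /* track parameters */
    int value = stub_returns[stub_call_count % 3]; /* vary the return  */
    stub_call_count++;
    return value;
}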
Key Points 
- Are stubs automatically generated, or do you write code for them? 
- Are complex outputs supported automatically (structures, classes)? 
- Can each call to the stub return a different value? 
- Does the stub keep track of how many times it was called? 
- Does the stub keep track of the input parameters over multiple calls? 
- Can you stub calls to the standard C library functions like malloc?
Test Data 
There are two basic approaches that "semi-automated" and "automated" tools use to implement test 
cases. One is really a "data-driven" architecture, along with the other is a "single-test" architecture. 
For a data-driven architecture, the exam harness is 
done for all with the units under test and supports all in 
the functions defined in those units. When the test is to 
be run, the tool simply supplies the stimulus data across 
a data stream like a file handle or possibly a physical 
interface being a UART. 
For a "single-test" architecture, every time a test 
operates, the tool will build the exam driver with the 
test, and compile and link it into an executable. A couple 
of points on this; first, every one of the extra code generation required through the single-test 
method, and compiling and linking will take more time at test execution time; second, you 
http://softwaretestingfundamentals.com/unit-testing/ wind up building a separate test harness for 
each and every test case. 
This means that a candidate tool might appear to dedicate yourself some nominal cases but probably 
won't work correctly for more advanced tests. 
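To make the distinction concrete, here is a minimal, hypothetical data-driven harness: the test 
cases live in a table and one compiled driver iterates over them. (A real tool would read the 
table from a file or stream rather than compiling it in; M_PI is assumed available from math.h.) 

#include <math.h>
#include <stdio.h>

/* One row of stimulus and expected data per test case. */
typedef struct {
    double input;      /* stimulus passed to the function under test */
    double expected;   /* expected return value                      */
    double tolerance;  /* allowed floating point error               */
} TestCase;

static const TestCase cases[] = {
    { 0.0,        0.0, 1e-9 },
    { M_PI / 2.0, 1.0, 1e-9 },
    { M_PI,       0.0, 1e-9 },
};

int main (void)
{
    int failures = 0;
    for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        double actual = sin (cases[i].input);
        if (fabs (actual - cases[i].expected) > cases[i].tolerance) {
            printf ("case %u FAILED: got %f\n", i, actual);
            failures++;
        }
    }
    printf ("%d failure(s)\n", failures);
    return failures;
}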
Key Points 
- Is the test harness data driven? 
- How long does it take to run a test case (including any code generation and compiling time)? 
- Can the test cases be edited outside of the test tool IDE? 
- If not, have I played with the tool enough on complex code examples to understand any 
limitations? 
Automated Generation of Test Data 
Some "automated" tools give you a degree of automated test case creation. Different approaches are 
used to do this. The following paragraphs describe some approaches: 
Min-Mid-Max (MMM) test cases stress a function at the bounds of the input data types. C 
and C++ code often does not protect itself against out-of-bound inputs. The engineer has some 
functional range in mind, and often does not protect the code against out-of-range 
inputs. 
Equivalence Classes (EC) tests create "partitions" for each data type and select a sample of values 
from each partition. The assumption is that values from the same partition will stimulate the 
application in a similar way. 
Random Values (RV) tests set combinations of random values for each of the parameters of a
function. 
Basic Paths (BP) tests use basis path analysis to examine the unique paths that exist through a 
procedure. BP tests can automatically achieve a high level of branch coverage. The sketch below 
shows what MMM-style generated stimulus might look like. 
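As a concrete, hypothetical illustration (the function name scale is invented), MMM generation 
for a signed 16-bit parameter might emit stimulus values like these: 

#include <stdint.h>

/* Function under test (invented for illustration). */
int16_t scale (int16_t raw);

/* Min-Mid-Max stimulus an MMM generator might emit for int16_t:
   the type's bounds, zero, and the midpoint of each half-range. */
static const int16_t mmm_values[] = {
    INT16_MIN,       /* -32768: minimum of the type */
    INT16_MIN / 2,   /* -16384: mid of lower half   */
    0,               /*      0: middle              */
    INT16_MAX / 2,   /*  16383: mid of upper half   */
    INT16_MAX        /*  32767: maximum of the type */
};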
The key thing to keep in mind when considering automatic test case construction is the 
purpose it serves. Automated tests are good for testing the robustness of the application code, 
but not its correctness. For correctness, you must create tests that are based on what the 
application is supposed to do, not what it actually does. 
Compiler Integration 
The point of the compiler integration is two-fold. One point is to allow the test harness 
components to be compiled and linked automatically, without the user having to figure out the 
compiler options needed. The other point is to allow the test tool to honor any language extensions 
that are unique to the compiler being used. Especially with cross-compilers, it is quite 
common for the compiler to provide extensions that are not part of the C/C++ language 
standards. Some tools use the approach of #defining these extensions to null strings. This very crude 
approach is particularly bad because it changes the object code that the compiler produces. For 
example, consider the following global extern with a GCC attribute: 
extern int MyGlobal __attribute__ ((aligned (16))); 
If your candidate tool does not maintain the attribute when defining the global object MyGlobal, then 
the code will behave differently during testing than it will when deployed, because the memory 
will not be aligned the same way. A sketch of the crude approach appears below. 
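For illustration, the crude extension-stripping approach amounts to something like this (a 
hypothetical sketch; real tools differ in mechanism): 

/* Extension-stripping a tool might inject before parsing:          */
#define __attribute__(x)  /* expands the GCC attribute to nothing   */

/* After preprocessing, the declaration above degrades to:          */
/*     extern int MyGlobal;                                         */
/* The aligned(16) requirement is lost, so the harness build may    */
/* place MyGlobal at a different alignment than the production      */
/* build, and the code under test behaves differently.              */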
Key Points 
- Does the tool automatically compile and link the test harness? 
- Does the tool honor and implement compiler-specific language extensions? 
- What kind of interface is there to the compiler (IDE, CLI, etc.)? 
- Does the tool have an interface to import project settings from your development environment, or must 
they be imported manually? 
- If the tool does import project settings, is the import feature general purpose or limited to specific 
compilers or compiler families? 
- Is the tool integrated with your debugger to allow you to debug tests? 
Support for Testing on an Embedded Target 
In this section we will use the term "Tool Chain" to refer to the total cross-development environment, 
including the cross-compiler, debug interface (emulator), target board, and Real-Time Operating 
System (RTOS). It is important to consider whether your candidate tools have robust target 
integrations for your tool chain, and to understand what in the tool must change if you 
migrate to a different tool chain.
Additionally, it is important to understand the automation level and robustness of the target 
integration. As mentioned earlier, if a vendor says: "we support all compilers and all 
targets out of the box," they mean: "You do all the work to make our tool work in your 
environment." 
Ideally, the tool that you select will allow for "push button" test execution, where all of the 
complexity of downloading to the target and capturing the test results back on the host is 
abstracted into the "Test Execution" feature so that no special user actions are required. 
An additional complication with embedded target testing is hardware availability. Often, the 
hardware is being developed in parallel with the software, or there is limited hardware 
availability. A key feature is the ability to start testing in a native environment and later 
transition to the actual hardware. Ideally, the tool artifacts are hardware independent. 
Key Points 
- Is my tool chain supported? If not, can it be supported? What does "supported" mean? 
- Can I build tests on a host system and later use them for target testing? 
- How does the test harness get downloaded to the target? 
- How are the test results captured back on the host? 
- What targets, cross compilers, and RTOSs are supported off-the-shelf? 
- Who builds the support for a new tool chain? 
- Is any part of the tool chain integration user configurable? 
Test Case Editor 
Obviously, the test case editor is where you will spend most of your interactive time with a test 
tool. If there is true automation of the previous items mentioned in this paper, then 
the amount of time attributable to setting up the test environment and the target connection should be 
minimal. Remember what we said at the start: you want to use the engineer's time to design 
better and more complete tests. 
The key factor to evaluate is how hard it is to set up test inputs and expected values 
for non-trivial constructs. All tools in this market provide some easy way to set up scalar values. 
For example, does your candidate tool provide a simple and intuitive way to construct a class? How about 
an easy way to set up an STL container, such as a vector or a map? These are the 
things to evaluate in the test case editor. 
As with the rest of this paper, there is "support" and then there is "automated support". 
Take this into account when evaluating constructs that may be of interest to you. 
Key Points 
- Are allowed ranges for scalar values shown? 
- Are array sizes shown? 
- Is it easy to set Min and Max values with tags rather than values? This is important to 
maintain the integrity of the test if a type changes. 
- Are special floating point numbers supported (e.g. NaN, +/- Infinity)? 
- Can you do combinatorial tests (vary 5 parameters over a range and have the tool do all 
combinations of those values)? 
- Is the editor "base aware" so that you can easily enter values in alternate bases like hex, octal, 
and binary? 
- For expected results, can you easily enter absolute tolerances (e.g. +/- 0.05) and relative 
tolerances (e.g. +/- 1%) for floating point values? (A sketch of such a comparison follows this 
list.) 
- Can test data be imported from other sources like Excel? 
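To illustrate the tolerance key point, a harness comparison routine might look like this minimal 
sketch (the function name is invented; real tools generate their own equivalents): 

#include <math.h>

/* Pass/fail check with absolute and relative tolerance, as a test
   editor might generate for a floating point expected value. */
static int within_tolerance (double actual, double expected,
                             double abs_tol, double rel_tol)
{
    double diff = fabs (actual - expected);
    if (diff <= abs_tol)                       /* e.g. +/- 0.05       */
        return 1;
    return diff <= fabs (expected) * rel_tol;  /* e.g. +/- 1% => 0.01 */
}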
Code Coverage 
Most "semi-automated" tools and all sorts of "automated" tools have some code coverage facility 
integrated that allows that you see metrics which show the portion with the application that is 
certainly executed from your test cases. Some tools present these details in table form. Some show 
flow graphs, and a few show annotated source listings. While tables are good as a summary, if you 
happen to be trying to realize 100% code coverage, an annotated source listing is the best. Such a 
listing can have the original source code file with colorations for covered, partially covered, and 
uncovered constructs. This allows you to easily begin to see the additional test cases that are needed 
to succeed in 100% coverage. 
It is important to understand the impact of the additional instrumentation on your application. 
There are two considerations: one is the increase in size of the object code, and the other is the run-time 
overhead. It is important to understand whether your application is memory-limited or real-time-limited 
(or both). This will help you focus on which item is most important for your application. 
A sketch of what instrumentation does to the code appears below. 
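As a rough illustration (the probe API is invented; every tool has its own runtime), statement 
instrumentation transforms the code under test along these lines: 

/* Original code:                                                  */
/*     if (x > 0) *y = x; else *y = -x;                            */

/* Hypothetical instrumented form: each probe call records that a  */
/* statement executed, costing both code space and run time.       */
extern void cov_probe (int id);   /* invented coverage runtime hook */

void abs_assign (int x, int *y)
{
    cov_probe (1);
    if (x > 0) { cov_probe (2); *y = x; }
    else       { cov_probe (3); *y = -x; }
}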
Key Points 
- What is the code size increase for each type of instrumentation? 
- What is the run-time increase for each type of instrumentation? 
- Can instrumentation be integrated into your "make" or "build" system? 
- How are the coverage results presented to the user? Are there annotated listings with a 
graphical coverage browser, or just tables of metrics? 
- How is the coverage information retrieved from the target? Is the process flexible? Can 
data be buffered in RAM? 
- Are statement, branch (or decision) and MC/DC coverage supported?
- Can multiple coverage types be captured in a single execution? 
- Can coverage data be shared across multiple test environments (e.g. can some coverage be 
captured during system testing and be combined with the coverage from unit and integration 
testing)? 
- Can you step through the test execution using the coverage data to understand the flow of control 
through the application without using a debugger? 
- Can you get aggregate coverage for all test runs in a single report? 
- Can the tool be qualified for DO-178B and for Medical Device intended use? 
Regression Testing 
There should be two basic goals for adopting a test tool. The primary goal is to save time 
testing. If you've read this far, we assume that you agree with that! The secondary goal is 
to allow the created tests to be leveraged over the life cycle of the application. This means 
that the time and money invested in building tests should result in tests that are re-usable as the 
application changes over time and are simple to configuration-manage. The major thing to evaluate 
in your candidate tool is what specific things need to be "saved" in order to run the 
same tests in the future, and how the re-running of tests is controlled. 
Key Points 
> What file or files need to be configuration managed to regression test? 
> Does the tool have a complete and documented Command Line Interface (CLI)? 
> Are these files plain text or binary? This affects your ability to use a diff utility to evaluate 
changes over time. 
> Do the harness files generated by the tool have to be configuration managed? 
> Is there integration with configuration management tools? 
> Create a test for a unit, now change the name of a parameter, and re-build your test 
environment. How long does this take? Is it complicated? 
> Does the tool support database technology and statistical graphs to allow trend analysis of test 
execution and code coverage over time? 
> Can you test multiple baselines of code with the same set of test cases automatically? 
> Is distributed testing supported to allow portions of the tests to be run on different physical 
machines to speed up testing? 
Reporting 
Most tools will provide similar reporting. Minimally, they should create an easy-to-understand report 
showing the inputs, expected outputs, actual outputs, and a comparison of the expected and actual
values. 
Key Points 
> What output formats are supported? HTML? Text? CSV? XML? 
> Is it simple to get both a high-level (project-wide) report as well as a detailed report for one 
particular function? 
> Is the report content user configurable? 
> Is the report format user configurable? 
Integration with Other Tools 
Regardless of the quality or usefulness of any particular tool, all tools need to operate in 
a multi-vendor environment. A lot of time and money has been spent by big companies buying 
little companies with the idea of offering "the tool" that will do everything for everybody. The 
interesting thing is that most often, with these mega tool suites, the whole is much less than the 
sum of the parts. It seems that companies often take 4-5 pretty cool small tools and integrate them 
into one bulky and unusable tool. 
Key Points 
> Which tools does your candidate tool integrate with out-of-the-box, and can the end-user add 
integrations? 
Additional Desirable Features for a Testing Tool 
The previous sections all describe functionality that should be present in any tool that is 
considered an automated test tool. In the next few sections we will list some desirable features, 
along with a rationale for the importance of each feature. These features may have varying levels 
of applicability to your particular project. 
True Integration Testing / Multiple Units Under Test 
Integration testing is an extension of unit testing. It is used to check the interfaces between units and 
requires you to combine the units that make up some functional process. Many tools claim to 
support integration testing by linking the object code for real units with the test harness. 
This method builds multiple files into the test harness executable but provides no ability to 
stimulate the functions within these additional units. Ideally, you should be able to stimulate any function 
within any unit, in any order, within a single test case. Testing the interfaces between units will 
generally uncover a lot of hidden assumptions and bugs in the application. In fact, integration 
testing may be a good first step for projects that have no history of unit testing. A sketch of a 
multi-unit test scenario appears below. 
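As a hypothetical example (the unit and function names are invented), a true integration test 
should let one test case drive a sequence of calls across two units and check the combined result: 

/* unit_a: producer side of an invented interface */
void queue_put (int value);
/* unit_b: consumer side that reads unit_a's queue */
int  process_next (void);

/* One integration test case stimulating both units in sequence:
   push a value through unit A, then verify unit B processes it. */
int test_put_then_process (void)
{
    queue_put (42);
    return process_next () == 42;  /* pass if the interface agrees */
}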
Key Points 
> Can I include multiple units in the test environment? 
> Can I create complex test scenarios for these classes, where we stimulate a sequence of functions 
across multiple units within one test case?
> Can I capture code coverage metrics for multiple units? 
Dynamic Stubbing 
Dynamic stubbing means that you can turn individual function stubs on and off dynamically. This 
allows you to create a test for a single function with all other functions stubbed (even if 
they exist in the same unit as the function under test). For very complicated code, this is a 
great feature, and it makes testing much easier to implement. One possible mechanism is sketched below. 
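One way a harness could implement per-test stub switching is with a function pointer indirection. 
This is a minimal sketch with invented names, not any particular tool's mechanism: 

/* Real implementation, compiled from the unit under test. */
int get_reading_real (void);
/* Stub the harness can substitute for individual test cases. */
static int get_reading_stub (void) { return 100; }

/* The harness routes calls through a pointer it can flip per test. */
static int (*get_reading) (void) = get_reading_real;

void enable_stub (void)  { get_reading = get_reading_stub; }
void disable_stub (void) { get_reading = get_reading_real; }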
Key Points 
> Can stubs be chosen at the function level, or only at the unit level? 
> Can function stubs be turned on and off per test case? 
> Are the function stubs automatically generated (see the key points of the previous section)? 
Library and Application Level Thread Testing (System Testing) 
One of the challenges of system testing is that the test stimulus provided to the fully 
integrated application may require a user pushing buttons, flipping switches, or typing at a 
console. If the application is embedded, the inputs can be even more complicated to control. 
Suppose you could stimulate your fully integrated application at the function level, similar to how 
integration testing is done. This would allow you to build complex test scenarios that rely only 
on the API of the application. 
Some of the more modern tools allow you to test in this way. An additional benefit of this mode of 
testing is that you do not need the source code to test the application. You simply 
need the definition of the API (generally the header files). This methodology allows testers an 
automated and scriptable way to perform system testing. 
Agile Testing and Test Driven Development (TDD) 
Test Driven Development promises to bring testing into the development process earlier than ever before. 
Instead of writing application code first and then writing unit tests as an afterthought, you build your tests 
before the application code. This is a popular new approach to development, and it enforces a test-first 
and test-often approach. Your automated tool should support this method of testing if 
you plan to use an Agile Development methodology. 
Bi-directional Integration with Requirements Tools 
If you care about associating requirements with test cases, then it is desirable for a test tool to 
integrate with a requirements management tool. If you are interested in this 
feature, it is important that the interface be bi-directional, so that when requirements are tagged to 
test cases, the test case information, including test name and pass/fail status, can be pushed back 
to your requirements database. This will allow you to get a sense of the completeness of your 
requirements testing. 
Tool Qualification 
If you are operating in a regulated environment such as commercial aviation or Class III
medical devices, then you may be obligated to "qualify" the development tools used to build and test your 
application. 
The qualification involves documenting what the tool is supposed to do and running tests that 
prove the tool operates in accordance with those requirements. Ideally a vendor will have these 
materials off-the-shelf and a history of customers that have used the qualification data in your 
industry. 
Key Points 
> Does the tool vendor offer qualification materials that are produced for your exact target 
environment and tool chain? 
> What projects have successfully used these materials? 
> How are the materials licensed? 
> How are the materials customized and approved for a particular project? 
> If this is an FAA project, have the qualification materials been successfully used to 
certify to DO-178B Level A? 
> If it is an FDA project, have the tools been qualified for "intended use"? 
Conclusion 
Hopefully this paper provides useful information that helps you navigate the offerings of test tool 
vendors. The relative importance of each of the items raised will be different for different 
projects. Our final suggestions are: 
> Evaluate the candidate tools on code that is representative of the complexity of the code in 
your application 
> Evaluate the candidate tools with the same tool chain that will be used for your project 
> Talk to long-term customers of the vendor and ask them some of the questions raised 
in this paper 
> Ask about the tool's technical support team. Try them out by submitting some questions directly to 
their support (rather than to their sales representative) 
Finally, remember that almost every tool can somehow support the items mentioned in the "Key 
Points" sections. Your job is to evaluate how automated, easy to use, and complete the 
support is. 
About Vector Software 
Vector Software, Inc., is the leading independent provider of automated software testing tools 
for developers of safety-critical embedded applications. Vector Software's VectorCAST line 
of products automates and manages the complex tasks associated with unit, integration, and 
system level testing. VectorCAST products support the C, C++, and Ada programming languages.
test

More Related Content

PPT
Ensuring code quality
PDF
Testing parallel programs
PPT
QTP/UFT latest interview questions 2014
PPT
Automated Testing vs Manual Testing
PDF
FaSaT An Interoperable Test Automation Solution
PDF
YuryMakedonov_GUI_TestAutomation_QAI_Canada_2007_14h
PDF
Regular use of static code analysis in team development
Ensuring code quality
Testing parallel programs
QTP/UFT latest interview questions 2014
Automated Testing vs Manual Testing
FaSaT An Interoperable Test Automation Solution
YuryMakedonov_GUI_TestAutomation_QAI_Canada_2007_14h
Regular use of static code analysis in team development

What's hot (19)

PDF
Regular use of static code analysis in team development
PDF
Regular use of static code analysis in team development
PPT
Testing Options in Java
PDF
How we test the code analyzer
PPTX
Practical Software Testing Tools
DOCX
summary
PPTX
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.
PDF
PVS-Studio advertisement - static analysis of C/C++ code
DOC
Getting started with test complete 7
PPT
Automation testing material by Durgasoft,hyderabad
PPT
Software coding & testing, software engineering
PPTX
Software testing (2)
PPTX
White box & black box testing
PPSX
Test Complete
PDF
Front Cover:
PDF
Verification Challenges and Methodologies
PDF
Automated testing-whitepaper
PDF
Advanced Rational Performance Tester reports
PPTX
Unit testing with visual studio 2012
Regular use of static code analysis in team development
Regular use of static code analysis in team development
Testing Options in Java
How we test the code analyzer
Practical Software Testing Tools
summary
Model-Based Testing: Theory and Practice. Keynote @ MoTiP (ISSRE) 2012.
PVS-Studio advertisement - static analysis of C/C++ code
Getting started with test complete 7
Automation testing material by Durgasoft,hyderabad
Software coding & testing, software engineering
Software testing (2)
White box & black box testing
Test Complete
Front Cover:
Verification Challenges and Methodologies
Automated testing-whitepaper
Advanced Rational Performance Tester reports
Unit testing with visual studio 2012
Ad

Similar to test (20)

PPT
Testing fundamentals
PDF
Getting Started With QA Automation
PPT
Sd Revision
PPT
Susan windsor soft test 16th november 2005
PDF
Test automation: Are Enterprises ready to bite the bullet?
PDF
An ideal static analyzer, or why ideals are unachievable
PDF
0136 ideal static_analyzer
PDF
Different Methodologies For Testing Web Application Testing
PDF
Getting started with_testcomplete
PDF
A Complete Guide to Codeless Testing.pdf
PPT
Automation testing
PPT
Learn software testing with tech partnerz 3
PPTX
Mule testing
PDF
Qualidade de Software em zOS usando IBM Debug Tool e RDz
PDF
Software design.edited (1)
PPTX
Top 5 Code Coverage Tools in DevOps
PDF
Software Development Standard Operating Procedure
PDF
What's the Difference Between Static Analysis and Compiler Warnings?
PDF
Scriptless Test Automation_ A Complete Guide.pdf
Testing fundamentals
Getting Started With QA Automation
Sd Revision
Susan windsor soft test 16th november 2005
Test automation: Are Enterprises ready to bite the bullet?
An ideal static analyzer, or why ideals are unachievable
0136 ideal static_analyzer
Different Methodologies For Testing Web Application Testing
Getting started with_testcomplete
A Complete Guide to Codeless Testing.pdf
Automation testing
Learn software testing with tech partnerz 3
Mule testing
Qualidade de Software em zOS usando IBM Debug Tool e RDz
Software design.edited (1)
Top 5 Code Coverage Tools in DevOps
Software Development Standard Operating Procedure
What's the Difference Between Static Analysis and Compiler Warnings?
Scriptless Test Automation_ A Complete Guide.pdf
Ad

test

  • 1. test You Can't Evaluate a Test Tool by Reading a Data Sheet All data sheets look virtually alike. The buzzwords are identical: "Industry Leader", "Unique Technology", "Automated Testing", and "Advanced Techniques". The screen shots are similar: "Bar Charts", "Flow Charts", "HTML reports" and "Status percentages". It is mind numbing. What is Software Testing? All of us who have done software testing recognize that testing will come in many flavors. For simplicity, we'll use three terms in this paper: System Testing Integration Testing Unit Testing Everyone does some volume of system testing where they are doing some in the same things by it that the clients will do with it. Notice that we said "some" and not "all." One in the most common reasons behind applications being fielded with bugs is the fact that unexpected, and therefore untested, combinations of inputs are encountered with the application while in the field. Not as many folks do integration testing, and in many cases fewer do unit testing. If you have done integration or unit testing, you are probably painfully aware of the level of test code that has got to be generated to isolate one particular file or group of files from your rest with the application. At the most stringent amounts of testing, it's not at all uncommon for the volume of test code written being larger than the quantity of application code being tested. As a result, these amounts of testing this are likely to be applied to mission and safety critical applications in markets including aviation, medical device, and railway. What Does "Automated Testing" Mean? It is well known that the process of unit and integration testing manually is very expensive and time-consuming; therefore every tool that is certainly being sold into forex will trumpet "Automated Testing" as their benefit. But what exactly is "automated testing"? Automation means various things to different people. To many engineers the promise of "automated testing" signifies that they can press control button and they'll either obtain a "green check" indicating that their code is correct, or a "red x" indicating failure. Unfortunately this tool doesn't exist. More importantly, if the tool did exist, would you want to work with it? Think about it. What would it mean for the tool to see you that your particular code is "Ok"? Would it mean the code is formatted nicely? Maybe. Would it imply that it conforms for your coding standards? Maybe. Would it mean that your code is correct? Emphatically No! Completely automated testing is not attainable nor would it be desirable. Automation should address
  • 2. those elements of the testing procedure that are algorithmic naturally and labor intensive. This frees the software engineer to accomplish higher value testing work like designing better plus much more complete tests. The logical question being asked when looking at tools is: "How much automation creates this change tool provide?" This could be the large gray area as well as the primary area of uncertainty when a company attempts to calculate an ROI for tool investment. Anatomy of Test Tools Test Tools generally give a variety of functionality. The names vendors use changes for different tools, and several functionality might be missing from some tools. For a common frame of reference, we've got chosen the next names for that "modules" that could exist in the test tools you're evaluating: Parser: The parser module allows the tool to comprehend your code. It reads the code, and helps to create an intermediate representation to the code (usually in a tree structure). Basically the identical to the compiler does. The output, or "parse data" is generally saved in an intermediate language (IL) file. CodeGen: The code generator module uses the "parse data" to construct quality harness source code. Test Harness: While the test harness is just not specifically section of the tool; the decisions made in the exam harness architecture affect other features with the tool. So the harness architecture is very important when evaluating a tool. Compiler: The compiler module allows the exam tool to invoke the compiler to compile and link test harness components. Target: The target module allows tests to be easily run in a various runtime environments including support for emulators, simulators, embedded debuggers, and commercial RTOS. Test Editor: The test editor allows the person to use the scripting language or a sophisticated graphical user interface (GUI) to create preconditions and expected values (pass/fail criteria) for test cases. Coverage: The coverage module allows an individual to get reports on what elements of the code are executed by each test. Reporting: The reporting module allows the different captured data to get compiled into project documentation. CLI: A command line interface (CLI) allows further automation in the use of the tool, allowing the tool to be invoked from scripts, make, etc. Regression: The regression module allows tests which are created against one version in the application being re-run against new versions. Integrations: Integrations with third-party tools can be an interesting strategy to leverage your investment in a test tool. Common integrations are with configuration management, requirements
  • 3. management tools, and static analysis tools. Later sections will elaborate how you should evaluate these modules inside your candidate tools. Classes of Test Tools / Levels of Automation Since all tools do not include all functionality or modules described above as well as because there is a broad difference between tools inside the level of automation provided, we've got created the subsequent broad classes of test tools. Candidate test tools will get into one of those categories. "Manual" tools generally create a clear chair framework to the test harness, and require you to hand-code the exam data and logic needed to implement the exam cases. Often, they will give you a scripting language and/or a set of library functions that might be used to do common things like test assertions or create formatted reports for test documentation. "Semi-Automated" tools may put a graphical interface on some Automated functionality furnished by a "manual" tool, but will still require hand-coding and/or scripting in-order to test more complex constructs. Additionally, a "semi-automated" tool might be missing some of the modules an "automated" tool has. Built in support for target deployment as an example. "Automated" tools will address each in the functional areas or modules listed inside previous section. Tools with this class is not going to require manual hand coding and will support all language constructs as well a variety of target deployments. Subtle Tool Differences In addition to comparing tool features and automation levels, it is usually important to guage and compare the test approach used. This may hide latent defects inside the tool, so it is crucial that you not just load your code in to the tool, but to also try to build some simple test cases for each and every method within the class that you might be testing. Does the tool build a complete test harness? Are all stubs created automatically? Can you utilize GUI to define parameters and global data for that test cases or are you required to write code because you would had you been testing manually? In an identical way target support differs between tools. Be wary in case a vendor says: "We support all compilers and all sorts of targets out of the box". These are code words for: "You do every one of the work to make our tool work within your environment". How to Evaluate Test Tools The following few sections will describe, in more detail, information that you must investigate through the evaluation of the software testing tool. Ideally you should confirm these records with hands-on testing of every tool being considered. Since the entire content of this paper is rather technical, we wish to explain a few of the conventions used. For each section, we have a title that describes an issue to get considered, an explanation of why the thing is important, and a "Key Points" section in summary concrete items being considered. Also, while we are talking about conventions, we have to also make note of terminology. The term "function" refers to sometimes a C function or possibly a C++ class method, "unit" refers to a C file or a C++ class. Finally, please remember, virtually every tool can somehow secure the items mentioned inside the "Key Points" sections, your task is to gauge how automated, easy to work with,
  • 4. and finish the support is. Parser and Code Generator It is comparatively easy to create a parser for C; however it is incredibly difficult to create a complete parser for C++. One with the questions being answered during tool evaluation should be: "How robust and mature will be the parser technology"? Some tool vendors use commercial parser technology that they license from parser technology companies and a few have homegrown parsers that they have built themselves. The robustness of the parser and code generator could be verified by evaluating the tool with complex code constructs which might be representative with the code being used for your project. Key Points: - Is the parser technology commercial or homegrown? - What languages are supported? - Are tool versions for C and C++ exactly the same tool or different? - Is the complete C++ language implemented, or are their restrictions? - Does the tool help our most complicated code? The Test Driver The Test Driver may be the "main program" that controls test. Here is often a simple example of your driver that will test the sine function through the standard C library: #include #include int main () float local; local = sin (90.0); if (local == 1.0) printf ("My Test Passed!n"); else printf ("My Test Failed!n"); return 0; Although it is a pretty simple example, a "manual" tool might ask you to type (and debug) this little snippet of code by hand, a "semi-automated" tool might present you with some sort of scripting language or simple GUI to get in the stimulus value for sine. An "automated" tool could have a full-featured GUI for building test cases, integrated code coverage analysis, an internal debugger, and a
  • 5. built-in target deployment. I wonder should you noticed that this driver includes a bug. The bug is how the sin function actually uses radians not degrees for the input angle. Key Points - Is the driver automatically generated or do I write the code? - Can I test the next without writing any code: - Testing over a range of values - Combinatorial Testing - Data Partition Testing (Equivalence Sets) - Lists of input values - Lists of expected values - Exceptions as expected values - Signal handling - Can I create a sequence of calls to several methods inside same test? Stubbing Dependent Functions Building replacements for dependent functions is essential when you want to control the values which a dependent function returns within a test. Stubbing is often a really important section of integration and unit testing, because it allows one to isolate the code under test from other elements of your application, plus much more easily stimulate the execution with the unit or sub-system of curiosity. Many tools require the manual generation of the exam code to make a stub do anything more than return a static scalar value (return 0;) Key Points - Arestubs automatically generated, or would you write code for the children? - Are complex outputs supported automatically (structures, classes)? - Can each call in the stub return a different value? - Does the stub keep an eye on how many times it had been called? - Does the stub keep track with the input parameters over multiple calls? - Can you stub calls on the standard C library functions like malloc?
  • 6. Test Data There are two basic approaches that "semi-automated" and "automated" tools use to implement test cases. One is really a "data-driven" architecture, along with the other is a "single-test" architecture. For a data-driven architecture, the exam harness is done for all with the units under test and supports all in the functions defined in those units. When the test is to be run, the tool simply supplies the stimulus data across a data stream like a file handle or possibly a physical interface being a UART. For a "single-test" architecture, every time a test operates, the tool will build the exam driver with the test, and compile and link it into an executable. A couple of points on this; first, every one of the extra code generation required through the single-test method, and compiling and linking will take more time at test execution time; second, you http://softwaretestingfundamentals.com/unit-testing/ wind up building a separate test harness for each and every test case. This means that a candidate tool might appear to dedicate yourself some nominal cases but probably won't work correctly for more advanced tests. Key Points - Is quality harness data driven? - How long will it take to perform test case (including any code generation and compiling time)? - Can the test cases be edited outside of the exam tool IDE? - If not, have I done enough free have fun with the tool with complex code examples to comprehend any limitations? Automated Generation of Test Data Some "automated" tools give you a degree of automated test case creation. Different approaches are used to do this. The following paragraphs describe some approaches: Min-Mid-Max (MMM) Test Cases tests will stress a function on the bounds in the input data types. C and C++ code often will not protect itself against out-of-bound inputs. The engineer has some functional range in their mind and they often usually do not protect themselves against beyond range inputs. Equivalence Classes (EC) tests create "partitions" per data type and select a sample of values from each partition. The assumption is values in the same partition will stimulate the application inside a similar way. Random Values (RV) tests set combinations of random values for each with the parameters of an
  • 7. function. Basic Paths (BP) tests use the basis path analysis to analyze the unique paths available through a procedure. BP tests can automatically develop a high a higher level branch coverage. The key thing to keep in mind when contemplating automatic test case construction may be the purpose that it serves. Automated tests are ideal for testing the robustness in the application code, although not the correctness. For correctness, you have to create tests which are based on the the application is supposed to do, not what it really does do. Compiler Integration The point in the compiler integration is two-fold. One this point is usually to allow the test harness components to be compiled and linked automatically, without the user having to discover the compiler options needed. The other point is to allow quality tool to honor any language extensions which can be unique for the compiler getting used. Especially with cross-compilers, it is quite common for the compiler to provide extensions which can be not the main C/C++ language standards. Some tools utilize approach of #defining these extension to null strings. This very crude approach is particularly bad given it changes the thing code that this compiler produces. For example, consider the following global extern which has a GCC attribute: extern int MyGlobal __attribute__ ((aligned (16))); If your candidate tool will not maintain the attribute when defining the global object MyGlobal, then code will behave differently during testing of computer will when deployed because the memory won't be aligned exactly the same. Key Points - Does the tool automatically compile and link the test harness? - Does the tool honor and implement compiler-specific language extension? - What form of interface is there to the compiler (IDE, CLI, etc.)? - Does the tool have an interface to import project settings from a development environment, or must they be manually imported? - If the tool does import project settings, is import feature general purpose or limited to specific compiler, or compiler families? - Is the tool integrated along with your debugger to allow you to definitely debug tests? Support for Testing on an Embedded Target In this section we'll use the term "Tool Chain" to refer on the total cross development environment such as the cross-compiler, debug interface (emulator), target board, and Real-Time Operating System (RTOS). It is imperative that you consider if your candidate tools have robust target integrations to your tool chain, and to know what inside the tool must change in the event you migrate to another tool chain.
  • 8. Additionally, it is important to understand the automation level and robustness from the target integration. As mentioned earlier: If a vendor says: "we support all compilers and many types of targets out of the box." They mean: "You do all the work to produce our tool work inside your environment." Ideally, the tool that you select will allow for "push button" test execution where all with the complexity of downloading towards the target and capturing test results back on the host is abstracted in to the "Test Execution" feature in order that no special user actions are required. An additional complication with embedded target tests are hardware availability. Often, the hardware is being developed in parallel with the application, or there is certainly limited hardware availability. A key feature may be the ability to start testing inside a native environment and later on transition towards the actual hardware. Ideally, the tool artifacts are hardware independent. Key Points - Is my tool chain supported? If not, can it be supported? What does "supported" mean? - Can I build tests on the host system and later on use them for target testing? - How does quality harness get downloaded for the target? - How are the exam results captured back towards the host? - What targets, cross compilers, and RTOS are supported off-the-shelf? - Who builds the support for the new tool chain? - Is any the main tool chain integration user configurable? Test Case Editor Obviously, the test case editor is where you will spend most of your interactive time by using a test tool. If there is certainly true automation from the previous items mentioned with this paper, then the quantity of time owing to setting up test environment, and also the target connection ought to be minimal. Remember what we said on the start, you wish to use the engineer's time for you to design better and more complete tests. The key factor to evaluate is the place where hard could it be to setup test input and expected values for non-trivial constructs. All tools within this market provide some easy way to setup scalar values. For example, does your candidate tool supply a simple and intuitive way to make a class? How about an abstract way to arrange an STL container; being a vector or even a map? These would be the things to evaluate in test case editor. As with the rest of this paper there is "support" and then there is certainly "automated support". Take this into account when looking for constructs that could possibly be of interest for you. Key Points - Are allowed ranges for scalar values shown
  • 9. - Are array sizes shown? - Is it all to easy to set Min and Max values with tags in lieu of values? This is crucial that you maintain the integrity of test if a type changes. - Are special floating point numbers supported (e.g. NaN, +/- Infinity) - Can you do combinatorial tests (vary 5 parameters on the range and enjoy the tool do all combinations of people values)? - Is the editor "base aware" so which you can easily enter values in alternate bases like hex, octal, and binary? - For expected results, are you able to easily enter absolute tolerances (e.g. +/- 0.05) and relative tolerances (e.g. +/- 1%) for floating point values? - Can test data be imported business sources like Excel? Code Coverage Most "semi-automated" tools and all sorts of "automated" tools have some code coverage facility integrated that allows that you see metrics which show the portion with the application that is certainly executed from your test cases. Some tools present these details in table form. Some show flow graphs, and a few show annotated source listings. While tables are good as a summary, if you happen to be trying to realize 100% code coverage, an annotated source listing is the best. Such a listing can have the original source code file with colorations for covered, partially covered, and uncovered constructs. This allows you to easily begin to see the additional test cases that are needed to succeed in 100% coverage. It is important http://www.brothersoft.com/windows/home_education/teaching_and_testing/ to know the impact of instrumentation the additional instrumentation on the application. There are two considerations: one is the increase in size of the object code, along with the other may be the run-time overhead. It is important to be aware of if the job is memory or real-time limited (or both). This will help you focus on which item is most important for the application. Key Points -What will be the code size increase per type of instrumentation? - What could be the run-time increase for each and every type of instrumentation? - Can instrumentation be built-into your "make" or "build" system? - How will be the coverage results presented to the user? Are there annotated listings with a graphical coverage browser, or simply tables of metrics? - How will be the coverage information retrieved through the target? Is the process flexible? Can data be buffered in RAM? - Are statement, branch (or decision) and MC/DC coverage supported?
Regression Testing

There should be two basic goals for adopting a test tool. The primary goal is to save time testing. If you've read this far, we imagine that you agree with that! The secondary goal is to allow the created tests to be leveraged over the life cycle of the application. This means that the time and money invested in building tests should result in tests that are re-usable as the application changes over time and easy to configuration manage. The major thing to evaluate in your candidate tool is what specific things need to be "saved" in order to run the same tests in the future, and how the re-running of tests is controlled.

Key Points
- What file or files need to be configuration managed to regression test?
- Does the tool have a complete and documented Command Line Interface (CLI)?
- Are these files plain text or binary? This affects your ability to use a diff utility to evaluate changes over time.
- Do the harness files generated by the tool need to be configuration managed?
- Is there integration with configuration management tools?
- Create a test for a unit, then change the name of a parameter and rebuild your test environment. How long does this take? Is it complicated?
- Does the tool support database technology and statistical graphs to allow trend analysis of test execution and code coverage over time?
- Can you test multiple baselines of code with the same set of test cases automatically?
- Is distributed testing supported, to allow portions of the tests to be run on different physical machines to speed up testing?

Reporting

Most tools will provide similar reporting. Minimally, they should create an easy to understand report showing the inputs, expected outputs, actual outputs, and a comparison of the expected and actual values.
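The heart of such a report is the expected-versus-actual comparison, and for floating point values that comparison needs the absolute and relative tolerances mentioned under the Test Case Editor. A minimal sketch of the check itself (the function name and semantics are our own, not any particular tool's):

    #include <cmath>
    #include <cstdio>

    // Pass if |actual - expected| is within the absolute tolerance,
    // or within the relative tolerance scaled by the expected value.
    bool matches(double expected, double actual, double absTol, double relTol) {
        const double diff = std::fabs(actual - expected);
        return diff <= absTol || diff <= relTol * std::fabs(expected);
    }

    int main() {
        // e.g. expected 100.0, actual 100.6: fails a +/- 0.05 absolute
        // tolerance, but passes a +/- 1% relative tolerance.
        std::printf("abs only: %s\n", matches(100.0, 100.6, 0.05, 0.0)  ? "PASS" : "FAIL");
        std::printf("abs+rel : %s\n", matches(100.0, 100.6, 0.05, 0.01) ? "PASS" : "FAIL");
        return 0;
    }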
Key Points
- What output formats are supported? HTML? Text? CSV? XML?
- Is it easy to get both a high-level (project-wide) report as well as a detailed report for a single function?
- Is the report content user configurable?
- Is the report format user configurable?

Integration with Other Tools

Regardless of the quality or usefulness of any particular tool, all tools need to operate in a multi-vendor environment. A lot of time and money has been spent by big companies buying little companies with the idea of offering "the tool" that will do everything for everybody. The interesting thing is that most often with these mega tool suites, the whole is much less than the sum of the parts. It seems that companies often take 4-5 pretty cool small tools and integrate them into one bulky and unusable tool.

Key Points
- Which tools does your candidate tool integrate with out-of-the-box, and can the end-user add integrations?

Additional Desirable Features for a Testing Tool

The previous sections all describe functionality that should be present in any tool that is considered an automated test tool. In the next few sections we will list some desirable features, along with a rationale for the importance of each feature. These features may have varying levels of applicability to your particular project.

True Integration Testing / Multiple Units Under Test

Integration testing is an extension of unit testing. It is used to test the interfaces between units and requires you to combine the units that make up some functional process. Many tools claim to support integration testing by linking the object code for real units into the test harness. This method builds multiple files into the test harness executable but provides no ability to stimulate the functions within these additional units. Ideally, you should be able to stimulate any function within any unit, in any order, within a single test case (a sketch of such a test case follows the Key Points below). Testing the interfaces between units will generally uncover a lot of hidden assumptions and bugs in the application. In fact, integration testing may be a good first step for projects that have no history of unit testing.

Key Points
- Can I include multiple units in the test environment?
- Can I create complex test scenarios where a sequence of functions across multiple units is stimulated within one test case?
- Can I capture code coverage metrics for multiple units?
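Here is a minimal sketch of what "stimulate any function within any unit, in any order, within a single test case" means in practice. The two units (a sensor buffer and a filter) and their interface are hypothetical examples of ours; the point is that one test case drives a sequence of calls across both units and checks an assumption at the interface between them.

    #include <cassert>
    #include <vector>

    // Unit 1 (hypothetical): buffers raw readings.
    class SensorBuffer {
    public:
        void push(int raw) { data_.push_back(raw); }
        const std::vector<int>& data() const { return data_; }
    private:
        std::vector<int> data_;
    };

    // Unit 2 (hypothetical): consumes Unit 1's output and rejects outliers.
    class Filter {
    public:
        int maxValid(const std::vector<int>& raw, int limit) const {
            int best = 0;
            for (int v : raw)
                if (v <= limit && v > best) best = v;
            return best;
        }
    };

    int main() {
        // One test case stimulating functions across both units in sequence.
        SensorBuffer buffer;
        Filter filter;
        buffer.push(10);
        buffer.push(999);   // outlier that crosses the unit interface
        buffer.push(25);

        // Interface assumption under test: the filter rejects the outlier.
        assert(filter.maxValid(buffer.data(), 100) == 25);
        return 0;
    }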
Dynamic Stubbing

Dynamic stubbing means that you can turn individual function stubs on and off dynamically. This allows you to create a test for a single function with all other functions stubbed (even if they exist in the same unit as the function under test). For very complicated code this is a great feature, and it makes testing much easier to implement. (A sketch of a switchable stub follows the Key Points below.)

Key Points
- Can stubs be chosen at the function level, or only at the unit level?
- Can function stubs be turned on and off per test case?
- Are the function stubs automatically generated (see the items in the previous section)?
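A minimal sketch of one way the mechanism can work, assuming a tool-maintained switch per stub (the names and the flag-based scheme are our own illustration): when the stub is enabled, calls to the dependency return a canned value; each test case flips the switch as needed.

    #include <cassert>

    // Hypothetical dependency in the same unit as the function under test.
    // A generated stub wraps it with a per-test-case switch.
    static bool stub_read_voltage_enabled = false;
    static double stub_read_voltage_return = 0.0;

    double read_voltage() {
        if (stub_read_voltage_enabled)
            return stub_read_voltage_return;   // stubbed behavior
        return 3.3;                            // real (or hardware-bound) behavior
    }

    // Function under test: depends on read_voltage().
    bool voltage_ok() { return read_voltage() >= 3.0; }

    int main() {
        // Test case 1: stub ON, force a low reading real hardware can't produce.
        stub_read_voltage_enabled = true;
        stub_read_voltage_return = 1.2;
        assert(!voltage_ok());

        // Test case 2: stub OFF, exercise the real implementation.
        stub_read_voltage_enabled = false;
        assert(voltage_ok());
        return 0;
    }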
Library and Application Level Thread Testing (System Testing)

One of the challenges of system testing is that the test stimulus provided to the fully integrated application may require a user pushing buttons, flipping switches, or typing at a console. If the application is embedded, the inputs can be even more complicated to manipulate. Suppose you could stimulate your fully integrated application at the function level, similar to how integration testing is done. This would allow you to build complex test scenarios that rely only on the API of the application. Some of the more modern tools allow you to test this way. An additional benefit of this mode of testing is that you do not need the source code to test the application; you only need the definition of the API (generally the header files). This methodology gives testers an automated and scriptable way to perform system testing.

Agile Testing and Test Driven Development (TDD)

Test Driven Development promises to bring testing into the development process earlier than ever before. Instead of writing application code first and then writing unit tests as an afterthought, you build your tests before the application code. This is a popular new approach to development that enforces a test first, test often approach. Your automated tool should support this method of testing if you plan to use an Agile Development methodology.

Bi-directional Integration with Requirements Tools

If you care about associating requirements with test cases, then it is desirable for a test tool to integrate with a requirements management tool. If you are interested in this feature, it is important that the interface be bi-directional, so that when requirements are tagged to test cases, the test case information, including test name and pass / fail status, can be pushed back to your requirements database. This will allow you to gauge the completeness of your requirements testing.

Tool Qualification

If you are operating in a regulated environment, such as commercial aviation or Class III medical devices, then you may be obligated to "qualify" the development tools used to build and test your application. The qualification involves documenting what the tool is supposed to do and tests that prove the tool operates in accordance with those requirements. Ideally a vendor will have these materials off-the-shelf and a history of customers that have used the qualification data in your industry.

Key Points
- Does the tool vendor offer qualification materials that are produced for your exact target environment and tool chain?
- What projects have successfully used these materials?
- How are the materials licensed?
- How are the materials customized and approved for a particular project?
- If this is an FAA project, have the qualification materials been successfully used to certify to DO-178B Level A?
- If it is an FDA project, have the tools been qualified for "intended use"?

Conclusion

Hopefully this paper provides useful information that helps you navigate the offerings of test tool vendors. The relative importance of each of the items raised will be different for different projects. Our final suggestions are:
- Evaluate the candidate tools on code that is representative of the complexity of the code in your application
- Evaluate the candidate tools with the same tool chain that will be used for your project
- Talk to long-term customers of the vendor and ask them some of the questions raised in this paper
- Ask about the tool technical support team. Try them out by submitting some questions directly to their support (rather than to their sales representative)

Finally, remember that almost every tool can somehow support the items mentioned in the "Key Points" sections. Your job is to evaluate how automated, easy to use, and complete that support is.

About Vector Software

Vector Software, Inc., is the leading independent provider of automated software testing tools for developers of safety critical embedded applications. Vector Software's VectorCAST line of products automates and manages the complex tasks associated with unit, integration, and system level testing. VectorCAST products support the C, C++, and Ada programming languages.