TESTING COMMON INTERVIEW QUESTIONS AND ANSWERS

1. What is contained in an SRS? Give a sample SRS. What are cohesive testing and span of
control?

2. What is the difference between a QA plan and a test plan?
QA is more of a prevention activity, which works towards the non-occurrence of errors, whereas a
test plan belongs to testing, i.e. quality control, which works towards identifying defects/errors.

3. What is a test server?

4. What are the key factors in writing a system test plan?

5. How do you perform integration testing on a web application? What are the considerations?
Please give details.

6. If you have an application but no requirements are available, how would you perform
testing?
Without a requirements document, how can you develop an application? If it is developed without
any requirements, then the application is built on assumptions, and testing is done based on the
assumptions made about the application.
7. How can you know if a test case is necessary?

8. What is peer review in practical terms?
Test cases written by one QA engineer are reviewed (for correctness) by a fellow QA engineer.
9. How do you know when you have enough test cases to adequately test a software
system or module?
10. Who approved your test cases?
It depends on the organization. The QA Lead, if present, will approve the test cases. Otherwise,
peer reviews are a good way of evaluating them.
11. What will you do when you find a bug?
1) Execute some more tests to establish what the bug EXACTLY is. Suppose the test case
failed when State=NY and Class=Business. The tester has to execute more tests to find out
whether the problem occurs with just the 'NY' state, with just the 'Business' class, or only with
both of them together. 2) Report the bug.
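Isolating such a bug can itself be scripted. A minimal sketch using pytest, where book_policy and
its parameters are hypothetical stand-ins for the failing feature:

    import pytest

    from myapp import book_policy  # hypothetical function under test

    # Vary State and Class independently to see which input triggers the failure.
    @pytest.mark.parametrize("state,customer_class", [
        ("NY", "Economy"),   # 'NY' alone
        ("CA", "Business"),  # 'Business' alone
        ("NY", "Business"),  # the originally failing combination
    ])
    def test_isolate_state_class_bug(state, customer_class):
        assert book_policy(state=state, customer_class=customer_class)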
12. What test plans have you written?
The Master Test Plan is usually prepared by the QA Lead. Testers write test cases, which in some
organizations are called test plans.
13. What is QA? What is Testing? Are they both same or different?
Testing is a subset of QA. Testing is just a phase that comes after coding, but QA should be
incorporated into the entire Software Development Life Cycle.
14. How do you write a negative test case? Give an example.
Negative test cases are written by thinking about the AUT (application under test) in a destructive
manner, in the sense: what happens if I test the application with irrelevant or invalid inputs?
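A minimal sketch of a negative test in pytest, assuming a hypothetical parse_age function that is
specified to reject non-numeric and out-of-range input:

    import pytest

    def parse_age(value: str) -> int:
        """Hypothetical function under test: converts input to a valid age."""
        age = int(value)  # raises ValueError for non-numeric input
        if not 0 <= age <= 150:
            raise ValueError("age out of range")
        return age

    def test_rejects_non_numeric_input():
        # Negative test: irrelevant input must raise, not be silently accepted.
        with pytest.raises(ValueError):
            parse_age("abc")

    def test_rejects_out_of_range_age():
        with pytest.raises(ValueError):
            parse_age("200")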
15. In an application currently in production, one module of code is being modified. Is it
necessary to re-test the application?
1) Test the modified module. 2) Test all the other modules/areas of the application which have
direct or indirect interaction with the modified module.
16. What is included in a test strategy? What is the overall step-by-step process of
testing?
A test strategy sets out a procedure for how to test the software: what is to be tested (screens,
processes, modules, ...), whether the testing process is automated or manual, and the time limits
for testing. Everything has to be planned and implemented accordingly.
17. What is the most challenging situation you had during testing?
18. What are you going to do if there is no functional spec or any documents related to
the system, and the developer is not available?
First of all, when a developer leaves, someone else is brought in or assigned to take over the
responsibilities. Most functional testing needs more knowledge about the product than about the
code. Familiarize yourself with the code, research similar products in the market, and increase
communication with related teams.
19. What is the major problem you resolved during the testing process?
20. What are the types of functional testing?
There are the following types of functional testing: 1. Functionality testing. 2. Input domain
testing. 3. Error handling testing. (About 90% of functional testing is covered by the completion
of the above three.) 4. Recovery testing. 5. Compatibility testing. 6. Configuration testing.
7. Intersystem testing. 8. Installation testing.
21. 1. How will you write integration test cases? 2. How will you track bugs from
WinRunner? 3. How do you customize the ...
A use case is a description of how end-users will use a software product. It describes a task or a
series of tasks that users will accomplish using the software, and includes the responses of the
software to user actions. Use cases may be included in the Software Requirements Document
(SRD) as a way of specifying end-user requirements.
22. What is the difference between smoke testing and sanity testing?
Smoke testing is conducted by the development people according to the client's requirements. The
first test conducted by the testing people when a build is received is called sanity testing: in
sanity testing, testers check the basic functionality, i.e. whether all buttons are working or not, etc.
23. What is Random Testing?
Random data tests confront the application under test with input data generated at random.
Typically, testers pay no attention to expected data types; they feed a random sequence of
numbers, letters, and characters into a numeric data field.
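A minimal random-testing sketch in Python, assuming a hypothetical parse_amount function behind
a numeric field; the point is that random garbage should be rejected gracefully rather than crash
the application:

    import random
    import string

    def parse_amount(text: str) -> float:
        """Hypothetical numeric-field parser under test."""
        return float(text)

    def random_input(length: int = 12) -> str:
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(random.choice(alphabet) for _ in range(length))

    for _ in range(1000):
        data = random_input()
        try:
            parse_amount(data)
        except ValueError:
            pass  # expected: invalid input rejected gracefully
        # Any other exception here would indicate a defect worth reporting.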
24. What is smoke testing?
During this test, the test engineer rejects a build, giving the reason, when that build is not working
well enough for the testing process to begin.
25. What is stage containment in testing?
26. Security testing and performance testing on a communication interface
27. What are the steps involved in sanity testing?
Sanity testing is the same as smoke testing. It involves initial testing of the application or module
just to make sure it is stable enough to start testing. It is mostly used as a benchmark to gauge
the readiness of the application for automated testing.
28. How do we do calculation testing in a banking firm?
29. What is the difference between Rational Robot & WinRunner?
-> WinRunner is just a functional tool, whereas Robot can be used for both functional (GUI) and
performance (VU) testing. -> WR has 4 checkpoints, whereas Robot has 13 verification points.
30. What is the testing process?
Verifying that given input data produces the expected output.
31. What is the difference between testing and quality assurance (QA)?
This question is surprisingly popular. However, the answer is quite simple: the goals of the two
are different. The goal of testing is to find errors; the goal of QA is to prevent errors in the
program.
32. What is the difference between QA and QC?
Simple definitions are: QA is assurance for process control. Here we follow certain quality
standards and strive for process improvement; we do not deal with the product directly. The
intention is that if we follow good quality standards, we automatically produce a better
product.
33. What is the difference between retesting and regression testing?
This is a very important interview question which is asked of almost everyone. Retesting: if any
modifications are done in the application, then testing that particular unit again is retesting.
Regression testing: checking that those modifications have not adversely affected the rest of the
application.
34. What is the difference between bug priority & bug severity?
Priority: the urgency of the bug. Severity: the impact of the bug.
35. What kinds of testing do you know? What is system testing? What is integration
testing?
Your theoretical background and homework may shine in this question. System testing is testing
of the entire system as a whole; this is what the user sees and feels about the product you
provide. Integration testing is the testing of the integration of different modules of the system;
usually, integration testing follows unit testing.
36. What is a bug? What types of bugs do you know?
A bug is an error that shows up during execution of the program. There are two types of bugs:
syntax and logical.
37. What is the difference between structural and functional testing?
Structural testing is "white box" testing, based on the algorithm or code. Functional testing is
"black box" (behavioral) testing, where the tester verifies the functional specification.
38. What is defect density?
Defect density = total number of defects / LOC (lines of code). Here the total number of defects
includes defects found in reviews as well as those reported by the customer.
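A worked example with assumed numbers: if reviews found 12 defects, testing found 30, and
customers reported 3, in a module of 9,000 lines of code (density is commonly normalized per
thousand lines, i.e. per KLOC):

    review_defects = 12
    test_defects = 30
    customer_defects = 3
    loc = 9_000

    # 45 defects over 9 KLOC = 5.0 defects per KLOC.
    density_per_kloc = (review_defects + test_defects + customer_defects) / (loc / 1000)
    print(density_per_kloc)  # 5.0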
39. How would you test a mug (chair/table/gas station etc.)?
First of all, you would ask for the requirements, the functional specification, and the design
document of the mug. There you will find requirements like the ability to hold hot water, being
waterproof, stability, breakability, and so on. Then you test the mug against all those documents.
40. What is considered a successful test?
A test that discovers more errors. The whole purpose of the testing process is to discover as many
bugs and errors as possible. A test that covers more functionality and discovers more errors in
your software product is therefore considered more successful.
41. What bug tracking system did you use?
Again and again: it does not matter which bug tracking system you used, as long as you have done
your homework and can name one, or mention a standard one. You may say you have used a
proprietary bug tracking system (this works especially well if your previous company was in one
way or another dealing with databases).
42. When does testing begin - requirements, plan, design, code / testing phase?
Obviously, testing begins in the requirements phase.
43. Could you test a program 100%? 90%? Why?
Definitely not! The major problem with testing is that you cannot calculate how many errors are in
the code, its functioning, etc. There are many factors involved, such as the experience of the
programmers, the complexity of the system, etc.
44. What is the difference between testing and debugging?
The big difference is that debugging is conducted by a programmer, who fixes the errors during the
debugging phase. A tester never fixes errors, but rather finds them and returns them to the
programmer.
45. How would you conduct your test?
Each test is based on the technical requirements of the software product.
46. Have you used automated testing tools? Which ones?
If you have never seen automation tools before, do not try to fool the interviewer; you produce a
bad impression when "caught" lying. However, if you have ever used automation tools, it is a
huge advantage to mention them, even if those tools were proprietary.
47. How would you build a test with WinRunner? Rational Visual Test?
First of all, see the comments on the previous question. All automation testing tools I have ever
heard of have a GUI recorder which allows you to record the basic user interactions with the
software underneath. Then you manually update your initial script to suit your needs; you must
know scripting.
48. What is considered a good test?
A good test is a test covering most of the object's functionality.
49. How would you conduct a test: top-down or bottom-up? What are they? Which one is
better?
Bottom-up: unit -> interface -> system. Top-down is the reverse. You may use both, but bottom-up
allows you to discover malfunctions at earlier phases of development, where they are cheaper to
fix than in the case of top-down.
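In practice the two orders differ in the scaffolding you write: top-down integration replaces
not-yet-integrated lower modules with stubs, while bottom-up uses driver tests to exercise the
lower modules first. A minimal sketch with hypothetical modules:

    # Hypothetical two-level system: report() (top module) calls fetch_totals()
    # (bottom module). Which piece exists first dictates the scaffolding needed.

    def fetch_totals_stub():
        # Top-down: the real bottom module isn't integrated yet, so the top
        # module is exercised against canned data from a stub.
        return [10, 20, 30]

    def report(fetch=fetch_totals_stub):
        return f"total={sum(fetch())}"

    def test_top_down_with_stub():
        assert report() == "total=60"

    def fetch_totals():
        # The real bottom module, integrated later (or first, in bottom-up order).
        return [10, 20, 30]

    def test_bottom_up_with_driver():
        # Bottom-up: this test acts as a driver, exercising the bottom module
        # directly before any caller exists.
        assert sum(fetch_totals()) == 60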
50. How to develop a test plan ? How to develop a test case?
A test plan consists of test cases. You develop test cases according to the requirement and design
documents of the unit, system, etc. You may be asked what you would do if you were not provided
with requirements documents; then you start creating your test cases based on the functionality of
the system.
51. How do you see a QA role in the product development life cycle?
QA should be involved in the early stages of the development process in order to create adequate
test cases and build a better general understanding of the system. QA, however, must be separated
from the development team to ensure that developers have no influence over QA engineers. QA
also acts as the last resort before the product reaches the customer.
52. What is the size of your executable?
10MB. Who cares? You should demonstrate that you cannot be caught out by unexpected questions.
This question is one of the dumbest, but you must react accordingly. Give any reasonable number
you want, but be careful not to exaggerate!
53. What version of Oracle database did you use?
Homework. Give any version number you want - not many interviewers know the differences at the
version level. However, do not give a number if you have never worked with Oracle!
54. How would you execute a SQL query in Oracle 8?
Again, if you have ever worked with Oracle, this question should be trivial for you to answer (from
the command prompt, of course). If you have never worked with Oracle, note politely that you have
not touched an Oracle database in your career.
55. What version of OS were you using?
Tell whatever you want - you cannot be caught here. Popular answers are Windows 95/98,
Windows 2000 (make sure you know the various flavors) and various Unix flavors (AIX, Solaris,
SunOS, HP-UX, etc.).
56. Have you tested the front end or the back end?
In other words, you are being asked whether you tested the GUI part of the application or the
server part.
57. What was the most difficult problem you ever found while testing?
This is homework. Think about one and give it as an example.
58. What were you responsible for testing in your previous company?
This is homework for you. Actually, this question is a test of the knowledge of your own resume.
You must know your real or fake resume like the Bible. Practice in front of a mirror or ask a
friend to run a mock interview with you.
59. Why do you like to test?
You enjoy the bug hunting process, feel great being between developers and customers, your
background and experience are aimed at enhancing testing techniques, and you feel proud of your
contribution to the whole development process.
60. What role do you see yourself in 2-3 years from now? Would you want to become a
developer?
You should not focus the interviewer's attention on your wish to become a developer. You are
being hired for a testing role and you should demonstrate reliability. Team lead of the QA team
is OK, but do not answer yes when asked if you are willing to become a developer.
1. What is testing?
Software testing can be defined as: testing is an activity that helps in finding
bugs/defects/errors in a software system under development, in order to provide a bug-free and
reliable system/solution to the customer.
In other words, consider this example: suppose you are a good cook and are expecting some
guests at dinner. You start making dinner; you prepare a few very delicious dishes (of course,
ones you already know how to make). And finally, when you are about to finish, you ask someone
(or taste it yourself) to check that everything is fine and there is no extra salt/chili/anything
which, if not in balance, could ruin your evening (this is what is called 'TESTING').
You follow this procedure to make sure that you do not serve your guests something that is not
tasty! Otherwise your reputation as a cook suffers and you will regret your failure!
2. Why do we go for testing?
Well, while making food, it is OK to have something extra; people might understand, eat the
things you made, and may well appreciate your work. But this is not the case with software
project development. If you fail to deliver a reliable, good, and problem-free software solution,
you fail in your project and you will probably lose your client. This can get even worse!
So in order to make sure that you provide your client a proper software solution, you go for
TESTING. You check whether there is any problem, any error in the system, which could make the
software unusable by the client. You have software testers test the system and help find the
bugs in the system so they can be fixed on time. You find the problems and fix them, and then
again try to find any remaining potential problems.
3. Why is there a need for testing?
OR
Why is there a need for 'independent/separate testing'?
This is a fair question because, prior to the concept of testing software as a 'testing project',
the testing process still existed, but the developer(s) did it at the time of development.
But you must know that when you make something yourself, you hardly ever feel that there can be
something wrong with it. It is a common trait of human nature: we feel that there is no problem in
a system we designed ourselves, that it is perfectly functional and fully working. So the hidden
bugs, errors, or problems of the system remain hidden, and they raise their heads when the system
goes into production.
On the other hand, it is a fact that when one person starts checking something made by some
other person, there is a 99% chance that the checker/observer will find some problem with the
system (even if the problem is only a spelling mistake). Really weird, isn't it? But that is the
truth!
Even though this seems a quirk of human behavior, it has been put to use for the benefit of
software projects (or, you may say, any type of project). When you develop something, you hand it
over to be checked (TESTED) so as to find any problem that never surfaced during development of
the system. After all, if you can minimize the problems with the system you developed, it is
beneficial for you: your client will be happy if your system works without any problem, and it
will generate more revenue for you.
4. What is the role of "a tester"?
A tester is a person who tries to find out all possible errors/bugs in the system with the help of
various inputs to it. A tester plays an important part in finding the problems with the system and
helps in improving its quality.
If you could find all the bugs and fix them all, your system becomes more and more reliable.
A tester has to understand the limits which can make the system break and behave abruptly. The
more VALID BUGS a tester finds, the better a tester he/she is!
As a tester tests an application, if he/she finds any defect, the life cycle of the defect starts,
and it becomes very important to communicate the defect to the developers in order to get it
fixed, to keep track of the current status of the defect, to find out whether any similar defect
was ever found in earlier rounds of testing, etc. For this purpose, manually created documents
were previously used and circulated to everyone associated with the software project (developers
and testers); nowadays many bug-reporting tools are available, which help in tracking and
managing bugs in an effective way.
How to report a bug?
It is good practice to take screenshots of the execution of every step during software testing. If
any test case fails during execution, it needs to be marked as failed in the bug-reporting tool
and a bug has to be reported/logged for it. The tester can choose either to first report a bug and
then fail the test case in the bug-reporting tool, or to fail the test case first and then report
a bug. In either case, the Bug ID generated for the reported bug should be attached to the failed
test case.
At the time of reporting a bug, all the mandatory fields of the bug (such as Project, Summary,
Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in Version,
Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity, Priority and Bug
ID) are filled in, and a detailed description of the bug is given along with the expected and
actual results. The screenshots taken at the time of execution of the test case are attached to
the bug for the developer's reference.
After reporting a bug, the bug-reporting tool generates a unique Bug ID, which is then attached to
the failed test case and associates the bug with it.
After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug
fixing process progresses.
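A sketch of such a status progression (the exact status names vary by tool; these are assumed for
illustration):

    # Allowed transitions in a simple, assumed bug life cycle.
    TRANSITIONS = {
        "New": ["Assigned"],
        "Assigned": ["Fixed"],
        "Fixed": ["Retest"],
        "Retest": ["Closed", "Reopened"],  # retest passes -> Closed, fails -> Reopened
        "Reopened": ["Assigned"],
    }

    def advance(current: str, new: str) -> str:
        if new not in TRANSITIONS.get(current, []):
            raise ValueError(f"illegal transition: {current} -> {new}")
        return new

    status = "New"
    for step in ["Assigned", "Fixed", "Retest", "Closed"]:
        status = advance(status, step)
    print(status)  # Closed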
If more than one tester is testing the software application, it is possible that some other tester
has already reported a bug for the same defect. In such a situation, it becomes very important for
the tester to find out whether any bug has been reported for a similar type of defect. If yes,
then the test case has to be blocked against the previously raised bug (in this case, the test
case has to be executed again once the bug is fixed). If no such bug was reported previously, the
tester can report a new bug and fail the test case against the newly raised bug.
If no bug-reporting tool is used, then the test case is written in tabular form in a file with four
columns: Test Step No, Test Step Description, Expected Result, and Actual Result. The expected
and actual results are written for each step, and the test case is marked as failed at the step
where it fails.
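For illustration, a failed test case in that format might look like this (the application and
values are invented):

    Test Step No | Test Step Description     | Expected Result         | Actual Result
    1            | Open the login page       | Login page is displayed | As expected
    2            | Log in with a valid user  | Home page is displayed  | As expected
    3            | Open the 'Reports' screen | Reports are displayed   | Error page - FAILED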
This file containing the test case, together with the screenshots taken, is sent to the developers
for reference. As the tracking process is not automated, it becomes important to keep the
information about the bug updated from the time it is raised until the time it is closed.
(Please note: the above procedure for reporting a bug is general and not based on any particular
project. Most of the time, the bug-reporting procedure, the values used for the various fields at
the time of reporting a bug, the bug tracking system, etc. may change as per the software testing
project and company requirements.)
What makes a good Software Test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers, and an ability to communicate with both
technical (developers) and non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides a deeper understanding of the
software development process, gives the tester an appreciation for the developers' point of view,
and reduces the learning curve in automated test tool programming. Judgement skills are needed
to assess high-risk areas of an application on which to focus testing efforts when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be
able to understand the entire software development process and how it can fit into the business
approach and goals of the organization. Communication skills and the ability to understand
various sides of issues are important. In organizations in the early stages of implementing QA
processes, patience and diplomacy are especially needed. An ability to find problems as well as
to see 'what's missing' is important for inspections and reviews.
What makes a good QA or Test manager?
A good QA, test, or QA/Test(combined) manager should:
• be familiar with the software development process
• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite
what is a somewhat 'negative' process (e.g., looking for or preventing problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when quality is
insufficient or QA processes are not being adhered to
• have people judgement skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers, managers,
and customers.
• be able to run meetings and keep them focused
What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper, may be embedded in
code comments, etc.) QA practices should be documented such that they are repeatable.
Specifications, designs, business rules, inspection reports, configurations, code changes, test
plans, test cases, bug reports, user manuals, etc. should all be documented in some form. There
should ideally be a system for easily finding and obtaining information and determining what
documentation will have a particular piece of information. Change management for
documentation should be used if possible.
What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a large, complex software
project is to have poorly documented requirements specifications. Requirements are the details
describing an application's externally-perceived functionality and properties. Requirements should
be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable
requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would
be something like 'the user must enter their previously-assigned password to access the
application'. Determining and organizing requirements details in a useful and efficient way can be
a difficult effort; different methods are available depending on the particular project. Many books
are available that describe various approaches to this task.
Care should be taken to involve ALL of a project's significant 'customers' in the requirements
process. 'Customers' could be in-house personnel or out, and could include end-users, customer
acceptance testers, customer contract officers, customer management, future software
maintenance engineers, salespeople, etc. Anyone who could later derail the project if their
expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the
requirements are spelled out in a document with statements such as 'The product shall.....'.
'Design' specifications should not be confused with 'requirements'; design specifications should
be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional
specification documents, in design documents, or in other documents at various levels of detail.
No matter what they are called, some type of documentation with detailed requirements will be
needed by testers in order to properly plan and execute tests. Without such documentation, there
will be no clear-cut way to determine if a software application is performing correctly.
'Agile' methods such as XP use methods requiring close interaction and cooperation between
programmers and customers/end-users to iteratively develop requirements. In the XP 'test first'
approach developers create automated unit testing code before the application code, and these
automated unit tests essentially embody the requirements.
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
• Obtain requirements, functional design, and internal design specifications and other
necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting requirements,
required standards and processes (such as release processes, change processes, etc.)
• Determine project context, relative to the existing quality culture of the organization and
business, and how it might impact testing scope, approaches, and methods.
• Identify application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
• Determine test approaches and methods - unit, integration, functional, system, load,
usability tests, etc.
• Determine test environment requirements (hardware, software, communications, etc.)
• Determine testware requirements (record/playback tools, coverage analyzers, test
tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, error classes (see the
sketch after this list)
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up
logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through life
cycle
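As an illustration of the equivalence class / boundary value step above, assume a hypothetical
field that accepts ages 18-65; the classes and boundaries might be captured like this (a sketch,
not tied to any particular project):

    # Hypothetical rule: valid age is 18..65 inclusive.
    VALID_MIN, VALID_MAX = 18, 65

    def is_valid_age(age: int) -> bool:
        return VALID_MIN <= age <= VALID_MAX

    # Equivalence classes: one representative value per class is enough.
    equivalence_cases = {
        "below range (invalid)": 10,
        "in range (valid)": 40,
        "above range (invalid)": 70,
    }

    # Boundary values: test exactly at and adjacent to each boundary.
    boundary_cases = [17, 18, 19, 64, 65, 66]

    for label, age in equivalence_cases.items():
        print(label, age, is_valid_age(age))
    for age in boundary_cases:
        print("boundary", age, is_valid_age(age))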
What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the 'why' and 'how' of product
validation. It should be thorough enough to be useful but not so thorough that no one outside the
test group will read it. The following are some of the items that might be included in a test plan,
depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test
plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality,
process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production systems
and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to
help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables,
contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
What's a 'test case'?
• A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking through the
operation of the application. For this reason, it's useful to prepare test cases early in the
development cycle if possible.
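A minimal sketch of these particulars captured as a structure (the field names follow the list
above; the values are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        identifier: str
        name: str
        objective: str
        setup: str
        input_data: dict
        steps: list
        expected_result: str

    tc = TestCase(
        identifier="TC-LOGIN-001",
        name="Login with valid credentials",
        objective="Verify that a registered user can log in",
        setup="User 'demo' exists with password 'pw123'",
        input_data={"user": "demo", "password": "pw123"},
        steps=["Open login page", "Enter credentials", "Click 'Log in'"],
        expected_result="Home page is displayed",
    )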
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it. After the problem
is resolved, fixes should be re-tested, and determinations made regarding requirements for
regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking
system is in place, it should encapsulate these processes. A variety of commercial
problem-tracking/management software tools are available. The following are items to consider
including in the tracking process:
• Complete information such that developers can understand the bug, get an idea of its
severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the
developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be
helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various
stages. For instance, testers need to know when retesting is needed, developers need to know
when bugs are found and how to get the needed information, and reporting/summary capabilities
are needed for managers.
What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified, and provided
with some documentation as evidence of the problem.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors
in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. This requires judgement skills, common
sense, and experience. (If warranted, formal methods are also available.) Considerations can
include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described
previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then
do ad hoc testing, or write up a limited test plan based on the risk analysis.
How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers, especially in multi-tier systems. Thus testing
requirements can be extensive. When time is limited (as it usually is) the focus should be on
integration and system testing. Additionally, load/stress/performance testing may be useful in
determining client/server application limitations and capabilities. There are commercial tools to
assist with such testing.
How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, javascript,
plug-in applications), and applications that run on the server side (such as cgi scripts, database
interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a
wide variety of servers and browsers, various versions of each, small but sometimes significant
differences between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can become a major
ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time?), and what
kind of performance is required under such loads (such as web server response time,
database query response times). What kinds of tools will be needed for performance
testing (such as web load testing tools, other tools already in house that can be adapted,
web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what
is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what
are the requirements for maintaining, tracking, and controlling page content, graphics,
links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be
allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often? (A minimal
link-checking sketch follows this list.)
• Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If
larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so that it's
clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided or
generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end pages.
• The page owner, revision date, and a link to a contact person or organization should be
included on each page.
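Touching on the link-validation item above: a minimal link-checking sketch in Python (it assumes
the third-party requests library is installed and uses a hand-made list of links; a real checker
would extract links from the HTML and crawl the site politely):

    import requests

    # Hypothetical list of links harvested from a page; in practice these
    # would be extracted from the HTML (e.g., with an HTML parser).
    links = [
        "https://example.com/",
        "https://example.com/contact",
    ]

    for url in links:
        try:
            # HEAD keeps the check lightweight; some servers require GET instead.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            status = resp.status_code
        except requests.RequestException as exc:
            status = f"error: {exc}"
        print(url, status)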
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to
functional design to requirements. While there will be little effect on black box testing (where an
understanding of the internal design of the application is unnecessary), white-box testing can be
oriented to the application's objects. If the application was well-designed this can simplify test
design.
What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the approach in
his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of
Extreme Programming. Programmers are expected to write unit and functional test code first -
before writing the application code. Test code is under source control along with the rest of the
code. Customers are expected to be an integral part of the project team and to help develop
scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are
modified and rerun for each of the frequent development iterations. QA and test personnel are
also required to be an integral part of the project team. Detailed requirements documentation is
not used, and frequent re-scheduling, re-estimating, and re-prioritizing are expected. For more
information, look into XP and other 'agile' software development approaches (Scrum, Crystal, etc.).
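A minimal 'test first' illustration (the names are invented): the test below is written before the
function it exercises, fails until the function is implemented, and then passes, so the test
itself records the requirement.

    # Step 1: write the test first; running it fails until the code exists.
    def test_add_points_caps_at_maximum():
        assert add_points(balance=95, earned=10) == 100  # cap assumed at 100

    # Step 2: write just enough application code to make the test pass.
    def add_points(balance: int, earned: int, cap: int = 100) -> int:
        return min(balance + earned, cap)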