Preparation Courses for International Certifications: ASQ CQE, CSTE, CSQA, ISEB & ISTQB

Prep courses for international certifications (ASQ CQE, CSTE, CSQA, ISEB, ISTQB and
Business Analyst) in HYDERABAD. After receiving an overwhelming response to our
last 55+ batches, SPECTRAMIND SOLUTIONS now announces a new batch of prep
courses for ASQ CQE, CSQA, CSTE, ISEB, ISTQB and Business Analyst, to prepare
you thoroughly for the most prestigious certification exams conducted by international
organizations. We have a consistent record of almost 100% passing results for the last
6+ years.
1. Certified Quality Engineer (CQE)
What is CQE?
CQE certification by ASQ is a formal recognition of a level of proficiency in the quality
engineering profession. Acquiring the CQE indicates a professional level of competence
in the principles of quality assurance. Individuals clearing the CQE exam become
members of a recognized professional group, leading to rapid career advancement and
greater acceptance in their role as advisors to management.
Who should attend?
Project Managers, Project Leaders
Software Professionals & Process Engineers
Directors, Senior Management professionals
Quality & Testing Professionals
ASQ CQE course contents:
1. The CQE Exam
2. Statistical Quality Control and Quality Systems Engineering
3. Probabilities
4. Basic Statistics
5. Hypothesis Testing
6. Acceptance Sampling
7. Control Charts
8. Reliability
9. Regressions and Correlation
10. Cost of Quality
11. Designs of Experiments
12. Metrology and Calibration
Start Date: 24th Mar 07
Duration: 2.5 months (Every Sunday 9.30 am to 11.30 am & 12.00 to 2.00 pm)
Fees: Rs. 10,000 (inclusive of study material, question bank, case studies, mock tests,
etc.)
About the CQE Exam: (Visit http://cqeweb.com/)
Last Date to apply: 2 months before exam date
Forthcoming Exam dates: 2nd June 07, 1st Dec, 2007, 7th June 2008
Exam fees: $ 360 to be paid to ASQ, USA
Exam Center: Hyderabad, Bangalore, Pune, Mumbai and other metros
2. Certified Software Quality Analyst (CSQA)
What is CSQA?
CSQA certification is a formal recognition of a level of proficiency in the information
technology (IT) quality assurance industry. Acquiring the CSQA indicates a
professional level of competence in the principles of quality assurance. Individuals
clearing the CSQA exam become members of a recognized professional group, leading
to rapid career advancement and greater acceptance in their role as advisors to
management.
Who should attend?
Project Managers, Project Leaders
Software Professionals & Process Engineers
Directors, Senior Management professionals
Quality & Testing Professionals
CSQA course contents (revised 2006 syllabus):
The course covers all 10 domains of the CSQA CBOK, with case-study discussions led
by experts.
1. Quality Principles and Concepts
2. Quality Leadership
3. Quality Baselines (Assessments and Audits)
4. Quality Assurance
5. Quality Planning
6. Define, Build, Implement and Improve Work Processes
7. Quality Control Practices
8. Metrics and Measurement
9. Internal Control and Security
10. Outsourcing, COTS and Contracting Quality
Start Date: 24th March 07
Duration: 2.5 months (Every Sunday 9.30 am to 11.30 am & 12.00 to 2.00 pm)
Fees: Rs. 8,000 (inclusive of study material, question bank, case studies, mock tests,
etc.)
About the QAI Exam: (Visit www.softwarecertifications.org)
Forthcoming Exam dates: 24th Mar 07,17th June 07
Last Date to apply: 2 months before exam date
Exam fees: $ 350 to be paid to QAI, USA
Exam Center: Hyderabad, Pune , Mumbai and other metros
3. Certified Software Test Engineer (CSTE)
What is CSTE?
The CSTE certification is intended to establish standards for initial qualification and
provide direction for the testing function through an aggressive educational program.
Acquiring the designation of CSTE indicates a professional level of competence in the
principles and practices of quality control in the IT profession. This program helps
participants acquire a higher level of technical expertise in the Testing function.
Who should attend?
Software Professionals
Project Managers,
Project Leaders,
Test Leads,
Test Managers
Quality & Testing Professionals
CSTE course contents (revised 2006 syllabus):
The course covers all 10 domains of the CSTE CBOK, with case-study discussions led
by experts.
1. Software Testing Principles and Concepts
2. Building the Test Environment
3. Managing the Test Project
4. Test Planning
5. Executing the Test Plan
6. Test Status, Analysis and Reporting
7. User Acceptance Testing
8. Testing Software Developed by Outside Organizations
9. Testing Software Controls and the Adequacy of Security Procedures
10. Testing New Technologies
Start Date: 24th March 07
Duration: 2.5 months (Every Sunday 9.30 am to 11.30 am & 12.00 to 2.00 pm)
Fees: Rs. 8,000 (inclusive of study material, question bank, case studies, mock tests, etc.)
About the QAI Exam: (Visit www.softwarecertifications.org)
Forthcoming Exam dates: 24th Mar 07, 17th June 07
Last Date to apply: 2 months before exam date
Exam fees: $ 350 to be paid to QAI, USA
Exam Center: Hyderabad, Pune, Mumbai and other metros
4. ISTQB / ITB Certification:
· Foundation Level
· Advanced Level
· Expert Level
· Technical Tester
· Functional Tester
· Test Manager
Start Date: 24th Mar 07
Duration: 1.5 months (Every Sun and Sat 9.30 am to 11.30 am & 12.00 to 2.00 pm)
Fees: Rs. 3,000/- (inclusive of study material, question bank, case studies, mock tests,
etc.)
About the ISTQB / ITB Certification Exam: (Visit http://india.istqb.org/exam.htm)
Forthcoming Exam dates: Online exam
Exam fees: Rs 4000+ to be paid to ISTQB
Exam Center: Hyderabad, Mumbai and other metros
5. ISEB Certification:
· Foundation Level
· Advanced Level
· Expert Level
Start Date: 24th Mar 07
Duration: 1.5 months (Every Sun and Sat 9.30 am to 11.30 am & 12.00 to 2.00 pm)
Fees: Rs. 6,000/- (inclusive of study material, question bank, case studies, mock tests,
etc.)
About the ISEB Exam: (Visit:
http://www.bcs.org/BCS/Products/Qualifications/ISEB/About/default.htm)
Forthcoming Exam dates: Online exam
Exam fees: Rs 10,000+ to be paid to BCS, UK
Exam Center: Hyderabad, Mumbai and other metros
6. Business Analyst certification
Who should attend?
1. Entry-level IT Business Analysts
2. Self-taught IT Business Analysts wanting to fill in the gaps and put all the pieces
together
3. Systems Analysts and programmers interested in expanding their role into the
business area
4. IT Project Managers with responsibility for business analysis
Topics Covered
• Introduction to the Business Analyst's Role and Interview Techniques
• Introduction to Use Cases
• The Kick-off Meeting
• Analyzing Business Use Cases
• Structuring System Use Cases - employing System Use Case Diagrams and advanced
features to organize requirements for maximum reuse
• Documenting System Use Cases/ Context and Basic Flow
• Documenting Alternate and Exception Flows
• Documenting inclusion, extension and generalized use cases
• Documenting Requirements for Legacy Systems using Structured Analysis
• Gathering Business Data Requirements
• Static Analysis and the BA
• Analyzing Business Classes
• Analyzing Sub-types (Generalizations and transient roles)
• Analyzing Aggregations, Associations and Multiplicities
• Discovering attributes and operations
• Static Modeling during Initiation
• Static Modeling during Analysis
• Data Modeling and Data Warehouses
• The BA Role in Testing
• Workflow Modeling and the Business Analyst
• Essential Concepts of Workflow Modeling
• Activity Diagrams
• State Machine Diagrams
• Text Documentation
• Other Workflow Modeling Notations
• Introduction to IT Project Management
• Guided Tour of an Iterative Project
• Initiation Phase - Initial Activities
• Initiation Phase - Analysis
• Initiation Phase Risk Management
• The Analysis Phases
• Execution, Test and Close-out Phases
• Overview of Software Development Lifecycle Methodologies
Start Date: 24th Mar 07
Duration: 2.5 months (Every Sun and Sat 9.30 am to 11.30 am & 12.00 to 2.00 pm)
Fees: Rs. 15,000/- (inclusive of study material, question bank, case studies, mock tests,
etc.)
Exam fees: $ 575 to be paid to IIBA, USA
Exam Center: Hyderabad, Bangalore, Mumbai and other metros
Course Highlights:
1. Faculty Members: Our faculty consists of quality professionals working at leading
software MNCs in Hyderabad, Bangalore, Pune & Mumbai. They are certified as CQE,
CSQA, CSTE, PMP and IIBA, and have rich IT-industry experience in a number of
technical and managerial roles at senior positions.
2. There will be Classroom Sessions (covering all the domains of BOK) & Extensive
Case Study Discussion and experience sharing by Practicing Certified Professionals.
3. Two Mock Tests based on actual Exam pattern.
4. We have trained more than 5,000 professionals in software quality and testing,
including CSQA/CSTE/IIBA/PMP/CQE, over the last 6 years.
5. Our last batch's results were 100%.
6. Certificates will be awarded to all successful participants.
PLEASE NOTE: We also provide corporate trainings on ASQ CQE, CSPM, PMP,
Prince 2, IIBA, CAPM, CSTE, CSQA & automated testing tools (WinRunner,
LoadRunner, QTP, SILK, Rational tools, etc.).
RUSH FOR REGISTRATION: LIMITED TIME OFFER!
For More Details Contact:
VIJAY
SPECTRAMIND SOLUTIONS
FLAT NO 404, EVEREST BLOCK, ADITYA ENCLAVE, AMEERPET, HYDERABAD-38.
PH: 91-040-40035734
MOBILE: 91-9440089341
Email: VIJAY@SPECTRAMINDSOLUTIONS.COM

Collection of questions


1. What automated testing tools are you familiar with?
2. How did you use automated testing tools in your job?
3. Describe a problem you had with an automated testing tool.
4. How do you plan test automation?
5. Can test automation improve test effectiveness?
6. What is data-driven automation?
7. What are the main attributes of test automation?
8. Does automation replace manual testing?
9. How will you choose a tool for test automation?
10. How will you evaluate a tool for test automation?
11. What are main benefits of test automation?
12. What could go wrong with test automation?
13. How would you describe testing activities?
14. Which testing activities might you want to automate?
15. Describe common problems of test automation.
16. What types of scripting techniques for test automation do you know?
17. What are principles of good testing scripts for automation?
18. What tools are available for support of testing during software development life cycle?
19. Can the activities of test case design be automated?
20. What are the limitations of automating software testing?
21. What skills are needed to be a good test automator?
22. How do you determine whether a tool works well with your existing system?


General questions:

1. What types of documents would you need for QA, QC, and Testing?
2. What did you include in a test plan?
3. Describe any bug you remember.
4. What is the purpose of the testing?
5. What do you like (not like) in this job?
6. What is quality assurance?
7. What is the difference between QA and testing?
8. How do you scope, organize, and execute a test project?
9. What is the role of QA in a development project?
10. What is the role of QA in a company that produces software?
11. Define quality for me as you understand it
12. Describe to me the difference between validation and verification.
13. Describe to me what you see as a process. Not a particular process, just the basics of having a process.
14. Describe to me when you would consider employing a failure mode and effect analysis.
15. Describe to me the Software Development Life Cycle, as you would define it.
16. What are the properties of a good requirement?
17. How do you differentiate the roles of Quality Assurance Manager and Project Manager?
18. Tell me about any quality efforts you have overseen or implemented. Describe some of the challenges you faced and how you overcame them.
19. How do you deal with environments that are hostile to quality change efforts?
20. In general, how do you see automation fitting into the overall process of testing?
21. How do you promote the concept of phase containment and defect prevention?
22. If you come onboard, give me a general idea of what your first overall tasks will be as far as starting a quality effort.
23. What kinds of testing have you done?
24. Have you ever created a test plan?
25. Have you ever written test cases or did you just execute those written by others?
26. What did you base your test cases on?
27. How do you determine what to test?
28. How do you decide when you have 'tested enough?'
29. How do you test if you have minimal or no documentation about the product?
30. Describe the basic elements you put in a defect report.
31. How do you perform regression testing?
32. At what stage of the life cycle does testing begin in your opinion?
33. How do you analyse your test results? What metrics do you try to provide?
34. Realising you won't be able to test everything - how do you decide what to test first?
35. Where do you get your expected results?
36. If automating - what is your process for determining what to automate and in what order?
37. If you were tasked to test an ATM, what items might your test plan include?

38. If you were given a program that will average student grades, what kinds of inputs would you use?
39. Tell me about the best bug you ever found.
40. What made you pick testing over another career?
41. What is the exact difference between Integration & System testing; give me examples with your project.
42. How did you go about testing a project?
43. When should testing start in a project? Why?
44. How do you go about testing a web application?
45. Difference between Black & White box testing
46. What is Configuration management? Tools used?
47. What do you plan to become after, say, 2-5 years (e.g., QA Manager)? Why?
48. Would you like to work in a team or alone, why?
49. Give me 5 strong & weak points of yours
50. Why do you want to join our company?
51. When should testing be stopped?
52. What sort of things would you put down in a bug report?
53. Who in the company is responsible for Quality?
54. Who defines quality?
55. What is an equivalence class?
56. Is "a fast database retrieval rate" a testable requirement?
57. Should we test every possible combination/scenario for a program?
58. What criteria do you use when determining when to automate a test or leave it manual?
59. When do you start developing your automation tests?
60. Discuss what test metrics you feel are important to publish in an organization.
61. Describe the role that QA plays in the software lifecycle.
62. What should Development require of QA?
63. What should QA require of Development?
64. How would you define a "bug"?
65. Give me an example of the best and worst experiences you've had with QA.
66. How does unit testing play a role in the development / software lifecycle?
67. Explain some techniques for developing software components with respect to testability.
68. Describe a past experience with implementing a test harness in the development of software.
69. Have you ever worked with QA in developing test tools? Explain the participation Development should have with QA in leveraging such test tools for QA use.
70. Give me some examples of how you have participated in Integration Testing.
71. How would you describe the involvement you have had with the bug-fix cycle between Development and QA?
72. What is unit testing?
73. Describe your personal software development process.
74. How do you know when your code has met specifications?
75. How do you know your code has met specifications when there are no specifications?
76. Describe your experiences with code analysers.
77. How do you feel about cyclomatic complexity?
78. Who should test your code?
79. How do you survive chaos?
80. What processes/methodologies are you familiar with?
81. What type of documents would you need for QA/QC/Testing?
82. How can you use technology to solve a problem?
83. What type of metrics would you use?
84. How do you determine whether a tool works well with your existing system?
85. What automated tools are you familiar with?
86. How well do you work with a team?
87. How would you ensure 100% coverage of testing?
88. How would you build a test team?
89. What problems have you faced, now or in the past? How did you solve them?
90. What will you do during your first day on the job?
91. What would you like to do five years from now?
92. Tell me about the worst boss you've ever had.
93. What are your greatest weaknesses?
94. What are your strengths?
95. What is a successful product?
96. What do you like about Windows?
97. What is good code?
98. Who are Kent Beck, Dr Grace Hopper and Dennis Ritchie?
99. What are the basic, core practices for a QA specialist?
100. What do you like about QA?
101. What has not worked well in your previous QA experience and what would you change?
102. How will you begin to improve the QA process?
103. What is the difference between QA and QC?
104. What is UML and how can it be used for testing?
105. What are CMM and CMMI? What is the difference?
106. What do you like about computers?
107. Do you have a favourite QA book? More than one? Which ones? And why.
108. What is the responsibility of programmers vs QA?




1. What is software quality assurance?
2. What is the value of a testing group? How do you justify your work and budget?
3. What is the role of the test group vis-à-vis documentation, tech support, and so forth?
4. How much interaction with users should testers have, and why?
5. How should you learn about problems discovered in the field, and what should you learn from those problems?
6. What are the roles of glass-box and black-box testing tools?
7. What issues come up in test automation, and how do you manage them?
8. What development model should programmers and the test group use?
9. How do you get programmers to build testability support into their code?
10. What is the role of a bug tracking system?
11. What are the key challenges of testing?
12. Have you ever completely tested any part of a product? How?
13. Have you done exploratory or specification-driven testing?
14. Should every business test its software the same way?
15. Discuss the economics of automation and the role of metrics in testing.
16. Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
17. When have you had to focus on data integrity?
18. What are some of the typical bugs you encountered in your last assignment?
19. How do you prioritize testing tasks within a project?
20. How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
21. When should you begin test planning?
22. When should you begin testing?
23. Do you know of metrics that help you estimate the size of the testing effort?
24. How do you scope out the size of the testing effort?
25. How many hours a week should a tester work?
26. How should your staff be managed? How about your overtime?
27. How do you estimate staff requirements?
28. What do you do (with the project tasks) when the schedule fails?
29. How do you handle conflict with programmers?
30. How do you know when the product is tested well enough?
31. What characteristics would you seek in a candidate for test-group manager?
32. What do you think the role of test-group manager should be? Relative to senior management?
Relative to other technical groups in the company? Relative to your staff?
33. How do your characteristics compare to the profile of the ideal manager that you just described?
34. How does your preferred work style work with the ideal test-manager role that you just described? What is different between the way you work and the role you described?
35. Who should you hire in a testing group and why?
36. What is the role of metrics in comparing staff performance in human resources management?
37. How do you estimate staff requirements?
38. What do you do (with the project staff) when the schedule fails?
39. Describe some staff conflicts you’ve handled.

  1. Why did you ever become involved in QA/testing?
  2. What is the testing lifecycle and explain each of its phases?
  3. What is the difference between testing and Quality Assurance?
  4. What is Negative testing?
  5. What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?
  6. What are two of your strengths that you will bring to our QA/testing team?
  7. How would you define Quality Assurance?
  8. What do you like most about Quality Assurance/Testing?
  9. What do you like least about Quality Assurance/Testing?
  10. What is the Waterfall Development Method and do you agree with all the steps?
  11. What is the V-Model Development Method and do you agree with this model?
  12. What is the Capability Maturity Model (CMM)? At what CMM level were the last few companies you worked?
  13. What is a "Good Tester"?
  14. Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?
  15. List 5 words that best describe your strengths.
  16. What are two of your weaknesses?
  17. What methodologies have you used to develop test cases?
  18. In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application, or is it enough to just test functionality associated with that module?
  19. Define each of the following and explain how each relates to the other: Unit, System, and Integration testing.
  20. Define Verification and Validation. Explain the differences between the two.
  21. Explain the differences between White-box, Gray-box, and Black box testing.
  22. How do you go about going into a new organization? How do you assimilate?
  23. Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.
  24. What is ISO 9000? Have you ever been in an ISO shop?
  25. When are you done testing?
  26. What is the difference between a test strategy and a test plan?
  27. What is ISO 9003? Why is it important?
  28. What are ISO standards? Why are they important?
  29. What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
  30. What is IEEE? Why is it important?
  31. Do you support automated testing? Why?
  32. We have a testing assignment that is time-driven. Do you think automated tests are the best solution?
  33. What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
  34. Are reusable test cases a big plus of automated testing? Explain why.
  35. How important is Change Management in today's computing environments?
  36. Do you think tools are required for managing change? Explain, and please list some tools/practices that can help you manage change.
  37. We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.
  38. When is a good time for system testing?
  39. Are regression tests required or do you feel there is a better use for resources?
  40. Our software designers use UML for modelling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would this mean more effort for the testers?
  41. Tell me about a difficult time you had at work and how you worked through it.
  42. Give me an example of something you tried at work but did not work out so you had to go at things another way.
  43. How can one file-compare future-dated output files from a changed program against the baseline run that used the current date for input, when the client does not want to mask dates on the output files to allow compares? Answer: rerun the baseline with input files future-dated by the same number of days as the future-dated run of the changed program, then file-compare the baseline's future-dated output against the changed program's future-dated output.
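The procedure in the last answer comes down to a byte-level comparison of the two future-dated output files. A minimal sketch in Python; the file names and record content here are hypothetical, standing in for the real program outputs:

```python
import filecmp
import os
import tempfile

def outputs_match(baseline_out: str, changed_out: str) -> bool:
    """Byte-by-byte comparison of two output files.

    shallow=False forces filecmp to compare file contents rather
    than relying on os.stat() metadata (size and modification time).
    """
    return filecmp.cmp(baseline_out, changed_out, shallow=False)

# Illustration with two small throwaway files (hypothetical content):
with tempfile.TemporaryDirectory() as d:
    base = os.path.join(d, "baseline_futuredated.out")
    chng = os.path.join(d, "changed_futuredated.out")
    for path in (base, chng):
        with open(path, "w") as f:
            f.write("ACCT 001 2035-01-15 100.00\n")
    print(outputs_match(base, chng))  # identical contents -> True
```

Both files must come from runs future-dated by the same number of days, as the answer states, so the dates embedded in the records line up without masking.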

Software QA and Testing FAQ's

What makes a good Software Test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of
the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy
are useful in maintaining a cooperative relationship with developers, and an ability to
communicate with both technical (developers) and non-technical (customers,
management) people is useful. Previous software development experience can be helpful
as it provides a deeper understanding of the software development process, gives the
tester an appreciation for the developers' point of view, and reduces the learning curve in
automated test tool programming. Judgement skills are needed to assess high-risk areas
of an application on which to focus testing efforts when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they
must be able to understand the entire software development process and how it can fit
into the business approach and goals of the organization. Communication skills and the
ability to understand various sides of issues are important. In organizations in the early
stages of implementing QA processes, patience and diplomacy are especially needed. An
ability to find problems as well as to see 'what's missing' is important for inspections and
reviews.
What makes a good QA or Test manager?
A good QA, test, or QA/Test(combined) manager should:
• be familiar with the software development process
• be able to maintain enthusiasm of their team and promote a positive atmosphere,
despite what is a somewhat 'negative' process (e.g., looking for or preventing
problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when
quality is insufficient or QA processes are not being adhered to
• have people judgement skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers,
managers, and customers.
• be able to run meetings and keep them focused
What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper, may be
embedded in code comments, etc.) QA practices should be documented such that they are
repeatable. Specifications, designs, business rules, inspection reports, configurations,
code changes, test plans, test cases, bug reports, user manuals, etc. should all be
documented in some form. There should ideally be a system for easily finding and
obtaining information and determining what documentation will have a particular piece
of information. Change management for documentation should be used if possible.
What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a large, complex
software project is to have poorly documented requirements specifications. Requirements
are the details describing an application's externally-perceived functionality and
properties. Requirements should be clear, complete, reasonably detailed, cohesive,
attainable, and testable. A non-testable requirement would be, for example, 'user-friendly'
(too subjective). A testable requirement would be something like 'the user must enter
their previously-assigned password to access the application'. Determining and
organizing requirements details in a useful and efficient way can be a difficult effort;
different methods are available depending on the particular project. Many books are
available that describe various approaches to this task.
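To see why the password requirement above counts as testable, it can be turned directly into automated checks with definite pass/fail outcomes. A minimal sketch, where `can_access` is a hypothetical stub standing in for the real authentication logic:

```python
# Hypothetical stub standing in for the real authentication logic.
def can_access(entered_password: str, assigned_password: str) -> bool:
    return entered_password == assigned_password

# Each check has a definite pass/fail outcome, which is what makes
# the requirement testable (unlike a subjective one such as 'user-friendly'):
assert can_access("s3cret-42", "s3cret-42")        # correct password: access granted
assert not can_access("wrong-guess", "s3cret-42")  # wrong password: access denied
assert not can_access("", "s3cret-42")             # empty password: access denied
print("password requirement checks pass")
```

A requirement like 'user-friendly' admits no such unambiguous checks, which is exactly the distinction the paragraph draws.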
Care should be taken to involve ALL of a project's significant 'customers' in the
requirements process. 'Customers' could be in-house personnel or out, and could include
end-users, customer acceptance testers, customer contract officers, customer
management, future software maintenance engineers, salespeople, etc. Anyone who could
later derail the project if their expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally,
the requirements are spelled out in a document with statements such as 'The product
shall.....'. 'Design' specifications should not be confused with 'requirements'; design
specifications should be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional
specification documents, in design documents, or in other documents at various levels of
detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by testers in order to properly plan and execute tests.
Without such documentation, there will be no clear-cut way to determine if a software
application is performing correctly.
'Agile' methods such as XP rely on close interaction and cooperation between
programmers and customers/end-users to iteratively develop requirements. In the XP
'test first' approach, developers create automated unit-testing code before the
application code, and these automated unit tests essentially embody the requirements.
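The 'test first' idea can be sketched in a few lines: the test is written before the function exists and acts as the requirement. The `parse_time` requirement below is a hypothetical example chosen for illustration, not taken from the XP literature:

```python
# Written FIRST: this test embodies the requirement
# "parse 'HH:MM' into total minutes". It fails (NameError)
# until parse_time() is implemented.
def test_parse_time():
    assert parse_time("00:00") == 0
    assert parse_time("01:30") == 90
    assert parse_time("23:59") == 1439

# Written SECOND: the minimal implementation driven by the test.
def parse_time(hhmm: str) -> int:
    hours, minutes = hhmm.split(":")
    return int(hours) * 60 + int(minutes)

test_parse_time()
print("test-first checks pass")
```

The test doubles as executable documentation of the requirement, which is why the FAQ says such tests "essentially embody the requirements".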
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
• Obtain requirements, functional design, and internal design specifications and
other necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes (such as release processes, change
processes, etc.)
• Determine project context, relative to the existing quality culture of the
organization and business, and how it might impact testing scope, approaches, and
methods.
• Identify application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
• Determine test approaches and methods - unit, integration, functional, system,
load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications,
etc.)
• Determine testware requirements (record/playback tools, coverage analyzers, test
tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, error classes
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes,
set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through
life cycle
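One of the steps above, determining input equivalence classes and boundary values, can be sketched concretely; the age-range requirement below is invented for illustration:

```python
# Hypothetical requirement: a form field accepts ages 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence classes: below range, in range, above range.
# Boundary values: just outside, on, and just inside each boundary.
test_data = [
    (17, False),  # just below lower boundary (invalid class)
    (18, True),   # lower boundary
    (19, True),   # just inside lower boundary
    (40, True),   # representative of the valid class
    (64, True),   # just inside upper boundary
    (65, True),   # upper boundary
    (66, False),  # just above upper boundary (invalid class)
]

for value, expected in test_data:
    assert is_valid_age(value) == expected, value
```

Seven values cover all three classes and all boundary neighbours, instead of blindly testing every age.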
What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way
to think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and
'how' of product validation. It should be thorough enough to be useful but not so thorough
that no one outside the test group will read it. The following are some of the items that
might be included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test
plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production
systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
What's a 'test case'?
• A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test
case should contain particulars such as test case identifier, test case name,
objective, test conditions/setup, input data requirements, steps, and expected
results.
• Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking
through the operation of the application. For this reason, it's useful to prepare test
cases early in the development cycle if possible.
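As a sketch, a test case with the particulars listed above can be captured as a simple record; all field values are hypothetical:

```python
# A test case as a plain data record; field names mirror the particulars
# listed above (identifier, name, objective, setup, inputs, steps, etc.).
test_case = {
    "id": "TC-042",                       # test case identifier (hypothetical)
    "name": "Login with valid credentials",
    "objective": "Verify a registered user can log in",
    "setup": "User 'demo' exists with password 'secret'",
    "input_data": {"username": "demo", "password": "secret"},
    "steps": [
        "Open the login page",
        "Enter username and password",
        "Click 'Log in'",
    ],
    "expected_result": "User is redirected to the dashboard",
}

# A reviewer (or a tool) can then check the test case is complete:
required = {"id", "name", "objective", "setup",
            "input_data", "steps", "expected_result"}
assert required <= set(test_case), "test case is missing required fields"
```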
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere. If
a problem-tracking system is in place, it should encapsulate these processes. A variety of
commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process:
• Complete information such that developers can understand the bug, get an idea of
its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if
the developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at
various stages. For instance, testers need to know when retesting is needed, developers
need to know when bugs are found and how to get the needed information, and
reporting/summary capabilities are needed for managers.
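A minimal sketch of such a tracking record, holding a subset of the fields above; the status names and severity scale are illustrative, not tied to any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    bug_id: str
    status: str = "New"          # e.g. 'New', 'Assigned', 'Released for Retest'
    application: str = ""
    version: str = ""
    summary: str = ""            # one-line bug description
    severity: int = 3            # 1 (critical) .. 5 (low)
    reproducible: bool = True
    assigned_to: str = ""
    history: list = field(default_factory=list)

    def change_status(self, new_status, by):
        # Recording who changed what supports the notification needs
        # described above (testers, developers, managers).
        self.history.append((self.status, new_status, by))
        self.status = new_status

bug = BugReport(bug_id="BR-101", summary="Crash on empty input", severity=1)
bug.change_status("Assigned", by="lead")
bug.change_status("Released for Retest", by="developer")
```

The `history` list is what lets a tracking process notify the right people at each stage.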
What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track:
code, requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting
whatever bugs or blocking-type problems initially show up, with the focus being on
critical bugs. Since this type of problem can severely affect schedules, and indicates
deeper problems in the software development process (such as insufficient unit testing or
insufficient integration testing, poor design, improper build or release procedures, etc.)
managers should be notified, and provided with some documentation as evidence of the
problem.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex,
and run in such an interdependent environment, that complete testing can never be done.
Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk
analysis is appropriate to most software development projects. This requires judgement
skills, common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
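These considerations can feed a simple prioritization scheme, e.g. scoring each area by impact times likelihood of failure; the feature names and scores below are made up:

```python
# Risk-based test prioritization sketch: score = impact x likelihood,
# both on a 1..5 scale (all names and scores are hypothetical).
features = [
    ("payment processing", 5, 4),   # (name, impact, likelihood of failure)
    ("report export",      2, 3),
    ("login",              4, 2),
    ("help pages",         1, 1),
]

ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, impact, likelihood in ranked:
    print(f"{name}: risk score {impact * likelihood}")
# Test the highest-scoring features first when time is short.
```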
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive
testing is still not justified, risk analysis is again needed and the same considerations as
described previously in 'What if there isn't enough time for thorough testing?' apply. The
tester might then do ad hoc testing, or write up a limited test plan based on the risk
analysis.
How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among
clients, data communications, hardware, and servers, especially in multi-tier systems.
Thus testing requirements can be extensive. When time is limited (as it usually is) the
focus should be on integration and system testing. Additionally, load/stress/performance
testing may be useful in determining client/server application limitations and capabilities.
There are commercial tools to assist with such testing.
How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser'
clients. Consideration should be given to the interactions between html pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web pages (such
as applets, javascript, plug-in applications), and applications that run on the server side
(such as cgi scripts, database interfaces, logging applications, dynamic page generators,
asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions
of each, small but sometimes significant differences between them, variations in
connection speeds, rapidly changing technologies, and multiple standards and protocols.
The end result is that testing for web sites can become a major ongoing effort. Other
considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time?),
and what kind of performance is required under such loads (such as web server
response time, database query response times). What kinds of tools will be needed
for performance testing (such as web load testing tools, other tools already in
house that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind
of connection speeds will they be using? Are they intra-organization (thus with
likely high connection speeds and similar browsers) or Internet-wide (thus with a
wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should
pages appear, how fast should animations, applets, etc. load and run)?
• Will downtime for server and content maintenance/upgrades be allowed? How
much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required
and what is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that
affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and
what are the requirements for maintaining, tracking, and controlling page content,
graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations
will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
• How extensive or customized are the server logging and reporting requirements;
are they considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
• Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so
that it's clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided
or generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end
pages.
• The page owner, revision date, and a link to a contact person or organization
should be included on each page.
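As one concrete example, the link-validation point above can be sketched by extracting every href from a page and checking it against a set of known pages; a real checker would issue HTTP requests, and the HTML snippet here is invented:

```python
from html.parser import HTMLParser

# Collect every <a href="..."> from a page so each link can be checked.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<a href="/home">Home</a> <a href="/missing">Gone</a>'
known_pages = {"/home", "/about"}   # stand-in for real HTTP checks

parser = LinkExtractor()
parser.feed(html)
dead = [link for link in parser.links if link not in known_pages]
print("dead links:", dead)   # -> dead links: ['/missing']
```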
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box
testing (where an understanding of the internal design of the application is unnecessary),
white-box testing can be oriented to the application's objects. If the application was
well-designed, this can simplify test design.
What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on
risk-prone projects with unstable requirements. It was created by Kent Beck, who described
the approach in his book 'Extreme Programming Explained'. Testing ('extreme testing') is
a core aspect of Extreme Programming. Programmers are expected to write unit and
functional test code first - before writing the application code. Test code is under source
control along with the rest of the code. Customers are expected to be an integral part of
the project team and to help develop scenarios for acceptance/black box testing.
Acceptance tests are preferably automated, and are modified and rerun for each of the
frequent development iterations. QA and test personnel are also required to be an integral
part of the project team. Detailed requirements documentation is not used, and frequent
re-scheduling, re-estimating, and re-prioritizing is expected. For more information, see
resources on XP and other 'agile' software development approaches (Scrum, Crystal, etc.).

Interview Questions On Bug Tracking

1. What are the different types of bugs we normally see in a project? Include the severity as well.

The Life Cycle of a bug in general context is:

Bugs are usually logged by the development team (while unit testing) and also by testers (while system or other types of testing).

So let me explain from a tester's perspective:

A tester finds a new defect/bug and logs it using a defect-tracking tool.

1. Its status is 'NEW', and it is assigned to the respective dev team (team lead or manager).
2. The team lead assigns it to a team member, so the status becomes 'ASSIGNED TO'.
3. The developer works on the bug, fixes it, and re-assigns it to the tester for testing. Now the status is 'RE-ASSIGNED'.
4. The tester checks whether the defect is fixed; if it is fixed, he changes the status to 'VERIFIED'.
5. If the tester has the authority (depends on the company), he can, after verifying, change the status to 'FIXED'. If not, the test lead can verify it and change the status to 'FIXED'.
6. If the defect is not fixed, he re-assigns the defect back to the dev team for re-fixing.

This is the life cycle of a bug.
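The life cycle above can be sketched as a small state machine; the state names follow the answer, and the allowed transitions are an illustrative subset:

```python
# Bug life-cycle states and an illustrative set of legal transitions.
TRANSITIONS = {
    "NEW": {"ASSIGNED"},
    "ASSIGNED": {"RE-ASSIGNED"},              # developer fixes, sends back to tester
    "RE-ASSIGNED": {"VERIFIED", "ASSIGNED"},  # verified, or reopened for re-fixing
    "VERIFIED": {"FIXED"},
}

def advance(current, target):
    # A tracking tool rejects transitions the process does not allow.
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

state = "NEW"
for step in ["ASSIGNED", "RE-ASSIGNED", "VERIFIED", "FIXED"]:
    state = advance(state, step)
assert state == "FIXED"
```

Encoding the transitions explicitly is what lets a defect-tracking tool prevent, say, a bug jumping straight from 'NEW' to 'FIXED'.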

1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Improper Service Levels (Control flow defects) - High
6. Interpreting Data Defects - High
7. Race Conditions (Compatibility and Intersystem defects)- High
8. Load Conditions (Memory Leakages under load) - High
9. Hardware Failures:- High

2. Top Ten Tips for Bug Tracking

1. A good tester will always try to reduce the repro steps to the minimal steps to reproduce; this is extremely helpful for the programmer who has to find the bug.

2. Remember that the only person who can close a bug is the person who opened it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug as fixed, won't fix, postponed, not repro, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, just bounce the emails back to them with a brief message: "please put this in the bug database. I can't keep track of emails."

7. If you're a tester, and you're having trouble getting programmers to use the bug database, just don't tell them about bugs - put them in the database and let the database email them.

8. If you're a programmer, and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they'll get the hint.

9. If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is also a great "unimplemented feature" database, too.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.

Manual Testing Interview Questions

What is Regression testing?
Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Regression testing is done in the following cases:
1. If the bugs reported in the previous build are fixed
2. If a new functionality is added
3. If the environment changes
Regression testing is done to ensure that the functionality which was working in the previous build was not disturbed due to the modifications in the build.
It is done to check that the code changes did not introduce any new bugs or disturb the previous functionality
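A minimal sketch of the idea: after a bug fix, the untouched baseline tests guard the previously working behaviour; the discount function and its rates are hypothetical:

```python
# The fix below adds input validation; the baseline tests confirm the
# fix did not disturb the previously working discount logic.
def discount(price, is_member):
    if price < 0:
        raise ValueError("price must be non-negative")  # the new fix
    return price * (0.9 if is_member else 1.0)

def regression_suite():
    # Baseline cases that passed before the fix:
    assert discount(100, is_member=True) == 90.0
    assert discount(100, is_member=False) == 100.0
    # New case covering the fix itself:
    try:
        discount(-1, is_member=True)
    except ValueError:
        pass
    else:
        raise AssertionError("negative price should be rejected")
    return "all regression tests passed"

print(regression_suite())
```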

When do you start developing your automation tests?
First, the application has to be tested manually. Automation development starts once manual testing is over and a baseline is established.

What is a successful product?
A bug-free product, meeting the expectations of the user, would make the product successful.

What will you do during the first day on the job?
Get acquainted with my team and the application.

Who should test your code?
QA Tester

How do we do regression testing?
Various automation testing tools can be used to perform regression testing like WinRunner, Rational Robot and Silk Test.

Why do we do regression testing?
In any application, new functionalities can be added, so the application has to be tested to see whether the added functionalities have affected the existing functionalities. Instead of retesting all the existing functionalities, the baseline scripts created for them can be rerun and tested.

In a calculator built specifically for an accountant, what major functionality are you going to test? Assume that all basic functions like addition, subtraction, etc. are supported.
Check the maximum number of digits it supports.
Check the memory functions.
Check the accuracy, allowing for truncation errors.

Difference between Load Testing & Stress Testing?
Load Testing: the application is tested within its normal limits to identify the load that a system can withstand. In load testing the number of users varies.
Stress Testing: stress tests are designed to confront programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
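A toy sketch of the distinction, driving a fake in-process service first within and then beyond an assumed capacity of 50 concurrent requests; the service and its limit are invented:

```python
from concurrent.futures import ThreadPoolExecutor

CAPACITY = 50  # assumed capacity of the hypothetical service

def handle_request(i):
    # Fake service: fails once the assumed capacity is exceeded.
    if i >= CAPACITY:
        raise RuntimeError("server overloaded")
    return "ok"

def run_test(n_requests):
    # Fire n_requests at the service concurrently and count failures.
    with ThreadPoolExecutor(max_workers=10) as pool:
        futures = [pool.submit(handle_request, i) for i in range(n_requests)]
        errors = 0
        for future in futures:
            try:
                future.result()
            except RuntimeError:
                errors += 1
    return errors

load_errors = run_test(40)    # load test: within normal limits
stress_errors = run_test(80)  # stress test: abnormal volume
```

Within the normal limit every request succeeds; beyond it the error count reveals how the system degrades, which is exactly what a stress test is after.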

If you have a shortage of time, how would you prioritize your testing?
1) Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Considerations can include:

•Which functionality is most important to the project's intended purpose?
•Which functionality is most visible to the user?
•Which functionality has the largest safety impact?
•Which functionality has the largest financial impact on users?
•Which aspects of the application are most important to the customer?
•Which aspects of the application can be tested early in the development cycle?
•Which parts of the code are most complex, and thus most subject to errors?
•Which parts of the application were developed in rush or panic mode?
•Which aspects of similar/related previous projects caused problems?
•Which aspects of similar/related previous projects had large maintenance expenses?
•Which parts of the requirements and design are unclear or poorly thought out?
•What do the developers think are the highest-risk aspects of the application?
•What kinds of problems would cause the worst publicity?
•What kinds of problems would cause the most customer service complaints?
•What kinds of tests could easily cover multiple functionalities?
•Which tests will have the best high-risk-coverage to time-required ratio?

2) We work on the major functionalities: the functionality most visible to the user, the functionality most important to the project, the aspects most important to the customer, and the highest-risk aspects of the application.

Who in the company is responsible for Quality?
Both development and quality assurance departments are responsible for the final product quality

2) Quality assurance teams, for both the development and testing sides.

Should we test every possible combination/scenario for a program?
Ideally, yes, we should test every possible scenario, but this may not always be possible. It depends on many factors, viz. deadlines, budget, complexity of the software, and so on. In such cases, we have to prioritize and thoroughly test the critical areas of the application.

2) Yes, we should test every possible scenario, but sometimes the same functionality occurs again and again, like a LOGIN WINDOW, so there is no need to test those functionalities again. There are some more factors:

Priority of the application.
Time or deadline.
Budget.

How will you describe testing activities?
Test planning, scripting, execution, defect reporting and tracking, and regression testing.

What is the purpose of the testing?
Testing provides information about whether or not a certain product meets the requirements.

When should testing be stopped?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

-Deadlines (release deadlines, testing deadlines, etc.)

-Test cases completed with certain percentage passed

-Test budget depleted

-Coverage of code/functionality/requirements reaches a specified point

-Bug rate falls below a certain level

-Beta or alpha testing period ends

Do you have a favorite QA book? Why?
Effective Methods for Software Testing - Perry, William E.

It covers the whole software lifecycle, starting with testing the project plan and estimates and ending with testing the effectiveness of the testing process. The book is packed with checklists, worksheets and N-step procedures for each stage of testing.

What are the roles of glass-box and black-box testing tools?
Glass-box testing also called as white-box testing refers to testing, with detailed knowledge of the modules internals. Thus these tools concentrate more on the algorithms, data structures used in development of modules. These tools perform testing on individual modules more likely than the whole application. Black-Box testing tools refer to testing the interface, functionality and performance testing of the system module and the whole system.

What is the value of a testing group? How do you justify your work and budget?
All software products contain defects/bugs, despite the best efforts of their development teams. It is important for an outside party (one who is not the developer) to test the product from a viewpoint that is more objective and representative of the product user.
The testing group tests the software from the requirements point of view, i.e., what is required by the user. A tester's job is to examine a program and see if it does not do what it is supposed to do, and also to see if it does what it is not supposed to do.

At what stage of the SDLC does testing begin in your opinion?
QA process starts from the second phase of the Software Development Life Cycle, i.e., Define the System. Actual product testing will be done in the Test the System phase (Phase 5). During this phase the test team will verify the actual results against the expected results.

Explain the software development lifecycle.
There are seven stages of the software development lifecycle

1.Initiate the project – The users identify their Business requirements.

2.Define the project – The software development team translates the business requirements into system specifications and put together into System Specification Document.

3.Design the system – The System Architecture Team designs the system and writes the Functional Design Document. During the design phase general solutions are hypothesized, and data and process structures are organized.

4.Build the system – The system specifications and design documents are given to the development team, which codes the modules by following the requirements and design documents.

5.Test the system - The test team develops the test plan following the requirements. The software is built and installed on the test platform after developers have completed development and unit testing. The testers test the software by following the test plan.

6.Deploy the system – After the user-acceptance testing and certification of the software, it is installed on the production platform. Demos and training are given to the users.

7.Support the system - After the software is in production, the maintenance phase of the life cycle begins. During this phase the development team works with the development documentation staff to modify and enhance the application, and the test team works with the test documentation staff to verify and validate the changes and enhancements to the application software.

FREQUENTLY ASKED QUESTIONS

1) What are your roles and responsibilities as a tester?

2) Explain Software development life cycle

3) What is a master test plan? What does it contain? Who is responsible for writing it?

4) What is a test plan? Who is responsible for writing it? What does it contain?

5) What different types of test cases did you write in the test plan?

6) Why is the test plan a controlled document?

7) What information do you need to formulate a test plan?

8) What template did you use to write the test plan?

9) What is MR?

10) Why do you write an MR?

11) What information does it contain?

12) Give me a few examples of the MRs you wrote.

13) What is White Box/Unit testing?

14) What is integration testing?

15) What is black box testing?

16) What knowledge do you require to do white box, integration, and black box testing?

17) How many testers were in the test team?

18) What was the test team hierarchy?

19) Which MR tool did you use to write MRs?

20) What is regression testing?

21) Why do we do regression testing?

22) How do we do regression testing?

23) What are the different automation tools you know?

24) What is difference between regression automation tool and performance automation tool?

25) What is client server architecture?

26) What is three tier and multi-tier architecture?

27) What is Internet?

28) What is intranet?

29) What is extranet?

30) How is an intranet different from client-server?

31) What is different about Web Testing than Client server testing?

32) What is byte code file?

33) What is an Applet?

34) How is an applet different from an application?

35) What is Java Virtual Machine?

36) What is ISO-9000?

37) What is QMO?

38) What are the different phases of software development cycle?

39) How do you help developers track the faults in the software?

40) What are positive scenarios?

41) What are negative scenarios?

42) What are individual test cases?

43) What are workflow test cases?

44) If we have executed individual test cases, why do we do workflow scenarios?

45) What is object oriented model?

46) What is procedural model?

47) What is an object?

48) What is class?

49) What is encapsulation? Give one example

50) What is inheritance? Give example.

51) What is Polymorphism? Give example.

52) What are the different types of MRs?

53) What are test metrics?

54) What is the use of metrics?

55) How do we decide which automation tool we are going to use for regression testing?

56) If you have a shortage of time, how would you prioritize your testing?

57) What is the impact of the environment on the actual results of performance testing?

58) What are stress testing, performance testing, security testing, recovery testing, and volume testing?

59) What criteria will you follow to assign severity and a due date to an MR?

60) What is user acceptance testing?

61) What is manual testing and what is automated testing?

62) What are build, version, and release.

63) What are the entrance and exit criteria in the system test?

64) What are the roles of a Test Team Leader?

65) What are the roles of a Sr. Test Engineer?

66) What are the roles of a QA Analyst/QA Tester?

67) How do you decide what functionalities of the application are to be tested?

68) If there are no requirements, how will you write your test plan?

69) What is smoke testing?

70) What is soak testing?

71) What is a pre-condition data?

72) What are the different documents in QA?

73) How do you rate yourself in software testing?

74) With all the skills, do you prefer to be a developer or a tester? And Why?

75) What are the best web sites that you frequently visit to upgrade your QA skills?

76) Do the words “Prevention” and “Detection” sound familiar? Explain.

77) Is defect resolution a technical skill or interpersonal skill from QA view point?

78) Can you automate all the test scripts? Explain

79) What is End to End business logic testing?

80) Explain to me about a most critical defect you found in your last project.


What is integration testing, and how will you execute it?
A. Integrated System Testing (IST) is a systematic technique for validating the construction of the overall software structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and test the overall software structure that has been dictated by design. IST can be done either as top-down integration (stubs) or bottom-up integration (drivers).
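The stub side of top-down integration can be sketched in Python; the rate-conversion modules below are purely illustrative stand-ins, not part of any real system:

```python
# A minimal sketch of top-down integration testing using a stub.
# "fetch_rate_stub" stands in for a lower-level module that is not yet
# built; the stub lets the higher-level "convert" logic be tested early.

def fetch_rate_stub(currency):
    # Stub: returns canned data in place of the real rate service.
    rates = {"EUR": 0.9, "INR": 83.0}
    return rates[currency]

def convert(amount, currency, fetch_rate=fetch_rate_stub):
    # Higher-level module under test; the dependency is injected.
    return round(amount * fetch_rate(currency), 2)

print(convert(100, "EUR"))  # exercises the integrated path via the stub
```

Once the real rate module exists, the stub is replaced and the same test is re-run against the genuine integration.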

Suppose there are 1000 bugs and only 10 days to go before the product release. The developers say they cannot all be fixed within this period; what will you do?
A. In this case, the most critical bugs should be fixed first, such as Severity 1 & 2 bugs; the rest of the bugs can be fixed in the next release. Ultimately, it depends on the business people.
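A toy sketch of that triage, with hypothetical bug records:

```python
# Hypothetical sketch: with many bugs and limited time, split by severity
# so Severity 1 & 2 bugs are fixed first and the rest are deferred.

bugs = [
    {"id": 101, "severity": 3},
    {"id": 102, "severity": 1},
    {"id": 103, "severity": 2},
    {"id": 104, "severity": 4},
]

fix_now  = [b for b in bugs if b["severity"] <= 2]   # must fix before release
deferred = [b for b in bugs if b["severity"] > 2]    # move to next release

print([b["id"] for b in sorted(fix_now, key=lambda b: b["severity"])])
# → [102, 103]
```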

In Sp testing, don't u think, u r doing Unit testing.
A. If we look in the developer's pointer of view, then yes, it is a kind of unit testing. But from a tester point of view, the tester tests the Store proc. in more detail than a developer.

What is regression testing, and how does it start and end? Assume that in one module you found a bug and sent it to the developers to fix; after the bug is fixed, how will you do the regression testing, and how will you end it?

A. Regression testing is re-testing unchanged segments of the application system. It normally involves re-running tests that have been previously executed, to ensure that the same results can be achieved currently as were achieved when the segment was last tested. For example, the tester finds a bug in a module and sends that part to the developers to fix. After the fix, the module comes back to the tester, who tests it again to confirm the reported bugs are really fixed. The tester then has to check that the fix has not inadvertently introduced new bugs in other modules, so the related modules must be regression tested as well.
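The re-run idea can be illustrated with a tiny hand-rolled suite; the `discount` function and its cases are invented for the example:

```python
# Sketch of a regression suite: previously passing tests are re-run
# after a fix to confirm old behaviour still holds. Names are illustrative.

def discount(price, percent):
    # Fixed module: the (hypothetical) bug report was about percent == 0.
    return price - price * percent / 100

regression_suite = [
    (discount, (200, 10), 180.0),  # previously passing case
    (discount, (200, 0),  200.0),  # case that exposed the fixed bug
    (discount, (50, 50),  25.0),   # previously passing case
]

for func, args, expected in regression_suite:
    result = func(*args)
    status = "PASS" if result == expected else "FAIL"
    print(f"{func.__name__}{args}: {status}")
```

Regression ends when every previously passing case in the suite passes again alongside the new test for the fixed bug.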

Understandability
The more information we have, the smarter we will test.

•The design is well understood
•Dependencies between internal, external, and shared components are well understood.
•Changes to the design are communicated.
•Technical documentation is instantly accessible
•Technical documentation is well organized
•Technical documentation is specific and detailed
•Technical documentation is accurate

Stability
The fewer the changes, the fewer the disruptions to testing

•Changes to the software are infrequent
•Changes to the software are controlled
•Changes to the software do not invalidate existing tests
•The software recovers well from failures

Simplicity
The less there is to test, the more quickly it can be tested

•Functional simplicity
•Structural simplicity
•Code simplicity

Decomposability
By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be performed.

•The software system is built from independent modules
•Software modules can be tested independently

Controllability
The better the software is controlled, the more the testing can be automated and optimized.

•All possible outputs can be generated through some combination of input
•All code is executable through some combination of input
•Software and hardware states can be controlled directly by testing
•Input and output formats are consistent and structured
•Tests can be conveniently specified, automated, and reproduced.

Observability
What is seen is what is tested

•Distinct output is generated for each input
•System states and variables are visible or queriable during execution
•Past system states and variables are visible or queriable ( e.g., transaction logs)
•All factors affecting the output are visible
•Incorrect output is easily identified
•Incorrect input is easily identified
•Internal errors are automatically detected through self-testing mechanisms
•Internal errors are automatically reported
•Source code is accessible

Software Testing Requirements

Software testing is not an activity to take up only when the product is ready. Effective testing begins with a proper plan starting from the user requirements stage itself. Software testability is the ease with which a computer program can be tested. Metrics can be used to measure the testability of a product. The requirements for effective testing are given in the following sub-sections.

Testing Principles
The basic principles for effective software testing are as follows:

•A good test case is one that has a high probability of finding an as-yet undiscovered error.
•A successful test is one that uncovers an as-yet-undiscovered error.
•All tests should be traceable to the customer requirements
•Tests should be planned long before testing begins
•Testing should begin “ in the small” and progress towards testing “in the large”
•Exhaustive testing is not possible

Testing Objectives
Testing is a process of executing a program with the intent of finding an error.
Software testing is a critical element of software quality assurance and represents the ultimate review of system specification, design and coding. Testing is the last chance to uncover the errors / defects in the software and facilitates delivery of quality system.

Who will attend the User Acceptance Tests?
The MIS Development Unit is working with relevant Practitioner Groups and managers to identify the people who can best contribute to system testing. Most of those involved in testing will also have been involved in earlier discussions and decision making about the system set-up. All users will receive basic training to enable them to contribute effectively to the test.

What are the objectives of a User Acceptance Test?
Objectives of the User Acceptance Test are for a group of key users to:
•Validate system set-up for transactions and user access
•Confirm use of system in performing business processes
•Verify performance on business critical functions
•Confirm integrity of converted and additional data, for example values that appear in a look-up table
•Assess and sign off go-live readiness

What does the User Acceptance Test cover?
The scope of each User Acceptance Test will vary depending on which business process we are testing. In general however, all tests will cover the following broad areas:
•A number of defined test cases using quality data to validate end-to-end business processes
•A comparison of actual test results against expected results
•A meeting/discussion forum to evaluate the process and facilitate issue resolution

What is a User Acceptance Test?
A User Acceptance Test is:
•A chance to completely test business processes and software
•A scaled-down or condensed version of the system
•The final UAT for each module will be the last chance to perform the above in a test situation

What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

What can be done if requirements are changing continuously?
A common problem and a major headache.

•Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.

•It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.

•If the code is well-commented and well-documented this makes changes easier for the developers.

•Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.

•The project's initial schedule should allow for some extra time commensurate with the possibility of changes.

•Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.

•Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.

•Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.

•Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes.

•Try to design some flexibility into automated test scripts.

•Focus initial automated testing on application aspects that are most likely to remain unchanged.

•Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

•Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)

•Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).

What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.)

Considerations can include:

•Which functionality is most important to the project's intended purpose?

•Which functionality is most visible to the user?

•Which functionality has the largest safety impact?

•Which functionality has the largest financial impact on users?

•Which aspects of the application are most important to the customer?

•Which aspects of the application can be tested early in the development cycle?

•Which parts of the code are most complex, and thus most subject to errors?

•Which parts of the application were developed in rush or panic mode?

•Which aspects of similar/related previous projects caused problems?

•Which aspects of similar/related previous projects had large maintenance expenses?

•Which parts of the requirements and design are unclear or poorly thought out?

•What do the developers think are the highest-risk aspects of the application?

•What kinds of problems would cause the worst publicity?

•What kinds of problems would cause the most customer service complaints?

•What kinds of tests could easily cover multiple functionalities?

•Which tests will have the best high-risk-coverage to time-required ratio?
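One common way to act on such considerations is a simple likelihood-times-impact score; the areas and ratings below are hypothetical:

```python
# Hypothetical risk-analysis sketch: score each area by likelihood of
# failure and business impact, then focus testing on the highest scores.

areas = {
    "payment processing": (4, 5),   # (likelihood 1-5, impact 1-5)
    "report export":      (2, 2),
    "login":              (3, 5),
    "help pages":         (1, 1),
}

ranked = sorted(areas.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk score {likelihood * impact}")
```

Testing effort is then allocated from the top of the ranking down until the schedule runs out.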

What steps are needed to develop and run software tests?
The following are some of the steps to consider:

•Obtain requirements, functional design, and internal design specifications and other necessary documents.

•Obtain budget and schedule requirements

•Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)

•Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests

•Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

•Determine test environment requirements (hardware, software, communications, etc.)

•Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

•Determine test input data requirements

•Identify tasks, those responsible for tasks, and labor requirements

•Set schedule estimates, timelines, milestones

•Determine input equivalence classes, boundary value analyses, error classes

•Prepare test plan document and have needed reviews/approvals

•Write test cases

•Have needed reviews/inspections/approvals of test cases

•Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data

•Obtain and install software releases

•Perform tests

•Evaluate and report results

•Track problems/bugs and fixes

•Retest as needed

•Maintain and update test plans, test cases, test environment, and testware through life cycle.

What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.

What makes a good QA or Test manager?
A good QA, test, or QA/Test(combined) manager should:

•be familiar with the software development process.

•be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems).

•be able to promote teamwork to increase productivity.

•be able to promote cooperation between software, test, and QA engineers.

•have the diplomatic skills needed to promote improvements in QA processes.

•have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to.

•have people judgment skills for hiring and keeping skilled personnel.

•be able to communicate with technical and non-technical people, engineers, managers, and customers.

•be able to run meetings and keep them focused.

What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

Why do we need to Test?
Defects can exist in software, as it is developed by humans, who can make mistakes during development. However, it is the primary duty of a software vendor to ensure that the software delivered does not have defects and that the customer's day-to-day operations are not affected. This can be achieved by rigorously testing the software. The most common origins of software bugs are:

• Poor understanding and incomplete requirements
• Unrealistic schedule
• Fast changes in requirements
• Too many assumptions and complacency

What will the test cases for product testing look like? Provide an example of a test plan.
For product testing, the test plan includes more rigorous testing, since most of these products are off-the-shelf CD purchases or internet downloads.

Some of the common parameters in Testing must include
-------------------------------------------------------
1) Testing on Different Operating Systems
2) Installations done from CD ROM Drives with different machine configurations
3) Installations done from CD ROM Drives with different machine configurations with different versions of Browsers and Software Service Packs
4) LICENSE KEY functionality
5) Eval Version checks and Full Version checks with reference to eval keys that would need to be processed.

1. What do we normally check for in database testing?
In DB testing we need to check for:
1. Field size validation
2. Check constraints
3. Whether indexes are created (for performance-related issues)
4. Stored procedures
5. Whether the field size defined in the application matches that in the DB
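Checks 1, 2 and 5 can be automated; here is a minimal sketch using Python's built-in sqlite3 module, with an assumed `employee` table and an assumed application-side limit of 50 characters:

```python
# Sketch of automated database checks using the stdlib sqlite3 module;
# the "employee" table and the size limits are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, "
             "name TEXT NOT NULL, age INTEGER CHECK (age >= 18))")

# Check constraint: an under-age row must be rejected.
try:
    conn.execute("INSERT INTO employee (name, age) VALUES ('Kim', 12)")
    print("CHECK constraint FAILED to fire")
except sqlite3.IntegrityError:
    print("CHECK constraint OK")

# Field-size validation: compare the application's limit with the DB's.
app_name_limit = 50                       # limit enforced by the UI (assumed)
conn.execute("INSERT INTO employee (name, age) VALUES (?, ?)", ("A" * 50, 30))
stored = conn.execute("SELECT length(name) FROM employee").fetchone()[0]
print("stored length:", stored)           # should match app_name_limit
```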

2. What is Database testing?
Database testing basically includes the following:
1) Data validity testing
2) Data integrity testing
3) Performance related to the database
4) Testing of procedures, triggers and functions

For data validity testing you should be good at SQL queries.
For data integrity testing you should know about referential integrity and the different constraints.
For performance-related testing you should have an idea of the table structure and design.
For testing procedures, triggers and functions you should be able to understand them.

3. How do you test a database manually? Explain with an example.
Observe whether operations performed on the front-end are reflected on the back-end.
The approach is as follows:
While adding a record through the front-end, check on the back-end whether the addition is reflected.
Do the same for delete and update.
Ex: Enter an employee record into the database through the front-end and check manually whether the record has been added to the back-end.
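The same add-and-verify approach, sketched with sqlite3 standing in for both the front-end action and the back-end query (the table and helper function are illustrative):

```python
# Minimal sketch of the approach above: perform the "front-end" add,
# then query the back-end directly to confirm the record landed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT)")

def add_employee_via_ui(name):
    # Stand-in for the front-end action (assumed helper).
    conn.execute("INSERT INTO employee (name) VALUES (?)", (name,))

add_employee_via_ui("Ravi")

# Back-end check: does the record actually exist?
row = conn.execute("SELECT name FROM employee WHERE name = 'Ravi'").fetchone()
print("record added" if row else "record missing")
```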

1. What criteria would you use to select Web transactions for load testing?
This again comes from the voice of the customer, i.e., the most commonly used transactions of the application. We cannot load test all transactions, so we need to identify the business-critical transactions; this can be done by talking to the business users.

2. For what purpose are virtual users created?
Virtual users are created to emulate real users.

3. Why is it recommended to add verification checks to all your scenarios?
Verification checks are used in the scenarios to verify the functional flow.

4. In what situation would you want to parameterize a text verification check?
A text verification check should be parameterized when the expected text varies with the input data, for example when the page echoes back a value (such as the user name) that was submitted from a parameterized field; the check must then use the same parameter as the input.

5. Why do you need to parameterize fields in your virtual user script?
Consider, for example, a test that inserts a record into a table with a primary key field. The recorded vuser script tries to enter the same record into the table for each of the many vusers, and fails due to the integrity constraint. In that situation we definitely need parameterization.

6. What are the reasons why parameterization is necessary when load testing the Web server and the database server?
Parameterization is done to check how your application performs the same operation with different data. In LoadRunner it is necessary when a single user refers to the same page several times, and similarly in the case of the database server.
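A minimal illustration of the idea, outside any particular tool; the table and naming are assumptions:

```python
# Sketch of why parameterization matters: each virtual user must insert
# a unique key, so the script substitutes a fresh value per iteration.
import itertools

unique_ids = itertools.count(start=1000)   # parameter source

def vuser_script(vuser):
    emp_id = next(unique_ids)              # parameterized field
    return f"INSERT INTO employee (emp_id, name) VALUES ({emp_id}, 'vuser{vuser}')"

# Without parameterization every vuser would replay the same emp_id and
# violate the primary-key constraint; with it, each statement is unique.
for vuser in range(3):
    print(vuser_script(vuser))
```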

7. How can data caching have a negative effect on load testing results?
Data caching can have a negative effect on load testing results: cached responses are served faster than freshly processed ones, so measured response times look better than real-world performance. Caching behaviour can be altered according to the requirements of the scenario in the run-time settings.

8. What usually indicates that your virtual user script has dynamic data that is dependent on your parameterized fields?
Use the extended logging option of reporting.

9. What are the benefits of creating multiple actions within any virtual user script?
Reusability, Repeatability, Reliability.

10. Load Testing - What should be analyzed?
To determine the performance of the system, the following objectives should be measured:
1) Response time: the time in which the system responds to a transaction, i.e., the interval between submission of the request and receipt of the response.
2) Think time: the time a user pauses between transactions.

11. What is the difference between Load testing and Performance Testing?
Performance testing verifies load, volume and response time as defined by requirements, while load testing is testing an application under heavy load to determine at what point the system's response time degrades.
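The distinction can be illustrated with a toy model in which response time is measured per request (performance) and then observed as simulated load grows (load); the server behaviour here is invented:

```python
# Illustrative sketch: measure response time per request, then watch how
# it degrades as the simulated load grows. The "server" is a toy model
# whose response time is assumed to grow with the number of users.
import time

def fake_server(concurrent_users):
    # Toy model: response time grows linearly with load (assumption).
    time.sleep(0.001 * concurrent_users)

for users in (1, 10, 50):
    start = time.perf_counter()
    fake_server(users)
    elapsed = time.perf_counter() - start
    print(f"{users:>3} users -> response {elapsed:.3f}s")
```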

When do you start developing your automation tests?
First, the application has to be tested manually. Once manual testing is over and a baseline is established, automation test development can start.

What is a successful product?
A bug-free product that meets the expectations of the user would make the product successful.

TestCases

Write Test Cases for testing ATM machine, Coffee Blending Machine, Telephone Handset?

Here the test cases should be presented in an organized way.

Coffee Machine Test Cases

1. Verify the coffee machine works properly by switching ON the power supply.
2. Verify the coffee machine when the power supply is improper.
3. Verify that all buttons on the machine are visible.
4. Verify the indicator light shows the machine is turned ON after switching on the power supply.
5. Verify the machine when there is no water.
6. Verify the machine when there is no coffee powder.
7. Verify the machine when there is no milk.
8. Verify the machine when there is no sugar.
9. Verify the machine operation when it is empty.
10. Verify the machine operation when all the ingredients are up to the capacity level.
11. Verify the machine operation when the water quantity is less than its limit.
12. Verify the machine operation when the milk quantity is less than its capacity limit.
13. Verify the machine operation when the coffee powder is less than its capacity limit.
14. Verify the machine operation when the sugar available is less than its capacity limit.
15. Verify the machine operation when a metal piece is stuck inside the machine.
16. Verify the machine by pressing the coffee button and check that it pours coffee with the appropriate mixture and taste.
17. Verify the machine by pressing the tea button and check that it pours tea with the appropriate mixture and taste.
18. Verify that it fills the coffee cup with the appropriate quantity.
19. Verify the machine operation within seconds after pouring milk, sugar, water, etc.; it should display a message.
20. Verify the operation of all the buttons.
21. Verify the machine operation by pressing the buttons one after the other.
22. Verify the machine operation by pressing two buttons at a time.
23. Verify the machine operation at the time of power fluctuations.
24. Verify the machine operation when all the ingredients are overloaded.
25. Verify the machine operation when one of the ingredients is overloaded and the others are up to the limit.

What are negative scenarios?
Testing to see that the application does not do what it is not supposed to do.

What are positive scenarios?
Testing to see whether the application is doing what it is supposed to do.
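Both scenario types in miniature, for an assumed rule that usernames must be non-empty:

```python
# Toy example of the two scenario types, for a function that is supposed
# to accept only non-empty usernames (assumed rule, not from the text).

def is_valid_username(name):
    return isinstance(name, str) and name.strip() != ""

# Positive scenario: the application does what it should.
assert is_valid_username("asha") is True

# Negative scenarios: the application refuses what it should refuse.
assert is_valid_username("") is False
assert is_valid_username("   ") is False
print("positive and negative scenarios passed")
```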

In a web page, there are two text boxes (one for a Name field, another for a Telephone no.), supported by "Save" & "Cancel" buttons. Derive some test cases.
What more information do you need?
Here is a sample list of questions that you can ask:
Field validation, i.e. alphanumeric for Name and numeric for Telephone Number
Enabled/disabled state
Focus
Boundary conditions (i.e. what are the max lengths for name and telephone no.?)
Field size
GUI standards for the controls

Some test cases can be as follows (they should be presented in an organized way):
Whether it accepts a valid name entry.
Whether it accepts a valid telephone no. entry.
Whether it rejects an over-long telephone no., etc.
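A sketch of how such cases might be automated; the length limits (30 characters for the name, 10 digits for the phone) are assumptions, not given in the question:

```python
# Hedged sketch of derived test cases for the Name and Telephone fields;
# the limits NAME_MAX and PHONE_LEN are illustrative assumptions.
import re

NAME_MAX, PHONE_LEN = 30, 10

def validate(name, phone):
    ok_name  = bool(re.fullmatch(r"[A-Za-z ]{1,%d}" % NAME_MAX, name))
    ok_phone = bool(re.fullmatch(r"\d{%d}" % PHONE_LEN, phone))
    return ok_name and ok_phone

cases = [
    ("Asha Rao", "9876543210", True),    # valid name, valid number
    ("Asha1",    "9876543210", False),   # digits in name
    ("Asha Rao", "98765",      False),   # phone too short
    ("A" * 31,   "9876543210", False),   # name over the boundary
]
for name, phone, expected in cases:
    assert validate(name, phone) == expected
print("all field-validation cases passed")
```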

What are individual test cases and workflow test cases? Why do we do workflow scenarios?
An individual test case is one that covers a single feature or requirement. However, it is important that related sequences of features be tested as well, as these correspond to units of work that users will typically perform. It is important for the system tester to become familiar with what users intend to do with the product and how they intend to do it. Such testing can reveal errors that might not ordinarily be caught otherwise. For example, while each operation in a series might produce the correct results, it is possible that intermediate results get lost or corrupted between operations.

How do you determine what to test?
Depending upon the User Requirement document.

Have you ever written test cases or did you just execute those written by others?
Yes, I was involved in preparing and executing test cases in all the projects.

What are the properties of a good requirement?
Understandable, Clear, Concise, Total Coverage of the application

What type of documents do you need for QA, QC and testing?
Following is the list of documents required by QA and QC teams
Business requirements
SRS
Use cases
Test plan
Test cases

What is a good test case?
Accurate - tests what it’s designed to test
Economical - no unnecessary steps
Repeatable, reusable - keeps on going
Traceable - to a requirement
Appropriate - for test environment, testers
Self standing - independent of the writer
Self cleaning - picks up after itself

How to Write Better Test Cases
Test cases and software quality
Anatomy of a test case
Improving testability
Improving productivity
The seven most common mistakes
Case study

What's a 'test case'?
•A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

•Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

How will you check that your test cases covered all the requirements?
By using a traceability matrix.
A traceability matrix is a matrix showing the relationship between the requirements and the test cases.
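In miniature, a traceability matrix can be as simple as a mapping that also exposes uncovered requirements (the IDs are illustrative):

```python
# Tiny sketch of a traceability matrix: map requirements to the test
# cases that cover them and flag any requirement left uncovered.

matrix = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-03"],
    "REQ-03": [],                 # no test case yet
}

uncovered = [req for req, tcs in matrix.items() if not tcs]
print("uncovered requirements:", uncovered)   # → ['REQ-03']
```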

For a triangle (the sum of any two sides must be greater than the third side), what is the minimal number of test cases required?
The answer is 3:

1. Measure all sides of the triangle.
2. Add the smallest and second-largest sides of the triangle and store the result as Res.
3. Compare Res with the largest side of the triangle.
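The three steps above, expressed as code with the strict triangle inequality (the sum of the two smaller sides must strictly exceed the largest):

```python
# The triangle-validity check from the three steps above.

def is_triangle(a, b, c):
    x, y, z = sorted((a, b, c))   # step 1: order the measured sides
    return x + y > z              # steps 2-3: compare Res with largest side

# The minimal three cases: valid, degenerate (sum equal), and invalid.
assert is_triangle(3, 4, 5) is True
assert is_triangle(1, 2, 3) is False   # 1 + 2 == 3, degenerate
assert is_triangle(1, 1, 5) is False
print("triangle test cases passed")
```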

Test Plan

What is UML and how it is used for testing?
The Unified Modeling Language (UML) is the industry-standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems. It simplifies the complex process of software design, making a "blueprint" for construction. UML state charts provide a solid basis for test generation in a form that can be easily manipulated. This technique includes coverage criteria that enable highly effective tests to be developed. A tool has been developed that uses UML state charts produced by Rational Software Corporation's Rational Rose tool to generate test data.

What is good code?
These are some important qualities of good code
Cleanliness: Clean code is easy to read; this lets people read it with minimum effort so that they can understand it easily.

Consistency: Consistent code makes it easy for people to understand how a program works; when reading consistent code, one subconsciously forms a number of assumptions and expectations about how the code works, so it is easier and safer to make modifications to it.

Extensibility: General-purpose code is easier to reuse and modify than very specific code with lots of hard coded assumptions. When someone wants to add a new feature to a program, it will obviously be easier to do so if the code was designed to be extensible from the beginning.

Correctness: Finally, code that is designed to be correct lets people spend less time worrying about bugs and more time enhancing the features of a program.

What are the entrance and exit criteria in the system test?
Entrance and exit criteria of each testing phase is written in the master test plan.

Entrance Criteria:
-Integration exit criteria have been successfully met.

-All installation documents are completed.

-All shippable software has been successfully built

-System test plan is baselined by completing the walkthrough of the test plan.

-Test environment should be setup.

-All severity 1 MR’s of integration test phase should be closed.

Exit Criteria:
-All the test cases in the test plan should be executed.

-All MR’s/defects are either closed or deferred.

-Regression testing cycle should be executed after closing the MR’s.

-All documents are reviewed, finalized and signed-off.

What is master test plan? What it contains? Who is responsible for writing it?
OR
What is a test plan? Who is responsible for writing it? What it contains.
OR
What's a 'test plan'? What did you include in a test plan?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

•Title

•Identification of software including version/release numbers

•Revision history of document including authors, dates, approvals

•Table of Contents

•Purpose of document, intended audience

•Objective of testing effort

•Software product overview

•Relevant related document list, such as requirements, design documents, other test plans, etc.

•Relevant standards or legal requirements

•Traceability requirements

•Relevant naming conventions and identifier conventions

•Overall software project organization and personnel/contact-info/responsibilities

•Test organization and personnel/contact-info/responsibilities

•Assumptions and dependencies

•Project risk analysis

•Testing priorities and focus

•Scope and limitations of testing

•Test outline - a decomposition of the test approach by test type, feature, functionality, process, system,
module, etc. as applicable

•Outline of data input equivalence classes, boundary value analysis, error classes

•Test environment - hardware, operating systems, other required software, data configurations, interfaces
to other systems

•Test environment validity analysis - differences between the test and production systems and their
impact on test validity.

•Test environment setup and configuration issues

•Software migration processes

•Software CM processes

•Test data setup requirements

•Database setup requirements

•Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that
will be used to help describe and report bugs

•Discussion of any specialized software or hardware tools that will be used by testers to help track the
cause or source of bugs

•Test automation - justification and overview

•Test tools to be used, including versions, patches, etc.

•Test script/test code maintenance processes and version control

•Problem tracking and resolution - tools and processes

•Project test metrics to be used

•Reporting requirements and testing deliverables

•Software entrance and exit criteria

•Initial sanity testing period and criteria

•Test suspension and restart criteria

•Personnel allocation

•Personnel pre-training needs

•Test site/location

•Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons,
and coordination issues

•Relevant proprietary, classified, security, and licensing issues.

•Open issues

•Appendix - glossary, acronyms, etc.


The team lead or a Sr. QA Analyst is responsible for writing this document.

Why is test plan a controlled document?
Because it controls the entire testing process. Testers have to follow this test plan during the entire testing process.

What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project. Many books are available that describe various approaches to this task.

Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house personnel or out, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible.

Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.

5. A GUI contains 2 fields. Field 1 accepts the value of x and Field 2 displays the result of the formula a+b/c-d, where a=0.4*x, b=1.5*a, c=x, d=2.5*b. How many system test cases would you write?
The GUI contains 2 fields:

Field 1 accepts the value of x, and

Field 2 displays the result,

so there is only one input field and only one test case to write.
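The formula in the question can be checked with a quick sketch (the function name below is illustrative). Note that the formula divides by c = x, so x = 0 is an error case that arguably deserves a test of its own:

```python
# Sketch of the formula from the question: a = 0.4*x, b = 1.5*a,
# c = x, d = 2.5*b; Field 2 shows a + b/c - d.
def field2(x):
    a = 0.4 * x
    b = 1.5 * a
    c = x
    d = 2.5 * b
    return a + b / c - d  # raises ZeroDivisionError when x == 0

# For any x != 0 the result simplifies to 0.4*x + 0.6 - 1.5*x,
# i.e. 0.6 - 1.1*x; for x = 2 that is -1.6.
print(field2(2))
```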

4. Let's say we have a GUI map and scripts, and we get some 5 new pages included in an application. How do we handle that?
By integration testing.

3. Given a Yahoo application, how many test cases can you write?
First we need the requirements of the Yahoo application.
Test cases are written against given requirements, so for any working web application or new application, requirements are needed to prepare test cases. The number of test cases depends on the requirements of the application.

Note to learners: A test engineer must have knowledge of the SDLC. I suggest learners take any one existing application and start practicing by writing requirements.
2. Complete Testing with Time Constraints: Question: How do you complete the testing when you have a time constraint?
If I am doing regression testing and I do not have sufficient time, then I have to see which sort of regression testing to go for:
1) Unit regression testing
2) Regional regression testing
3) Full regression testing

Testing Scenarios : How do you know that all the scenarios for testing are covered?
By using the Requirement Traceability Matrix (RTM) we can ensure that we have covered all the functionalities in our test coverage.

RTM is a document that traces user requirements from analysis through implementation. The RTM can be used as a completeness check to verify that all the requirements are present and that there are no unnecessary/extra features, and as a maintenance guide for new personnel.

We can use the simple format in Excel sheet where we map the Functionality with the Test case ID.
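The Excel mapping described above can be sketched as a simple dictionary; all requirement and test case IDs below are hypothetical examples:

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM): a mapping
# from requirement IDs to the test case IDs that cover them.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet
}

def uncovered(rtm):
    """Completeness check: requirements with no mapped test case."""
    return [req for req, cases in rtm.items() if not cases]

print(uncovered(rtm))  # ['REQ-003']
```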

Bug Report

What is the role of a bug tracking system?
A bug tracking system captures, manages and communicates changes, issues and tasks, providing basic process control to ensure coordination and communication within and across development and content teams at every step.

Why do you write an MR?
An MR is written for reporting problems/errors or suggestions in the software.

What is an MR?
MR stands for Modification Request, also known as a Defect Report: a request to modify the program so that the program does what it is supposed to do.
Low priority and high severity:
Suppose you have a bug where the application crashes for a wrong use case, and only 1 in 1000 customers will perform those steps. Here the severity is very high, as the application crashes, but the priority to fix this bug is very low, as it will affect only one customer, and that too for a wrong use case.

High priority and low severity:
Suppose you have a bug where there is a spelling mistake in the name of your project/product. Here the severity is very low, as it does not affect anything; it's just a spelling mistake. But the priority to fix this bug is very high, because if anyone catches it, the image of the product suffers and the customer gets a bad impression. So this is of high priority.

Severity tells us how bad the defect is. Priority tells us how soon it is desired to fix the problem.
In some companies, the defect reporter sets the severity and the triage team or product management sets the priority. In a small company or project (or product), particularly where there aren't many defects to track, you may find you don't really need both, since a high severity defect is also a high priority defect. But in a large company, and particularly where there are many defects, using both is a form of risk management.

Major would be 1 and Trivial would be 3. You can add or multiply the two values together (there is only a small difference in the outcome) and then use the event's risk value to determine how you should address the problem. The lower values must be addressed first, and the higher values can wait. This approach is based on a military standard, MIL-STD-882.

They use a four-point severity rating (rather than three): Catastrophic; Critical; Marginal; Negligible. They then use a five-point (rather than three) probability rating: Frequent; Probable; Occasional; Remote; Improbable. Then rather than using a mathematical calculation to determine a risk level, they use a predefined chart.
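A chart-based lookup of this style can be sketched as follows; the cell values here are an illustrative assignment, not the actual MIL-STD-882 chart:

```python
# Rows = severity (Catastrophic..Negligible), columns = probability
# (Frequent..Improbable). Example risk levels only.
CHART = {
    "Catastrophic": ["High", "High", "High", "Serious", "Medium"],
    "Critical":     ["High", "High", "Serious", "Medium", "Low"],
    "Marginal":     ["Serious", "Medium", "Medium", "Low", "Low"],
    "Negligible":   ["Medium", "Low", "Low", "Low", "Low"],
}
PROBABILITY = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]

def risk(severity, probability):
    """Look up the risk level from the predefined chart."""
    return CHART[severity][PROBABILITY.index(probability)]

print(risk("Critical", "Occasional"))  # Serious
```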

Blocker: This bug prevents developers from testing or developing the software.
Critical: The software crashes, hangs, or causes you to lose data.
Major: A major feature is broken.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy work around.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Enhancement: Request for new feature or enhancement.

5 Level Error Classification Method

1. Catastrophic:
Defects that could (or did) cause disastrous consequences for the system in question.
E.g.) critical loss of data, critical loss of system availability, critical loss of
security, critical loss of safety, etc.

2. Severe:
Defects that could (or did) cause very serious consequences for the system in question.
E.g.) A function is severely broken, cannot be used and there is no workaround.

3. Major:
Defects that could (or did) cause significant consequences for the system in question - A
defect that needs to be fixed but there is a workaround.
E.g. 1.) losing data from a serial device during heavy loads.
E.g. 2.) Function badly broken but workaround exists

4. Minor:
Defects that could (or did) cause small or negligible consequences for the system in
question. Easy to recover or workaround.
E.g.1) Error messages misleading.
E.g.2) Displaying output in a font or format other than what the customer desired.

5. No Effect:
Trivial defects that can cause no negative consequences for the system in question. Such
defects normally produce no erroneous outputs.
E.g.1) simple typos in documentation.
E.g.2) bad layout or mis-spelling on screen.

What criteria you will follow to assign severity and due date to the MR?

Defects (MR) are assigned severity as follows:

Critical: show stoppers (the system is unusable)
High: The system is very hard to use and some cases are prone to convert to critical issues if not taken care of.
Medium: The system functionality has a major bug that is not too critical but needs to be fixed in order for the AUT to go to the production environment.
Low: cosmetic (GUI related)

How do you help the developer to track the faults in the software?
By providing him with details of the defects, which include the environment, test data, steps followed, etc., and helping him to reproduce the defect in his environment.

You find a bug and the developer says "It's not possible." What do you do?
I'll discuss with him under what conditions (working environment) the bug was produced. I'll provide him with more details and a snapshot of the bug.

What is the difference between exception and validation testing?
Validation testing aims to demonstrate that the software functions in a manner that can be reasonably expected by the customer. Testing the software in conformance to the Software Requirements Specifications.

Exception testing deals with handling the exceptions (unexpected events) while the AUT is run. Basically this testing involves how to change the control flow of the AUT when an exception arises.

What do you like about Windows?
Interface and User friendliness
Windows is one of the best software products I have ever used. It is user friendly and very easy to learn.

What is a successful product?
A bug free product, meeting the expectations of the user would make the product successful.

How do you feel about cyclomatic complexity?
Cyclomatic complexity is a measure of the number of linearly independent paths through a program module. Cyclomatic complexity is a measure for the complexity of code related to the number of ways there are to traverse a piece of code. This determines the minimum number of inputs you need to test all ways to execute the program.
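As a rough illustration, for a single module the complexity equals the number of decision points plus one. This can be sketched with Python's `ast` module (a real analyzer is more thorough):

```python
import ast

def cyclomatic_complexity(source):
    """Rough sketch: for a single module, complexity = decisions + 1.
    Counts Python branching constructs found in the syntax tree."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp))
        for node in ast.walk(tree)
    )
    return decisions + 1

code = """
def grade(score):
    if score >= 90:
        return 'A'
    elif score >= 80:
        return 'B'
    return 'C'
"""
print(cyclomatic_complexity(code))  # 3: two branches plus one
```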

Describe your experience with code analyzers?
Code analyzers generally check for bad syntax, logic, and other language-specific programming errors at the source level. This level of testing is often referred to as unit testing and server component testing. I used code analyzers as part of white box testing.
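A minimal sketch of a source-level check of the kind such analyzers perform, using Python's built-in `compile` (the function name is illustrative):

```python
# Catch bad syntax at the source level, before the code is ever run.
def check_syntax(source, filename="<input>"):
    try:
        compile(source, filename, "exec")
        return None  # no syntax errors found
    except SyntaxError as err:
        return f"{filename}:{err.lineno}: {err.msg}"

# Reports the syntax error with its line number.
print(check_syntax("def broken(:\n    pass"))
```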

What is ODBC?
Open Database Connectivity (ODBC) is an open standard application-programming interface (API) for accessing a database. ODBC is based on Structured Query Language (SQL) Call-Level Interface. It allows programs to use SQL requests that will access databases without having to know the proprietary interfaces to the databases. ODBC handles the SQL request and converts it into a request the individual database system understands.
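A typical ODBC connection string bundles the driver and database details; the driver name, server, and credentials below are hypothetical examples:

```python
# Sketch of how an application identifies a database through ODBC.
def odbc_connection_string(driver, server, database, uid, pwd):
    return (
        f"DRIVER={{{driver}}};SERVER={server};"
        f"DATABASE={database};UID={uid};PWD={pwd}"
    )

conn_str = odbc_connection_string(
    "ODBC Driver 18 for SQL Server", "db.example.com", "orders", "app", "secret"
)
print(conn_str)
# With a real driver installed, a binding such as pyodbc would take it:
#   import pyodbc
#   conn = pyodbc.connect(conn_str)
#   conn.cursor().execute("SELECT 1")
```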

Which MR tool you used to write MR?
Test Director
Rational ClearQuest.
PVCS Tracker

What information does MR contain?
OR
Describe me to the basic elements you put in a defect report?
OR
What is the procedure for bug reporting?
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available.

The following are items to consider in the tracking process:

•Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

•Bug identifier (number, ID, etc.)

•Current bug status (e.g., 'Released for Retest', 'New', etc.)

•The application name or identifier and version

•The function, module, feature, object, screen, etc. where the bug occurred

•Environment specifics, system, platform, relevant hardware specifics

•Test case name/number/identifier

•One-line bug description

•Full bug description

•Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't
have easy access to the test case/test script/test tool

•Names and/or descriptions of file/data/messages/etc. used in test

•File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding
the cause of the problem

•Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

•Was the bug reproducible?

•Tester name

•Test date

•Bug reporting date

•Name of developer/group/organization the problem is assigned to

•Description of problem cause

•Description of fix

•Code section/file/module/class/method that was fixed

•Date of fix

•Application version that contains the fix

•Tester responsible for retest

•Retest date

•Retest results

•Regression testing requirements

•Tester responsible for regression tests

•Regression testing results

How is defect tracking used?
It is used to assign bugs to the development team, and it prompts the developer to check the error.

How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends.

What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

•Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

•Bug identifier (number, ID, etc.)

•Current bug status (e.g., 'Released for Retest', 'New', etc.)

•The application name or identifier and version

•The function, module, feature, object, screen, etc. where the bug occurred

•Environment specifics, system, platform, relevant hardware specifics

•Test case name/number/identifier

•One-line bug description

•Full bug description

•Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

•Names and/or descriptions of file/data/messages/etc. used in test

•File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

•Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

•Was the bug reproducible?

•Tester name

•Test date

•Bug reporting date

•Name of developer/group/organization the problem is assigned to

•Description of problem cause

•Description of fix

•Code section/file/module/class/method that was fixed

•Date of fix

•Application version that contains the fix

•Tester responsible for retest

•Retest date

•Retest results

•Tester responsible for regression tests

•Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

Different types of defects
- Open Defects - The list of defects remaining in the defect tracking system with a status of Open. Technical Support has access to the system, so a report noting the defect ID, the problem area, and title should be sufficient.

- Deferred Defects - The list of defects remaining in the defect tracking system with a status of deferred. Deferred means the technical product manager has decided not to address the issue with the current release.

- Pending Defects - The list of defects remaining in the defect tracking system with a status of pending. Pending refers to any defect waiting on a decision from a technical product manager before a developer addresses the problem.

- Fixed Defects - The list of defects waiting for verification by QA.

- Closed Defects - The list of defects verified as fixed by QA during the project cycle.
The Release Package is compiled in anticipation of the Readiness Review meeting. It is reviewed by the QA Process Manager during the QA Process Review Meeting and is provided to the Release Board and Technical Support.
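A status summary of the kind these reports describe can be sketched by counting defects per status; the defect IDs and statuses below are hypothetical examples:

```python
from collections import Counter

# Hypothetical defect list: (defect ID, current status).
defects = [
    ("DEF-1", "Open"), ("DEF-2", "Deferred"), ("DEF-3", "Pending"),
    ("DEF-4", "Fixed"), ("DEF-5", "Closed"), ("DEF-6", "Open"),
]

# Count how many defects remain in each status for the report.
summary = Counter(status for _, status in defects)
print(summary["Open"], summary["Fixed"])  # 2 1
```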

Bug Reports - Other Considerations
* If your bug is only randomly reproducible, mention that in your bug report, but don't forget to file it. You can always add the exact steps to reproduce later, whenever you (or anyone else) discover them. This will also come to your rescue when someone else reports the issue, especially if it's a serious one.
* Mention the error messages in the bug report, especially if they are numbered. For example, error messages from the database.
* Mention the version numbers and build numbers in the bug reports.
* Mention the platforms on which the issue is reproducible, and precisely mention the platforms on which it is not reproducible. Also understand that there is a difference between the issue being not reproducible on a particular platform and it not being tested on that platform; mixing the two up might lead to confusion.
* If you come across several problems having the same cause, write a single bug report; there will be only one fix for the problem. Similarly, if you come across similar problems at different locations requiring the same kind of fix but in different places, write separate bug reports for each of the problems. One bug report for only one fix.
* If the test environment on which the bug is reproducible is accessible to the developers, mention the details of accessing this setup. This will help them save time in setting up the environment to reproduce your bug.

* Under no circumstances should you hold on to any information regarding the bug. Unnecessary iterations of the bug report between the developer and the tester before it is fixed are just a waste of time caused by ineffective bug reporting.

How to Write Effective Bug Reports

The Purpose Of A Bug Report
When we uncover a defect, we need to inform the developers about it. A bug report is the medium of such communication. The primary aim of a bug report is to let the developers see the failure with their own eyes. If you can't be with them to make it fail in front of them, give them detailed instructions so that they can make it fail for themselves. The bug report is a document that explains the gap between the expected result and the actual result and details how to reproduce the scenario.
After Finding The Defect

* Draft the bug report as soon as you are sure that you have found a bug, not after the end of the test or the end of the day. Otherwise you might miss out on some point. Worse, you might miss the bug itself.
* Invest some time to diagnose the defect you are reporting. Think of the possible causes; you might end up uncovering some more defects. Mention your discoveries in your bug report. The programmers will only be happy to see that you have made their job easier.
* Take some time off before reading your bug report. You might feel like re-writing it.

Defect Summary
The summary of the bug report is the reader's first interaction with your bug report. The fate of your bug heavily depends on the attention grabbed by the summary of your bug report. The rule is that every bug should have a one-line summary. It might sound like writing a good attention-grabbing advertisement campaign, but there are no exceptions. A good summary will not be more than 50-60 characters. Also, a good summary should not carry any subjective representations of the defect.
The Language

* Do not exaggerate the defect through the bug report. Similarly, do not undertone it.
* However nasty the bug might be, do not forget that it's the bug that's nasty, not the programmer. Never belittle the efforts of the programmer. Use euphemisms: 'Dirty UI' can be made milder as 'Improper UI'. This ensures that the programmer's efforts are respected.
* Keep It Simple & Straight. You are not writing an essay or an article, so use simple language.
* Keep your target audience in mind while writing the bug report. They might be the developers, fellow testers, managers, or in some cases, even the customers. The bug reports should be understandable by all of them.

Steps To Reproduce
* The flow of the Steps To Reproduce should be logical.
* Clearly list down the pre-requisites.
* Write generic steps. For example, if a step requires the user to create a file and name it, do not ask the user to name it something like "Mihir's file"; it is better named something generic like "Test File".
* The Steps To Reproduce should be detailed. For example, if you want the user to save a document from Microsoft Word, you can ask the user to go to the File menu and click on the Save menu entry, or you can just say "save the document". But remember, not everyone will know how to save a document from Microsoft Word, so it is better to stick to the first method.
* Test your Steps To Reproduce on a fresh system. You might find some steps that are missing, or are extraneous.

Test Data
Strive to write generic bug reports. The developers might not have access to your test data. If the bug is specific to a certain test data, attach it with your bug report.

Screenshots
Screenshots are quite an essential part of the bug report. A picture is worth a thousand words. But do not make it a habit to attach screenshots with every bug report unnecessarily. Ideally, your bug reports should be effective enough to enable the developers to reproduce the problem; screenshots should be a medium just for verification.

* If you attach screen shots to your bug reports, ensure that they are not too heavy in terms of size. Use a format like jpg or gif, but definitely not bmp.
* Use annotations on screenshots to pinpoint the problems. This will help the developers locate the problem at a single glance.

Severity / Priority
* The impact of the defect should be thoroughly analyzed before setting the severity of the bug report. If you think that your bug should be fixed with a high priority, justify it in the bug report. This justification should go in the Description section of the bug report.
* If the bug is the result of a regression from previous builds/versions, raise the alarm. The severity of such a bug may be low, but the priority should typically be high.

Logs
Make it a point to attach logs or excerpts from the logs. This will help the developers to analyze and debug the system easily. Most of the time, if logs are not attached and the issue is not reproducible on the developer's end, they will revert to you asking for logs.

If the logs are not too large, say about 20-25 lines, you can paste them into the bug report. But if they are larger, add them to your bug report as an attachment; otherwise your bug report will look like a log.

2. Top Ten Tips for Bug Tracking
1. A good tester will always try to reduce the repro steps to the minimal steps to reproduce; this is extremely helpful for the programmer who has to find the bug.

2. Remember that the only person who can close a bug is the person who opened it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug as fixed, won't fix, postponed, not repro, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, just bounce the emails back to them with a brief message: "please put this in the bug database. I can't keep track of emails."

7. If you're a tester, and you're having trouble getting programmers to use the bug database, just don't tell them about bugs - put them in the database and let the database email them.

8. If you're a programmer, and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they'll get the hint.

9. If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is also a great "unimplemented feature" database, too.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.

What are the different types of Bugs we normally see in any of the Project? Include the severity as well.
The Life Cycle of a bug in general context is:

Bugs are usually logged by the development team (while unit testing) and also by testers (while doing system or other types of testing).

So let me explain from a tester's perspective:

A tester finds a new defect/bug and logs it using a defect tracking tool.

1. Its status is 'NEW' and it is assigned to the respective dev team (team lead or manager).
2. The team lead assigns it to a team member, so the status is 'ASSIGNED TO'.
3. The developer works on the bug, fixes it and re-assigns it to the tester for testing. Now the status is 'RE-ASSIGNED'.
4. The tester checks whether the defect is fixed; if it is fixed, he changes the status to 'VERIFIED'.
5. If the tester has the authority (depends on the company), he can, after verifying, change the status to 'FIXED'. If not, the test lead can verify it and change the status to 'FIXED'.
6. If the defect is not fixed, he re-assigns the defect back to the dev team for re-fixing.
This is the life cycle of a bug.
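The life cycle above can be sketched as a table of allowed status transitions, using the status names from the answer:

```python
# Allowed bug status transitions from the life cycle described above.
TRANSITIONS = {
    "NEW": {"ASSIGNED TO"},
    "ASSIGNED TO": {"RE-ASSIGNED"},              # developer fixes the bug
    "RE-ASSIGNED": {"VERIFIED", "ASSIGNED TO"},  # verified, or bounced back
    "VERIFIED": {"FIXED"},
    "FIXED": set(),
}

def advance(current, new):
    """Move a bug to a new status, rejecting illegal transitions."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

status = "NEW"
for step in ("ASSIGNED TO", "RE-ASSIGNED", "VERIFIED", "FIXED"):
    status = advance(status, step)
print(status)  # FIXED
```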

1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Improper Service Levels (Control flow defects) - High
6. Interpreting Data Defects - High
7. Race Conditions (Compatibility and Intersystem defects)- High
8. Load Conditions (Memory Leakages under load) - High
9. Hardware Failures - High
