
Need help with Basic QA process structure


CraigBest (Programmer), Aug 1, 2001

Not sure if this is the right place to ask, but here goes.

I have been asked by my boss to put together a simple QA process structure plan. I'm a VB developer and we have a small shop (2 programmers & 1 DBA), and I've never done anything like this before. I have a vague idea of what QA is about, but I have no idea how to set up a process describing it. I know it is a process where testers put software through its paces and report back to developers where they find problems or questions, but that's all I really know.

Can someone either explain the process to me in some detail or direct me to a place on the net where I can find an example or two of how this works?

Thanks in advance

Craig in NJ
 
Craig,

This is such a big topic that it has probably put a few people off answering your question. Do not think of Quality Assurance as simply testing the code after it has been written and packaged up for distribution. That is very important, certainly, to ensure that you have independently tested the code before it leaves the development shop. It is also the slowest and most expensive way to fix mistakes in the product.

You want to design a simple set of quality checkpoints throughout the SDLC. At each stage of the development process, there should be a review and a signoff.

For example, after the requirements analysis is complete, the designer, the QA person and the business sponsor should sit together and walk through the requirements to make sure they are complete and accurate. Get a sign-off at that point and you never have to go back there again unless the requirements change (which is not a QA issue but a project management issue).

Similarly, when your screen, data and infrastructure designs are complete, go through the same walkthrough process and get sign-off. As code components are completed by developers, have the code reviewed by the rest of the team against the requirements and against your own development standards, then sign them off and lock them away.

Finally, write or get your business representatives to supply a number of critical test cases with known inputs and expected outcomes and run those tests against the completed product as it is delivered by the developers. You will probably need test databases to ensure that you can repeat the test conditions between product releases. If people outside your development group are installing, supporting or maintaining the product, make sure one of your tests includes installation and upgrade procedures.
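To make that concrete: a repeatable test case can be as simple as a table of known inputs and expected outputs checked by a small script. Your shop is VB, but the idea is language-neutral; here is a rough sketch in Python, where the calc_discount routine and its business rule are invented purely for illustration:

    # Hypothetical example: known inputs and expected outputs for an
    # imaginary order-discount routine. Swap in your own function and rules.
    def calc_discount(order_total, customer_type):
        rate = 0.10 if customer_type == "wholesale" else 0.0   # stand-in logic
        return round(order_total * rate, 2)

    test_cases = [
        # (order_total, customer_type, expected_discount)
        (100.00, "retail",    0.00),
        (100.00, "wholesale", 10.00),
        (0.00,   "wholesale", 0.00),
    ]

    for total, ctype, expected in test_cases:
        actual = calc_discount(total, ctype)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: calc_discount({total}, {ctype!r}) = {actual}, expected {expected}")

Rerunning the same table of cases against every release is what makes the test repeatable.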

You can make this whole process as complex or as simple as you like, but the aim is to have a complete quality process that matches the software development lifecycle, catches flaws at the earliest possible moment in the process and is consistently repeatable.

I hope this helps.
Clive
 
Clive, thanks! That's a big help and gives me a place to start. I really appreciate it.

Craig in NJ
 
You're welcome.

Just remember that quality has to be everybody's concern and not just handed off to the QA person. Also, note that it is impossible to test every single case in a reasonable time frame so testing is only really sampling a subset of possible business activities and paths through the code.

Cheers,
Clive
 
There's an old axiom: quality is built in, not added on.

If you're going to do QA right, you do it right and right from the start. QA looks at the design and begins developing test scenarios up front.

This is a good news, bad news situation. The good news is that there is a phenomenal amount of literature on the subject -- and that's also the bad news. I'd head down to the library and grab a few books. Even out-of-date books will discuss principles that are timeless.

If I could sum up the most important concept in a single word, I would choose "regression".
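In practice, regression testing just means keeping every test you have ever written and rerunning it after each change, so a fix in one place cannot silently break something that used to work. A minimal sketch of the idea in Python (the baseline.csv file, its format, and the routine under test are all assumptions for illustration):

    # Hypothetical regression check: rerun saved inputs and compare the
    # current output against a baseline captured from a known-good release.
    import csv

    def function_under_test(value):
        return value.upper()   # stand-in for the real routine

    failures = 0
    with open("baseline.csv", newline="") as f:
        for row in csv.reader(f):
            if len(row) < 2:
                continue
            inp, expected = row[0], row[1]
            actual = function_under_test(inp)
            if actual != expected:
                failures += 1
                print(f"REGRESSION: {inp!r} -> {actual!r}, baseline says {expected!r}")

    print(f"{failures} regression(s) found")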
 
From "Requirements", by Wiegers, there are four levels of QA (testing).
- Acceptance Testing (or UAT or Validation) insures that the Business Requirements have been met. So, signoff on the BR document as cjowsey mentioned is very important.
- System Testing (or Verification) insures that the Functional Requirements have been met.
- Integration Testing provides assurances that the Architecture is correct.
- Unit Testing checks the Detail Design.

Each of these QA/Testing phases checks a set of requirements, which must be documented in the Analysis and Design phase. In fact, test plans are developed from requirements and scenarios discussed in those documents. Without documented requirements, you have no ability to determine whether you have tested the right things. In fact, without requirements, you have no scale upon which to measure project completion.
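One lightweight way to keep that link between requirements and tests visible is to tag every test case with the requirement it verifies, then check which requirements have no test at all. A small Python sketch (the requirement IDs and data structures are invented for illustration):

    # Hypothetical traceability check: every documented requirement should
    # have at least one test case that claims to verify it.
    requirements = {"BR-01", "BR-02", "FR-10", "FR-11"}

    test_cases = [
        {"id": "TC-001", "verifies": "BR-01"},
        {"id": "TC-002", "verifies": "FR-10"},
        {"id": "TC-003", "verifies": "FR-10"},
    ]

    covered = {tc["verifies"] for tc in test_cases}
    untested = sorted(requirements - covered)
    print("Requirements with no test case:", untested if untested else "none")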

Sometimes the grass is greener on the other side because there is more manure there - original.
 
johnherman,

You are absolutely right. However, judging by the size of Craig's development team, I doubt if they have much of a formal structure with business requirements, architecture and detailed component level design documents. In a three person development shop they probably won't be allowed the time to go to that level of project documentation and control.

So, my advice is that any process is better than none. I don't think it is easy to bring in a complete development methodology if you currently have nothing or very little. Start with the business requirements first; then both business and technical people know what the end result has to do.
 
Craig,

Is this QA going to be for a development project or a maintenance project? The QA process flow, inputs and deliverables may vary depending on the nature of the project.
Typically you can break the QA process into the following sub-processes:
1. Configuration management
2. Defect & change tracking (a minimal defect-log sketch follows this list)
3. Test case management, execution & automation (the testing process)
4. Setting up & maintaining environments (UAT, EST, etc.), build installation, and tool management & administration (this covers all the tools: defect tracking, test automation, version control, test case management, performance testing)
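As a rough illustration of item 2: even without a commercial tool, a structured defect log goes a long way. This Python sketch keeps one as a CSV file; the field names and values are only suggestions:

    # Hypothetical minimal defect log, appended to a CSV file. Each entry
    # ties a defect back to a build and the test case that found it.
    import csv
    from datetime import date

    FIELDS = ["id", "date", "build", "test_case", "severity", "status", "summary"]

    def log_defect(path, **defect):
        defect.setdefault("date", date.today().isoformat())
        defect.setdefault("status", "open")
        with open(path, "a", newline="") as f:
            csv.DictWriter(f, fieldnames=FIELDS).writerow(defect)

    log_defect("defects.csv", id="D-0042", build="2001-08-build1",
               test_case="TC-002", severity="major",
               summary="Discount not applied for wholesale orders")

A tracked status field (open, fixed, retested, closed) is what turns a list of complaints into a change-tracking process.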

For a maintenance project the quality benchmarks, expectations, type of testing, etc. would vary. Testing would be more of a regression type. I'm assuming your requirements are frozen, especially since the product has already been developed, so you can design your test cases based on those requirements. Test case maintenance would be lighter, and these tests can be automated. As an input you may need the release notes for the build. This may vary depending on the nature of the build: a full build (mostly monthly, with major changes), an emergency patch, or a partial build (mostly weekly, for minor changes).

For a development project you may need to develop test cases based on your requirements (use cases, etc.) and refine these as the requirements change, so test case maintenance is a more important activity here.

Also, depending on your service-level agreement (if applicable), you can start by setting or using available performance benchmarks (response time, throughput, concurrent transactions, etc.) and incorporate these into your performance tests.
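A first performance test can be just as plain: time the critical transactions and compare against the agreed benchmark. A rough Python sketch, where the 2-second threshold and the transaction itself are placeholders:

    # Hypothetical response-time check against an assumed SLA threshold.
    import time

    SLA_SECONDS = 2.0          # placeholder figure from the service-level agreement

    def run_transaction():
        time.sleep(0.1)        # stand-in for the real business transaction

    timings = []
    for _ in range(10):
        start = time.perf_counter()
        run_transaction()
        timings.append(time.perf_counter() - start)

    worst, avg = max(timings), sum(timings) / len(timings)
    verdict = "PASS" if worst <= SLA_SECONDS else "FAIL"
    print(f"avg {avg:.3f}s, worst {worst:.3f}s, SLA {SLA_SECONDS}s -> {verdict}")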

Regards,
Milind Agate
 
Craig
I agree with all of the above, but it's also worthwhile building in time for general user "see if I can break it" testing.

Offer bribes for finding bugs; real users will often come up with things you never imagined.

Rosie
 