- Up to 50% faster test data preparation than conventional methods.
- Covers 100% of tested business logic.
- Increased transparency of the testing process.
- Complex tests (logical and performance tests).
- Improved security of production data.
Test tool concept
QAceGen's functionality is driven by the “focus to the point” concept:
- Easy access to all necessary information.
- Elimination of non-value added tasks.
- Focus on problem solving.
“Focus to the point” means that working with QAceGen is very efficient. All necessary information about data types, primary and foreign keys, constraints and relations affecting the test project is always easily available within the tool, so the tester does not have to search for it in documentation, databases or other information sources. QAceGen also eliminates the vast majority of “non-value added” activities: keys, constraints, filters and records in all referential tables are generated automatically. Data fields that are not directly related to the tested logic but are required for data integrity are generated automatically as well. This significantly shortens test data preparation.

This innovative concept offers substantial user benefits. QAceGen saves up to 50% of the time needed for test data preparation compared with conventional methods. When preparing test data manually, a tester often has to deal with very complex data relations, which is time-consuming and frequently leads to grossly incorrect results. QAceGen, by contrast, allows the tester to focus entirely on the tested business logic; all “non-value added” activities (i.e. those not directly related to the tested business logic) are handled by the system.
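To illustrate the idea (this is not QAceGen's actual mechanism, whose internals are not described here), the following Python sketch shows what automatic generation of referential records means in practice: the tester supplies only the business-relevant fields, and the parent rows required by foreign-key constraints are created automatically. The schema map and all names are hypothetical.

```python
# Toy model of automatic referential-record generation (NOT QAceGen code).
# Hypothetical schema description: table -> list of (fk_column, parent_table).
FOREIGN_KEYS = {
    "orders": [("customer_id", "customers")],
    "customers": [("region_id", "regions")],
    "regions": [],
}

def generate_with_parents(table, record, counter=None, generated=None):
    """Store `record` in `table`, auto-creating any missing parent rows."""
    if counter is None:
        counter = {"next_id": 1}
    if generated is None:
        generated = {t: [] for t in FOREIGN_KEYS}
    for fk_column, parent_table in FOREIGN_KEYS[table]:
        if fk_column not in record:          # tester did not specify it
            parent_id = counter["next_id"]
            counter["next_id"] += 1
            record[fk_column] = parent_id
            # recursively satisfy the parent table's own constraints
            generate_with_parents(parent_table, {"id": parent_id},
                                  counter, generated)
    generated[table].append(record)
    return generated

# The tester specifies only the business-relevant field...
data = generate_with_parents("orders", {"amount": 250})
# ...and the rows in `customers` and `regions` appear automatically.
```

The point of the sketch is the division of labour: the tester writes one field, and the recursion fills in everything the data-integrity rules demand.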
Note: a “business logic” specification defines the rules which a software application follows; it is used for both development and application testing. QAceGen has been designed as a “business logic driven data generator”: the system generates test data based on the rules defined in the business specification. In QAceGen, the business logic is divided into logically indivisible elementary units called test scenarios. Each test scenario is then expressed in QAceGen's DGL (Data Generation Language). These definitions are used by QAceGen to generate a set of records for each scenario, and the resulting test data covers every logical branch of the given business logic.

Another useful feature of QAceGen is self-documentation. Each test project has a defined logical structure, driven by the split of the tested logic into individual test scenarios, and the project documentation is integrated directly into each test scenario. These comments can be easily exported into a separate document (preserving the original logical structure) and subsequently distributed to all involved parties. This prevents the project documentation from lagging behind the actual status of the project. The “self-documentation” feature minimizes administrative effort and costs and keeps the documentation always up to date.

QAceGen generates test data in two modes:
- Logical test data.
- Performance test data.
Logical test data is primarily used to test the business logic. The tool generates only a small volume of data (a few thousand records) to cover every logical branch of the tested business logic. Once the logical test is successfully completed, the system prepares data for a performance test based on all test scenarios. In this mode, the volume of records is significantly higher (millions of records) to simulate real-life situations and/or “peak events” and to verify that the tested application meets speed requirements.
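The two modes can be pictured with a short Python sketch (a hedged illustration only: the scenarios and field names are invented, and real DGL scenario definitions would look different). The same scenario list drives both modes: one record per logical branch for the logical test, and every branch replicated up to a target volume for the performance test.

```python
# Hypothetical rule: a discount applies to orders over 1000 for "gold"
# customers. Three scenarios = three logical branches of that rule.
SCENARIOS = [
    {"name": "below_threshold", "amount": 999,  "level": "gold"},
    {"name": "above_threshold", "amount": 1001, "level": "gold"},
    {"name": "non_gold",        "amount": 1001, "level": "basic"},
]

def logical_test_data():
    """Logical test: a handful of records, one per scenario/branch."""
    return [dict(s, record_id=i) for i, s in enumerate(SCENARIOS)]

def performance_test_data(total_records):
    """Performance test: branches replicated evenly to the target volume."""
    return [dict(SCENARIOS[i % len(SCENARIOS)], record_id=i)
            for i in range(total_records)]

logical = logical_test_data()          # covers every branch with 3 records
bulk = performance_test_data(100_000)  # volume data for a load/peak test
```

Note that the bulk set still covers every logical branch, which matches the “complex test” idea below: volume is added on top of full logical coverage, not instead of it.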
Note: QAceGen always generates complex data covering the entire business logic. No matter how insignificant the implemented change request is, the tool always generates data for a complex test. Only in this way can possible collateral effects of the change request be identified, which may not show up during an incremental test.
QAceGen offers easy data transportation functionality to deliver test data into all source files and/or tables. Metadata of each table/file are stored in a “source profile”, which defines location, data format and access rights; the “data transportation” icon then transports the data into the required location automatically. When test scenarios include “verification statements”, QAceGen validates the results of each test scenario and creates a detailed protocol. The results are first displayed graphically in QAceGen and can subsequently be exported into a text report, preserving the logical structure of the project. The level of detail can vary from a simple results overview to detailed information about each transaction used during the test. QAceGen's security features are another advantage of the system: the data it generates contain only artificial strings and numbers unrelated to real production data, so no sensitive production data is ever used during a test.
The QAceGen system has been primarily designed for the following types of activities:
- Data preparation (e.g. development, training).
- Application testing.
- ITO applications.
- Complex systems.
- Data quality management (DQM).
Note: ITO = Input / Transformation / Output (i.e. an application processing data).
ITO application – takes data from input tables/files, transforms the data according to the business logic and stores the results into output tables/files. An ETL process is a typical example of an ITO application, but similar processes/applications can be found among data warehouse applications or transaction systems. Complex systems – specialized solutions (e.g. ERP, B2B) which users may see as a “black box”: how data is processed and stored inside the database is not exactly known, but these systems communicate with users or other applications via a pre-defined interface according to given business processes.

Data preparation – follows the business logic defined in the specification. As mentioned above, the business logic is split into scenarios and QAceGen generates a set of records covering each scenario. Such data can be used for application development or for loading test/training environments where the use of production data is either not possible or not recommended for security reasons. Testing – in addition to “data preparation”, the designed scenarios include “verification statements” which validate the results of the tested application.

QAceGen also supports data generation and testing in a complex-systems environment. Many organizations run specialized systems from various vendors and on various platforms, and it is often necessary to test how these systems exchange data and cooperate. Since these systems communicate via pre-defined interfaces, the input data structures as well as the loaders feeding those systems have a defined structure. Test scenarios are defined by a given business process, which specifies both the list of required input data (e.g. client, order, transaction) and the sequence of input data loads. QAceGen generates data according to the input table/file structures and then runs the relevant API or script to transport the input data. In addition, data input via screen forms can be automated.
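A minimal sketch of such a sequenced load, assuming hypothetical loader functions standing in for the vendor APIs/scripts (none of these names come from QAceGen): the business process fixes both what must be loaded and in which order, and the driver simply walks that sequence.

```python
# Stand-in loaders for the systems' APIs/scripts (hypothetical names).
def load_client(record, log):       # e.g. feeds the CRM system
    log.append(("client", record["id"]))

def load_order(record, log):        # e.g. feeds the order system
    log.append(("order", record["id"]))

def load_transaction(record, log):  # e.g. feeds the billing system
    log.append(("transaction", record["id"]))

# The business process defines the required input data AND the load sequence.
PROCESS = [
    (load_client,      {"id": "C1", "name": "Test Client"}),
    (load_order,       {"id": "O1", "client": "C1"}),
    (load_transaction, {"id": "T1", "order": "O1", "amount": 120}),
]

def run_process(process):
    """Execute the loads in the mandated order, keeping an audit log."""
    log = []
    for loader, record in process:
        loader(record, log)   # in reality: call the system's API or script
    return log

audit = run_process(PROCESS)
# the log preserves the mandated sequence: client -> order -> transaction
```

The audit log is what a results-verification step would later compare against the expected outcome of the business process.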
If required, results verification can be addressed similarly: QAceGen can run an API/script with the relevant business process, record the result and verify it. The data quality management (DQM) feature controls data quality. It includes:
- Duplicity monitoring.
- Cardinality monitoring.
- SCD2 (Slowly Changing Dimensions type 2 – e.g. historization, incidences, …).
Using simple DQM commands, it is possible to define checks on tables or joins of tables. These commands generate a set of control SQL statements which, after execution, are automatically verified by QAceGen, and the outcome is recorded into a standard report. The main advantage of this solution is the fast checking of virtually hundreds of tables across different databases. DQM is also an “open solution”: new DQM commands can be designed if required.
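Since QAceGen's actual DQM command syntax is not shown here, the following Python sketch only illustrates the underlying idea: a short command expands into a control SQL statement (here, a duplicity check and a cardinality check), which can then be executed and verified automatically. All table and column names are hypothetical.

```python
# Hypothetical DQM-style command expansion (NOT QAceGen's real syntax).

def dqm_duplicity(table, key_columns):
    """Expand a duplicity-monitoring command into control SQL:
    any row returned means duplicate keys exist in `table`."""
    keys = ", ".join(key_columns)
    return (f"SELECT {keys}, COUNT(*) AS cnt FROM {table} "
            f"GROUP BY {keys} HAVING COUNT(*) > 1")

def dqm_cardinality(parent, child, fk):
    """Expand a cardinality command into control SQL:
    any row returned is a child record whose FK has no parent."""
    return (f"SELECT c.{fk} FROM {child} c "
            f"LEFT JOIN {parent} p ON c.{fk} = p.id WHERE p.id IS NULL")

checks = [
    dqm_duplicity("customers", ["customer_no"]),
    dqm_cardinality("customers", "orders", "customer_id"),
]
# each generated statement is executed against the database;
# an empty result set means the check passed.
```

Because the commands are just generators of SQL text, running hundreds of such checks across different databases reduces to executing the generated statements and counting the returned rows, which matches the “fast control of hundreds of tables” claim above.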
- Lower project costs than traditional solutions.
- Time efficient.
- Enhanced transparency of a test project.
- Improved data security.