muteria.drivers.testgeneration.meta_testcasetool module

This module is used through the MetaTestcaseTool class, which accesses the relevant testcase tools as specified

The tools are organized by programming language. For each language, there is a folder per tool, named after the tool in lowercase

Each testcase tool package has the following in its __init__.py file:

>>> import <Module>.<class extending BaseTestcaseTool> as TestcaseTool
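For illustration, the re-export convention can be simulated with stand-in classes. The names below (including KleeTestcaseTool) are hypothetical, not actual muteria drivers:

```python
# Stand-in for muteria's base driver class; the real one is the
# BaseTestcaseTool that every tool driver extends.
class BaseTestcaseTool:
    pass

# Hypothetical concrete driver, as a tool package would define it.
class KleeTestcaseTool(BaseTestcaseTool):
    pass

# The tool package's __init__.py exposes its driver under the fixed
# name `TestcaseTool`, so MetaTestcaseTool can import every tool
# package uniformly without knowing the concrete class name.
TestcaseTool = KleeTestcaseTool
```

This uniform alias is what lets MetaTestcaseTool instantiate any plugged-in tool the same way.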

class muteria.drivers.testgeneration.meta_testcasetool.MetaTestcaseTool(language, tests_working_dir, code_builds_factory, test_tool_config_list, head_explorer, hash_outlog=True)[source]

Bases: object

FAULT_MATRIX_KEY = 'fault_revealed'
PROGRAM_EXECOUTPUT_KEY = 'program'
TOOL_OBJ_KEY = 'tool_obj'
TOOL_TYPE_KEY = 'tool_type'
TOOL_WORKDIR_KEY = 'tool_working_dir'
check_get_flakiness(meta_testcases, repeat_count=2, get_flaky_tests_outputs=True)[source]

Check whether tests are flaky by running each test multiple times

Returns:

the list of flaky tests
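The underlying idea can be sketched without muteria: a test is flagged flaky when repeated runs disagree on its verdict. The run results below are made up for illustration:

```python
def find_flaky(runs):
    """Given a list of {test: verdict} dicts, one dict per repeated run,
    return the sorted list of tests whose verdicts disagree across runs."""
    tests = runs[0].keys()
    return sorted(t for t in tests
                  if len({run[t] for run in runs}) > 1)

# Two simulated runs (repeat_count=2): t2 flips its verdict, so it is flaky.
run1 = {"t1": False, "t2": False, "t3": True}
run2 = {"t1": False, "t2": True,  "t3": True}
print(find_flaky([run1, run2]))  # -> ['t2']
```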

check_tools_installed()[source]
clear_working_dir()[source]
execute_testcase(meta_testcase, exe_path_map, env_vars, timeout=None, use_recorded_timeout_times=None, recalculate_execution_times=False, with_output_summary=True, hash_outlog=None)[source]

Execute a test case with the given executable and say whether it failed

Parameters:
  • meta_testcase – string name of the test case to execute

  • exe_path_map – string representing the file system path to the executable to execute with the test

  • env_vars – dict of environment variables to set before executing the test ({<variable>: <value>})

  • hash_outlog – decide whether to hash the outlog or not

Returns:

a pair of:

  • boolean failed verdict of the test (True if failed, False otherwise)

  • test execution output log hash data object, or None
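A hedged usage sketch of consuming that return pair. The stub below stands in for a configured MetaTestcaseTool instance (the real call would be meta_tool.execute_testcase(...)), and all names and values are illustrative:

```python
# Stub standing in for MetaTestcaseTool.execute_testcase; it returns the
# documented pair (failed verdict, outlog hash data or None).
def execute_testcase_stub(meta_testcase, exe_path_map, env_vars):
    return False, "deadbeef"  # made-up verdict and hash for illustration

failed, outlog_hash = execute_testcase_stub(
    "devtests:test_add",    # hypothetical meta test case name
    "/path/to/program",     # executable to run the test against
    {"LC_ALL": "C"},        # env vars set before executing the test
)
print("FAILED" if failed else "PASSED", outlog_hash)
```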

generate_tests(meta_criteria_tool_obj=None, exe_path_map=None, test_tool_type_list=None, max_time=None, test_generation_guidance_obj=None, parallel_testgen_count=1, restart_checkpointer=False, finish_destroy_checkpointer=True)[source]
This method should be used to generate the tests and must

always have a single instance running (it has a single checkpoint file). Note: the caller must explicitly destroy the checkpointer after this call succeeds, to ensure that a scheduler will not re-execute it

Parameters:
  • meta_criteria_tool_obj

  • exe_path_map

  • test_tool_type_list

  • test_generation_guidance_obj

  • parallel_testgen_count

  • restart_checkpointer (bool) – Decide whether to discard the checkpoint and restart anew.

  • finish_destroy_checkpointer (bool) – Decide whether to automatically destroy the checkpointer when done or not. Useful if the caller has a checkpointer to update.

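The single-checkpoint contract described above can be sketched with a minimal stand-in checkpointer. The class and its marker-file mechanism are illustrative assumptions, not muteria's actual implementation:

```python
import os
import tempfile

class Checkpointer:
    """Minimal stand-in: a checkpoint is just a marker file on disk."""
    def __init__(self, path):
        self.path = path
    def is_finished(self):
        return os.path.exists(self.path)
    def finish(self):
        open(self.path, "w").close()
    def destroy(self):
        if os.path.exists(self.path):
            os.remove(self.path)

ckpt = Checkpointer(os.path.join(tempfile.mkdtemp(), "testgen.ckpt"))
if not ckpt.is_finished():   # a scheduler re-run would skip finished work
    ckpt.finish()            # ... generate tests here, then mark completion
ckpt.destroy()               # caller explicitly destroys after success,
                             # so a later scheduler run cannot re-execute
```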

get_candidate_tools_aliases(test_tool_type_list)[source]
get_checkpoint_state_object()[source]
get_devtest_toolalias()[source]
get_flakiness_workdir()[source]
get_test_tools_by_name(toolname)[source]
get_testcase_info_file(candidate_tool_aliases=None)[source]
get_testcase_info_object(candidate_tool_aliases=None)[source]
classmethod get_toolnames_by_types_by_language()[source]

Get information about the plugged-in testcase tool drivers.

Returns:

a dict having the form:

{
    language: {
        TestToolType: [
            (toolname, is_installed?)
        ]
    }
}
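Traversing a return value of that shape might look as follows; the languages, tool-type key, and tool names below are made up for illustration:

```python
# Made-up example of the documented return shape:
# {language: {TestToolType: [(toolname, is_installed?)]}}
tools_by_lang = {
    "c": {
        "USE_ONLY_CODE": [("klee", True), ("shadow_se", False)],
    },
    "python": {
        "USE_ONLY_CODE": [("pytest", True)],
    },
}

for language, by_type in sorted(tools_by_lang.items()):
    for tool_type, tools in sorted(by_type.items()):
        for toolname, is_installed in tools:
            status = "installed" if is_installed else "missing"
            print(f"{language}/{tool_type}: {toolname} ({status})")
```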

has_checkpointer()[source]
runtests(meta_testcases=None, exe_path_map=None, env_vars=None, stop_on_failure=False, per_test_timeout=None, use_recorded_timeout_times=None, recalculate_execution_times=False, fault_test_execution_matrix_file=None, fault_test_execution_execoutput_file=None, with_output_summary=True, hash_outlog=None, test_prioritization_module=None, parallel_test_count=1, parallel_test_scheduler=None, restart_checkpointer=False, finish_destroy_checkpointer=True)[source]

Execute the list of test cases with the given executable and say, for each test case, whether it failed

Parameters:
  • meta_testcases – list of test cases to execute

  • exe_path_map – string representing the file system path to the executable to execute with the tests

  • env_vars – dict of environment variables to set before executing each test ({<variable>: <value>})

  • stop_on_failure – decide whether to stop the test execution once a test fails

  • fault_test_execution_matrix_file – Optional matrix file to store the tests’ pass/fail execution data

  • fault_test_execution_execoutput_file – Optional output log file to store the tests’ execution actual output (hashed)

  • with_output_summary – decide whether to return outlog hash

  • test_prioritization_module – Specify the test prioritization module. (TODO: Implement support)

  • parallel_test_count – Specify the number of parallel test executions. Must be an integer >= 1 or None. When None, the maximum possible value is used.

  • parallel_test_scheduler – Specify the function that will handle parallel test scheduling by tool, using the test execution optimizer. (TODO: Implement support)

  • restart_checkpointer (bool) – Decide whether to discard the checkpoint and restart anew.

  • finish_destroy_checkpointer (bool) – Decide whether to automatically destroy the checkpointer when done or not. Useful if the caller has a checkpointer to update.

  • hash_outlog – decide whether to hash the outlog or not

Returns:

a dict mapping each test case to its failed verdict: {<test case name>: <True if failed, False if passed, UNCERTAIN_TEST_VERDICT if uncertain>}

If stop_on_failure is True, only the tests that were executed up to the failure are returned