* Trying to time every solution
* Proposal 2 for timing PE solutions:
- Use a pytest fixture along with the --capture=no flag to print out the
top DURATIONS slowest solutions at the end of the test session
(see the sketch below).
- Remove the print part and the try ... except ... block from the test
function.
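
A minimal conftest.py-style sketch of what that fixture could look like; the
DURATIONS constant, the fixture name, and the report format are assumptions
based on the description, and the printout is only visible when pytest runs
with --capture=no:

```python
import time

import pytest

DURATIONS = 10  # how many of the slowest solutions to report (assumed name)
_timings: list[tuple[str, float]] = []


@pytest.fixture(autouse=True)
def time_solution(request):
    """Time every test and record (test name, elapsed seconds)."""
    start = time.perf_counter()
    yield
    _timings.append((request.node.name, time.perf_counter() - start))


def pytest_sessionfinish(session, exitstatus):
    """Print the top DURATIONS slowest solutions when the session ends."""
    slowest = sorted(_timings, key=lambda pair: pair[1], reverse=True)
    for name, elapsed in slowest[:DURATIONS]:
        print(f"{name}: {elapsed:.3f}s")
```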
* Proposal 3 for timing PE solutions:
Completely changed the way I was performing the tests. Instead of
parametrizing the problem numbers and expected output, I will
parametrize the solution file path.
Steps:
- Collect all the solution file paths
- Convert each path into a Python module
- Call solution() on the module
- Assert the answer against the expected result
For the assertion, the JSON list object needed to be converted into a
Python dictionary object, which required changing the JSON file itself.
A rough sketch of this flow is shown below.
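
The sketch below follows the steps listed above; the project_euler/ layout,
the answers.json file name, and the solution() entry point in every file are
assumptions inferred from the description, not confirmed details:

```python
import importlib.util
import json
import pathlib

import pytest

PROJECT_EULER_DIR = pathlib.Path.cwd() / "project_euler"

# Assumed JSON object mapping problem numbers (as strings) to answers.
with open(PROJECT_EULER_DIR / "answers.json") as answers_file:
    ANSWERS: dict = json.load(answers_file)


def all_solution_paths():
    """Collect every solution file under the Project Euler directory."""
    return sorted(PROJECT_EULER_DIR.glob("problem_*/sol*.py"))


def convert_path_to_module(file_path: pathlib.Path):
    """Import the file at file_path and return it as a module object."""
    spec = importlib.util.spec_from_file_location(file_path.name, str(file_path))
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


@pytest.mark.parametrize("solution_path", all_solution_paths())
def test_project_euler(solution_path):
    problem_number = solution_path.parent.name.split("_")[1]  # e.g. problem_01
    module = convert_path_to_module(solution_path)
    assert str(module.solution()) == ANSWERS[problem_number]
```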
* Add type hints for variables
* Fix whitespace in single_qubit_measure
* Removed printing error_msgs at the end of the tests:
This was done only to reduce the message clutter produced by 60
failing tests. Now that this is fixed, we can produce the traceback in
short form and let pytest print the captured error message
output at the end of the test run.
* Start validate_solutions script for Travis CI
I am separating solution testing from the doctests because validating
all the solutions currently present takes about 2 minutes to run.
* Add file for testing Project Euler solutions
* Remove the importlib import
* Update project_euler/solution_test.py
Co-authored-by: Christian Clauss <cclauss@me.com>
* Small tweaks to project_euler/solution_test.py
* Test Project Euler solutions through Travis
* Improved testing for Project Euler solutions:
- Renamed file so that it isn't picked up by pytest
- Fail fast on validating solutions through Travis CI
* Update validate_solutions.py
* Use namedtuple for input parameters and answer (see the sketch after
this item)
- Remove logging
- Remove unnecessary checks for PROJECT_EULER_PATH as Travis CI
picks up the same path
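
The namedtuple idea might look like the sketch below; the type and field
names are assumptions, and the answers shown are the well-known results for
problems 1 and 2:

```python
from collections import namedtuple

# Field names are guesses based on the commit description.
ProblemCase = namedtuple("ProblemCase", "problem_number params answer")

CASES = [
    ProblemCase(problem_number=1, params=(1000,), answer=233168),
    ProblemCase(problem_number=2, params=(4_000_000,), answer=4613732),
]

for case in CASES:
    print(f"Problem {case.problem_number}: expecting {case.answer}")
```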
* Fix flake8 errors: line too long
* Small tweaks to validate_solutions.py
* Add all answers & back to using dictionary
* Using pytest for testing Project Euler solutions
- As we want to fail fast on testing solutions, we need to run this
script first, before the tests written by the author.
- As a plain assertion failure aborts the remaining iterations inside a
test function, we need pytest-subtests to tell pytest to run all the
iterations of the function with the given parameters (see the sketch
below).
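
A minimal sketch of the pytest-subtests usage; the `subtests` fixture comes
from the pytest-subtests plugin, while the helper and its data here are
purely illustrative stand-ins for the real solution runs:

```python
def all_solution_results():
    """Stand-in for iterating real solutions; this data is illustrative."""
    return [(1, "233168", "233168"), (2, "4613732", "4613732")]


def test_all_solutions(subtests):  # fixture provided by pytest-subtests
    for number, computed, expected in all_solution_results():
        # Each subtest is reported separately, so one failure does not
        # abort the remaining iterations of the loop.
        with subtests.test(msg=f"Problem {number}"):
            assert computed == expected
```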
* Print error messages in one-line format
* Moved the answers into a separate file:
- Add a custom print function to print all the error messages at the
end of all the tests (a sketch follows this item)
- Let Travis skip if this failed
Co-authored-by: Christian Clauss <cclauss@me.com>
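
A sketch of the collect-then-print idea described above; the function name
and the one-line message format are assumptions, with a non-zero exit status
so Travis CI can detect the failure:

```python
import sys

error_msgs: list[str] = []


def check_solution(problem: str, computed: str, expected: str) -> None:
    """Record a one-line error message instead of failing immediately."""
    if computed != expected:
        error_msgs.append(
            f"Problem {problem}: expected {expected} but got {computed}"
        )


if __name__ == "__main__":
    check_solution("01", "233168", "233168")  # illustrative calls
    check_solution("02", "999", "4613732")    # deliberately wrong
    if error_msgs:
        print("\n".join(error_msgs))
        sys.exit(1)  # non-zero exit lets Travis CI mark the job as failed
```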