Compare commits

...

108 Commits

Author SHA1 Message Date
Isidro Arias
38aca4daca Merge branch 'master' into patch-2 2023-08-14 20:54:34 +02:00
Isidro Arias
f76fbace02 remove type hint 2023-08-14 20:53:22 +02:00
isidroas
7973b7b265
child instead of children
Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-08-14 20:42:09 +02:00
isidroas
ba2efa2181
convert property 'is_right' to one-liner
Also use 'is' instead of '=='

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-08-14 20:39:50 +02:00
Caeden Perelli-Harris
fb1b939a89
Consolidate find_min and find_min_recursive and find_max and find_max_recursive (#8960)
* updating DIRECTORY.md

* refactor(min-max): Consolidate implementations

* updating DIRECTORY.md

* refactor(min-max): Append _iterative to func name

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-08-14 04:17:27 -07:00
robertjcalistri
2ab3bf2689
Added functions to calculate temperature of an ideal gas and number o… (#8919)
* Added functions to calculate temperature of an ideal gas and number of moles of an ideal gas

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update physics/ideal_gas_law.py

Renamed function

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update physics/ideal_gas_law.py

Updated formatting

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update physics/ideal_gas_law.py

Removed unnecessary parentheses

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update physics/ideal_gas_law.py

Removed unnecessary parentheses

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update ideal_gas_law.py

Updated incorrect function calls in the moles-of-gas-system doctests

* Update physics/ideal_gas_law.py

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-08-14 02:31:53 -07:00
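For context on the commit above: the helpers are built on the ideal gas law PV = nRT. A minimal sketch of what such functions typically look like, assuming SI units (Pa, m^3, mol, K); the names and signatures here are illustrative, not necessarily those in physics/ideal_gas_law.py:

```python
# Minimal sketch based on PV = nRT (SI units); names are illustrative.
R = 8.314462618  # molar gas constant, J/(mol*K)


def temperature_of_gas_system(moles: float, volume: float, pressure: float) -> float:
    """T = PV / (nR)"""
    if moles <= 0 or volume <= 0 or pressure <= 0:
        raise ValueError("Invalid inputs. Enter positive values.")
    return pressure * volume / (moles * R)


def moles_of_gas_system(kelvin: float, volume: float, pressure: float) -> float:
    """n = PV / (RT)"""
    if kelvin <= 0 or volume <= 0 or pressure <= 0:
        raise ValueError("Invalid inputs. Enter positive values.")
    return pressure * volume / (R * kelvin)


# 1 mol at ~101325 Pa in 1 m^3 is very hot: T = 101325 / 8.314 ~ 12187 K
assert 12100 < temperature_of_gas_system(1, 1, 101325) < 12250
```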
Adithya Awati
ac68dc1128
Fixed Pytest warnings for machine_learning/forecasting (#8958)
* updating DIRECTORY.md

* Fixed pytest warnings

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-08-14 01:34:16 -07:00
Caeden Perelli-Harris
4b7ecb6a81
Create is valid email address algorithm (#8907)
* feat(strings): Create is valid email address

* updating DIRECTORY.md

* feat(strings): Create is_valid_email_address algorithm

* chore(is_valid_email_address): Implement changes from code review

* Update strings/is_valid_email_address.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* chore(is_valid_email_address): Fix ruff error

* Update strings/is_valid_email_address.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-08-14 01:28:52 -07:00
Adithya Awati
c290dd6a43
Update run.py in machine_learning/forecasting (#8957)
* Fixed reading CSV file, added type check for data_safety_checker function

* Formatted run.py

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-08-14 00:16:24 -07:00
Ajinkya Chikhale
02d89bde67
Added implementation for Tribonacci sequence using dp (#6356)
* Added implementation for Tribonacci sequence using dp

* Updated parameter name

* Apply suggestions from code review

---------

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-08-14 00:12:42 -07:00
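A dynamic-programming Tribonacci (each term is the sum of the previous three) fits in a few lines. This sketch assumes the 0, 0, 1 starting values and is illustrative rather than the PR's exact code:

```python
def tribonacci(num: int) -> list[int]:
    """First `num` Tribonacci numbers; each term is the sum of the
    previous three, starting from 0, 0, 1."""
    dp = [0] * max(num, 3)
    dp[2] = 1
    for i in range(3, num):
        dp[i] = dp[i - 1] + dp[i - 2] + dp[i - 3]
    return dp[:num]


assert tribonacci(8) == [0, 0, 1, 1, 2, 4, 7, 13]
```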
Amir Hosseini
f24ab2c60d
Add: Two Regex match algorithm (Recursive & DP) (#6321)
* Add recursive solution to regex_match.py

* Add dp solution to regex_match.py

* Add link to regex_match.py

* Minor edit

* Minor change

* Minor change

* Update dynamic_programming/regex_match.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update dynamic_programming/regex_match.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Fix ruff formatting in if statements

* Update dynamic_programming/regex_match.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-08-13 22:37:41 -07:00
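The DP variant of such a matcher is usually the classic table formulation, where '.' matches any single character and '*' means zero or more of the preceding element. A self-contained sketch under those assumed semantics (the PR's dynamic_programming/regex_match.py may differ in details):

```python
def dp_match(text: str, pattern: str) -> bool:
    """dp[i][j] is True iff text[:i] matches pattern[:j].
    '.' matches any one character; '*' repeats the preceding element."""
    m, n = len(text), len(pattern)
    dp = [[False] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = True
    for j in range(1, n + 1):  # patterns like a* or a*b* can match ""
        if pattern[j - 1] == "*":
            dp[0][j] = dp[0][j - 2]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if pattern[j - 1] in {text[i - 1], "."}:
                dp[i][j] = dp[i - 1][j - 1]
            elif pattern[j - 1] == "*":
                dp[i][j] = dp[i][j - 2]  # zero copies of the element
                if pattern[j - 2] in {text[i - 1], "."}:
                    dp[i][j] = dp[i][j] or dp[i - 1][j]  # one more copy
    return dp[m][n]


assert dp_match("aab", "c*a*b")
assert dp_match("abc", "a.c")
assert not dp_match("abc", "ab")
```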
Caeden Perelli-Harris
9d86d4edaa
Create wa-tor algorithm (#8899)
* feat(cellular_automata): Create wa-tor algorithm

* updating DIRECTORY.md

* chore(quality): Implement algo-keeper bot changes

* Update cellular_automata/wa_tor.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactor(repr): Return repr as python object

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update cellular_automata/wa_tor.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* refactor(display): Rename to display_visually to visualise

* refactor(wa-tor): Use double for loop

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* chore(wa-tor): Implement suggestions from code review

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-08-13 17:58:17 -07:00
Maxim Smolskiy
4f2a346c27
Reduce the complexity of linear_algebra/src/polynom_for_points.py (#8605)
* Reduce the complexity of linear_algebra/src/polynom_for_points.py

* updating DIRECTORY.md

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix

* Fix review issues

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-08-13 03:05:42 -07:00
Suman
c39b7eadbd
updated the URL and HTML tags for scraping Yahoo Finance (#8942)
* updated the URL and tags for Yahoo Finance

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updated to return the error text

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-08-12 14:58:37 -07:00
Tianyi Zheng
ae0fc85401
Fix ruff errors (#8936)
* Fix ruff errors

Renamed neural_network/input_data.py to neural_network/input_data.py_tf
because it should be left out of the directory for the following
reasons:

1. Its sole purpose is to be used by neural_network/gan.py_tf, which is
   itself left out of the directory because of issues with TensorFlow.

2. It was taken directly from TensorFlow's codebase and is actually
   already deprecated. If/when neural_network/gan.py_tf is eventually
   re-added back to the directory, its implementation should be changed
   to not use neural_network/input_data.py anyway.

* updating DIRECTORY.md

* Change input_data.py_tf file extension

Change input_data.py_tf file extension because algorithms-keeper bot is being picky about it

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-08-09 13:25:30 +05:30
AmirSoroush
842d03fb2a
improvements to jump_search.py (#8932)
* improvements to jump_search.py

* add more tests to jump_search.py
2023-08-08 14:47:09 -07:00
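For context, jump search probes a sorted list in blocks of about sqrt(n), then scans linearly inside the block that could contain the item, giving O(sqrt(n)) comparisons. A minimal sketch (illustrative, not the PR's exact code):

```python
import math


def jump_search(arr: list, item) -> int:
    """Sorted-list search in O(sqrt(n)): jump block by block, then scan
    the block that could contain the item; return -1 if absent."""
    n = len(arr)
    step = int(math.sqrt(n))
    prev = 0
    while prev < n and arr[min(n, prev + step) - 1] < item:
        prev += step
    for i in range(prev, min(n, prev + step)):
        if arr[i] == item:
            return i
    return -1


assert jump_search([0, 1, 2, 3, 4, 5], 4) == 4
assert jump_search([0, 1, 2, 3, 4, 5], 7) == -1
```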
pre-commit-ci[bot]
ac62cdb94f
[pre-commit.ci] pre-commit autoupdate (#8930)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.281 → v0.0.282](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.281...v0.0.282)

* updating DIRECTORY.md

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-08-07 19:52:39 -04:00
Dipankar Mitra
db6bd4b17f
IQR function is added (#8851)
* tanh function has been added

* tanh function has been added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tanh function is added

* tanh function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tanh function added

* tanh function added

* tanh function is added

* Apply suggestions from code review

* ELU activation function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* elu activation is added

* ELU activation is added

* Update maths/elu_activation.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Exponential_linear_unit activation is added

* Exponential_linear_unit activation is added

* SiLU activation is added

* SiLU activation is added

* mish added

* mish activation is added

* inter_quartile_range function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Mish activation function is added

* Mish action is added

* mish activation added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* mish activation added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* inter quartile range (IQR) function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* IQR function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* code optimized in IQR function

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* interquartile_range function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update maths/interquartile_range.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Changes on interquartile_range

* numpy removed from interquartile_range

* Fixes from code review

* Update interquartile_range.py

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-08-07 07:47:42 -04:00
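The interquartile range this long thread converges on is Q3 - Q1, computable without numpy from the medians of the lower and upper halves. A sketch under that definition (names are illustrative):

```python
def find_median(nums: list[float]) -> float:
    """Median of an already sorted list."""
    div, mod = divmod(len(nums), 2)
    return nums[div] if mod else (nums[div] + nums[div - 1]) / 2


def interquartile_range(nums: list[float]) -> float:
    """IQR = Q3 - Q1: medians of the upper and lower halves
    (assumes at least two values)."""
    if not nums:
        raise ValueError("The list is empty. Provide a non-empty list.")
    nums = sorted(nums)
    div, mod = divmod(len(nums), 2)
    return find_median(nums[div + mod :]) - find_median(nums[:div])


assert interquartile_range([4, 1, 2, 3, 2]) == 2.0
```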
AmirSoroush
ce218c57f1
fixes #8673; Add operator's associativity check for stacks/infix_to_p… (#8674)
* fixes #8673; Add operator's associativity check for stacks/infix_to_postfix_conversion.py

* fix ruff N806 in stacks/infix_to_postfix_conversion.py

* Update data_structures/stacks/infix_to_postfix_conversion.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update data_structures/stacks/infix_to_postfix_conversion.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

---------

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-08-01 11:23:34 -07:00
pre-commit-ci[bot]
c9a7234a95
[pre-commit.ci] pre-commit autoupdate (#8914)
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.280 → v0.0.281](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.280...v0.0.281)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-08-01 09:26:23 +05:30
Jan Wojciechowski
f7c5e55609
Window closing fix (#8625)
* The window will now remain open after the fractal is finished being drawn, and will only close upon your click.

* Update fractals/sierpinski_triangle.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-07-31 20:02:49 -07:00
Minha, Jeong
f8fe72dc37
Update game_of_life.py (#4921)
* Update game_of_life.py

fix docstring error
delete unneeded next_gen_canvas code (local variable)

* Update cellular_automata/game_of_life.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-07-31 14:24:12 -07:00
Tianyi Zheng
5cf34d901e
Ruff fixes (#8913)
* updating DIRECTORY.md

* Fix ruff error in eulerian_path_and_circuit_for_undirected_graph.py

* Fix ruff error in newtons_second_law_of_motion.py

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-07-31 13:53:26 -07:00
Dylan Buchi
90a8e6e0d2
Update sorts/bubble_sort.py (#5802)
* Add missing type annotations in bubble_sort.py

* Refactor bubble_sort function
2023-07-31 11:50:00 -07:00
roger-sato
0b0214c42f
Handle empty input case in Segment Tree build process (#8718) 2023-07-31 11:46:30 -07:00
Tianyi Zheng
629eb86ce0
Fix merge conflicts to merge change from #5080 (#8911)
* Input for user to choose their Collatz sequence

Now the user can tell the algorithm which number to run the Collatz sequence on.

* updating DIRECTORY.md

---------

Co-authored-by: Hugo Folloni <hugofollogua07@gmail.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-07-31 07:23:23 +02:00
AmirSoroush
384c407a26
Enhance the implementation of Queue using list (#8608)
* enhance the implementation of queue using list

* enhance readability of queue_on_list.py

* rename 'queue_on_list' to 'queue_by_list' to match the class name
2023-07-30 19:07:35 -07:00
Almas Bekbayev
8cce9cf066
Fix linear_search docstring return value (#8644) 2023-07-30 18:32:05 -07:00
David Leal
4710e51deb
chore: use newest Discord invite link (#8696)
* updating DIRECTORY.md

* chore: use newest Discord invite link

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-07-30 18:15:30 -07:00
AmirSoroush
d4f2873e39
add reverse_inorder traversal to binary_tree_traversals.py (#8726)
* add reverse_inorder traversal to binary_tree_traversals.py

* Apply suggestions from code review

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

---------

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-07-30 17:54:15 -07:00
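Reverse inorder simply visits the right subtree, then the node, then the left subtree, yielding the inorder sequence backwards. A small illustrative sketch:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Node:
    data: int
    left: Node | None = None
    right: Node | None = None


def reverse_inorder(node: Node | None) -> list[int]:
    """Right subtree, then the node, then left subtree."""
    if node is None:
        return []
    return reverse_inorder(node.right) + [node.data] + reverse_inorder(node.left)


# inorder of this tree is [4, 2, 5, 1, 3]; reverse inorder flips it
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
assert reverse_inorder(root) == [3, 1, 5, 2, 4]
```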
Bazif Rasool
8b831cb600
Added Altitude Pressure equation (#8909)
* Added Altitude Pressure equation

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Removed trailing whitespaces

* Removed pylint

* Fix lru_cache_pythonic.py

* Fixed spellings

* Fix again lru_cache_pythonic.py

* Update .vscode/settings.json

Co-authored-by: Christian Clauss <cclauss@me.com>

* Third fix lru_cache_pythonic.py

* Update .vscode/settings.json

Co-authored-by: Christian Clauss <cclauss@me.com>

* 4th fix lru_cache_pythonic.py

* Update physics/altitude_pressure.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* lru_cache_pythonic.py: def get(self, key: Any, /) -> Any | None:

* Delete lru_cache_pythonic.py

* Added positive and negative pressure test cases

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-30 17:00:58 +02:00
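The altitude/pressure relation referenced above is usually the tropospheric barometric approximation P = 101325 * (1 - 2.25577e-5 * h)^5.25588 (h in metres, P in pascals, valid to about 11 km). A sketch of the forward direction, with an assumed function name (the PR's file may instead solve for altitude given pressure):

```python
def pressure_at_altitude(altitude: float) -> float:
    """Tropospheric approximation, altitude in metres, result in pascals:
    P = 101325 * (1 - 2.25577e-5 * h) ** 5.25588"""
    if not 0 <= altitude <= 11_000:
        raise ValueError("Altitude must be between 0 and 11,000 m")
    return 101325 * (1 - 2.25577e-5 * altitude) ** 5.25588


assert round(pressure_at_altitude(0)) == 101325
assert 89_800 < pressure_at_altitude(1000) < 89_950  # ~89.9 kPa at 1 km
```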
Yatharth Mathur
d31750adec
Pythonic implementation of LRU Cache (#4630)
* Added a more pythonic implementation of LRU_Cache.[#4628]

* Added test cases and doctest

* Fixed doc tests

* Added more tests in doctests and fixed return types; fixes [#4628]

* better doctests

* added doctests to main()

* Added dutch_national_flag.py in sorts. fixing [#4636]

* Delete dutch_national_flag.py

incorrect commit

* Update lru_cache_pythonic.py

* Remove pontification

---------

Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-30 11:27:45 +02:00
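A "pythonic" LRU cache is commonly built on collections.OrderedDict, whose move_to_end and popitem give O(1) recency updates and eviction. A minimal sketch of that pattern (not necessarily the contents of the since-deleted lru_cache_pythonic.py):

```python
from collections import OrderedDict


class LRUCache(OrderedDict):
    """LRU cache built on OrderedDict's move_to_end and popitem."""

    def __init__(self, capacity: int) -> None:
        super().__init__()
        self.capacity = capacity

    def get(self, key):
        if key not in self:
            return None
        self.move_to_end(key)  # mark as most recently used
        return super().__getitem__(key)

    def put(self, key, value) -> None:
        if key in self:
            self.move_to_end(key)
        super().__setitem__(key, value)
        if len(self) > self.capacity:
            self.popitem(last=False)  # evict the least recently used


cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1
cache.put(3, 3)  # evicts key 2
assert cache.get(2) is None
```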
Colin Leroy-Mira
2cfef0913a
Fix greyscale computation and inverted coords (#8905)
* Fix greyscale computation and inverted coords

* Fix test

* Add test cases

* Add reference to the greyscaling formula

---------

Co-authored-by: Colin Leroy-Mira <colin.leroy-mira@sigfox.com>
2023-07-29 10:03:43 -07:00
Tianyi Zheng
0ef9306976
Disable quantum/quantum_random.py (attempt 2) (#8902)
* Disable quantum/quantum_random.py

Temporarily disable quantum/quantum_random.py because it produces an illegal instruction error that causes all builds to fail

* updating DIRECTORY.md

* Disable quantum/quantum_random.py attempt 2

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-07-28 22:08:40 +02:00
Alex Bernhardt
a0b642cfe5
Physics/basic orbital capture (#8857)
* Added file basic_orbital_capture

* updating DIRECTORY.md

* added second source

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed spelling errors

* accepted changes

* updating DIRECTORY.md

* corrected spelling error

* Added file basic_orbital_capture

* added second source

* fixed spelling errors

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* applied changes

* reviewed and checked file

* added doctest

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* removed redundant constant

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* added scipy imports

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* added doctests to capture_radii and scipy const

* fixed conflicts

* finalizing file. Added tests

* Update physics/basic_orbital_capture.py

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-28 20:30:05 +02:00
Tianyi Zheng
e406801f9e
Reimplement polynomial_regression.py (#8889)
* Reimplement polynomial_regression.py

Rename machine_learning/polymonial_regression.py to
machine_learning/polynomial_regression.py

Reimplement machine_learning/polynomial_regression.py using numpy
because the original implementation was just a how-to on doing
polynomial regression using sklearn

Add detailed function documentation, doctests, and algorithm
explanation

* updating DIRECTORY.md

* Fix matrix formatting in docstrings

* Try to fix failing doctest

* Debugging failing doctest

* Fix failing doctest attempt 2

* Remove unnecessary return value descriptions in docstrings

* Readd placeholder doctest for main function

* Fix typo in algorithm description

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-07-28 20:17:46 +02:00
Caeden Perelli-Harris
4a83e3f0b1
Fix failing build due to missing requirement (#8900)
* feat(cellular_automata): Create wa-tor algorithm

* updating DIRECTORY.md

* chore(quality): Implement algo-keeper bot changes

* build: Fix broken ci

* git rm cellular_automata/wa_tor.py

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-28 20:12:31 +02:00
Christian Clauss
46454e204c
[skip-ci] In .devcontainer/Dockerfile: pipx install pre-commit ruff (#8893)
[skip-ci] In .devcontainer/Dockerfile: pipx install pre-commit ruff
2023-07-28 18:54:45 +02:00
Christian Clauss
dbaff34572
Fix ruff rules ISC flake8-implicit-str-concat (#8892) 2023-07-28 17:53:09 +01:00
HManiac74
b77e6adf3a
Add Docker devcontainer configuration files (#8887)
* Added Docker container configuration files

* Update Dockerfile

Copy and install requirements

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Updated Docker devcontainer configuration

* Update requirements.txt

* Update Dockerfile

* Update Dockerfile

* Update .devcontainer/devcontainer.json

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update Dockerfile

* Update Dockerfile. Add linebreak

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-25 22:23:20 +02:00
Sangmin Jeon
a03b739d23
Fix radix_tree.py insertion fail in ["*X", "*XX"] cases (#8870)
* Fix insertion fail in ["*X", "*XX"] cases

Consider a word, and a copy of that word, but with the last letter repeating twice. (e.g., ["ABC", "ABCC"])
When adding the second word's last letter, it only compares the previous word's prefix—the last letter of the word already in the Radix Tree: 'C'—and the letter to be added—the last letter of the word we're currently adding: 'C'. So it wrongly passes the "Case 1" check, marks the current node as a leaf node when it already was, then returns when there's still one more letter to add.
The issue arises because `prefix` includes the letter of the node itself. (e.g., `nodes: {'C' : RadixNode()}, is_leaf: True, prefix: 'C'`) It can be easily fixed by simply adding the `is_leaf` check, asking if there are more letters to be added.

- Test Case: `"A AA AAA AAAA"`
  - Fixed correct output:
  ```
  Words: ['A', 'AA', 'AAA', 'AAAA']
  Tree:
  - A   (leaf)
  -- A   (leaf)
  --- A   (leaf)
  ---- A   (leaf)
  ```
  - Current incorrect output:
  ```
  Words: ['A', 'AA', 'AAA', 'AAAA']
  Tree:
  - A   (leaf)
  -- AA   (leaf)
  --- A   (leaf)
  ```

*N.B.* This passed test cases for [Croatian Open Competition in Informatics 2012/2013 Contest #3 Task 5 HERKABE](https://hsin.hr/coci/archive/2012_2013/)

* Add a doctest for previous fix

* improve doctest readability
2023-07-24 11:29:05 +02:00
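The essence of the fix described above: an exact-match branch may only mark a node as a word ending when no letters remain to insert; keying that branch on is_leaf and the remaining word prevents "AAAA" from being swallowed after "AAA". A compact, self-contained sketch of corrected radix insertion (not the repo's exact code) that passes the test case from the commit message:

```python
class RadixNode:
    def __init__(self, prefix: str = "", is_leaf: bool = False) -> None:
        self.nodes: dict[str, RadixNode] = {}
        self.prefix = prefix
        self.is_leaf = is_leaf

    def match(self, word: str) -> tuple[str, str, str]:
        """Split into (common part, leftover prefix, leftover word)."""
        x = 0
        for p, w in zip(self.prefix, word):
            if p != w:
                break
            x += 1
        return self.prefix[:x], self.prefix[x:], word[x:]

    def insert(self, word: str) -> None:
        """Insert `word` below this node (`word` excludes this node's prefix)."""
        if word[0] not in self.nodes:
            # no edge shares a first letter with the word: new leaf child
            self.nodes[word[0]] = RadixNode(prefix=word, is_leaf=True)
            return
        incoming = self.nodes[word[0]]
        matching, rem_prefix, rem_word = incoming.match(word)
        if rem_prefix == "":
            if rem_word == "":
                # Word ends exactly at the child. The reported bug fired this
                # branch too early (before all letters were consumed); the fix
                # keys it on whether letters remain (the is_leaf check).
                incoming.is_leaf = True
            else:
                incoming.insert(rem_word)
        else:
            # split the child at the common part, then hang both remainders
            incoming.prefix = rem_prefix
            new_node = RadixNode(matching, is_leaf=(rem_word == ""))
            new_node.nodes[rem_prefix[0]] = incoming
            self.nodes[matching[0]] = new_node
            if rem_word:
                new_node.insert(rem_word)


def count_leaves(node: RadixNode) -> int:
    return int(node.is_leaf) + sum(count_leaves(c) for c in node.nodes.values())


root = RadixNode()
for w in "A AA AAA AAAA".split():
    root.insert(w)
assert count_leaves(root) == 4  # every word, including "AAAA", is stored
```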
Caeden Perelli-Harris
9e08c7726d
Small docstring time complexity fix in number_container _system (#8875)
* fix: Write time is O(log n) not O(n log n)

* chore: Update pre-commit ruff version

* revert: Undo previous commit
2023-07-22 12:34:19 +02:00
Tianyi Zheng
f7531d9874
Add note in CONTRIBUTING.md about not asking to be assigned to issues (#8871)
* Add note in CONTRIBUTING.md about not asking to be assigned to issues

Add a paragraph to CONTRIBUTING.md explicitly asking contributors to not ask to be assigned to issues

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

---------

Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-22 12:11:04 +02:00
Caeden Perelli-Harris
93fb169627
[Upgrade Ruff] Fix all errors raised from ruff (#8879)
* chore: Fix tests

* chore: Fix failing ruff

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* chore: Fix ruff errors

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* chore: Fix ruff errors

* chore: Fix ruff errors

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update cellular_automata/game_of_life.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* chore: Update ruff version in pre-commit

* chore: Fix ruff errors

* Update edmonds_karp_multiple_source_and_sink.py

* Update factorial.py

* Update primelib.py

* Update min_cost_string_conversion.py

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-22 12:05:10 +02:00
pre-commit-ci[bot]
5aefc00f0f
[pre-commit.ci] pre-commit autoupdate (#8872)
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.277 → v0.0.278](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.277...v0.0.278)
- [github.com/psf/black: 23.3.0 → 23.7.0](https://github.com/psf/black/compare/23.3.0...23.7.0)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-07-18 09:58:22 +05:30
pre-commit-ci[bot]
f614ed7217
[pre-commit.ci] pre-commit autoupdate (#8860)
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.276 → v0.0.277](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.276...v0.0.277)
- [github.com/tox-dev/pyproject-fmt: 0.12.1 → 0.13.0](https://github.com/tox-dev/pyproject-fmt/compare/0.12.1...0.13.0)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-07-11 11:55:32 +02:00
Caeden Perelli-Harris
44b1bcc7c7
Fix failing tests from ruff/newton_raphson (ignore S307 "possibly insecure function") (#8862)
* chore: Fix failing tests (ignore S307 "possibly insecure function")

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix: Move noqa back to right line

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-07-11 11:51:21 +02:00
Tianyi Zheng
a0eec90466
Consolidate duplicate implementations of max subarray (#8849)
* Remove max subarray sum duplicate implementations

* updating DIRECTORY.md

* Rename max_sum_contiguous_subsequence.py

* Fix typo in dynamic_programming/max_subarray_sum.py

* Remove duplicate divide and conquer max subarray

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-07-11 11:44:12 +02:00
pre-commit-ci[bot]
c9ee6ed188
[pre-commit.ci] pre-commit autoupdate (#8853)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.275 → v0.0.276](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.275...v0.0.276)

* Update double_ended_queue.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update double_ended_queue.py

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-07-04 00:20:35 +02:00
pre-commit-ci[bot]
929d3d9219
[pre-commit.ci] pre-commit autoupdate (#8842)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.274 → v0.0.275](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.274...v0.0.275)
- [github.com/tox-dev/pyproject-fmt: 0.12.0 → 0.12.1](https://github.com/tox-dev/pyproject-fmt/compare/0.12.0...0.12.1)
- [github.com/pre-commit/mirrors-mypy: v1.3.0 → v1.4.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.3.0...v1.4.1)

* updating DIRECTORY.md

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-06-27 07:23:54 +02:00
Tianyi Zheng
69f20033e5
Remove duplicate implementation of Collatz sequence (#8836)
* updating DIRECTORY.md

* Remove duplicate implementation of Collatz sequence

* updating DIRECTORY.md

* Add suggestions from PR review

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-06-26 11:15:31 +02:00
duongoku
62dcbea943
Add power sum problem (#8832)
* Add powersum problem

* Add doctest

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add more doctests

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add more doctests

* Improve parameter name

* Fix line too long

* Remove global variables

* Apply suggestions from code review

* Apply suggestions from code review

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-06-26 09:39:18 +02:00
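The power sum problem asks in how many ways a number can be written as a sum of x-th powers of distinct natural numbers. A backtracking sketch of that formulation (names here are illustrative):

```python
def count_power_sums(needed: int, power: int, base: int = 1) -> int:
    """Ways to write `needed` as a sum of `power`-th powers of distinct
    natural numbers, considering bases base, base + 1, ..."""
    value = base**power
    if value > needed:
        return 0
    if value == needed:
        return 1
    # either use this base's power, or skip to the next base
    return count_power_sums(needed - value, power, base + 1) + count_power_sums(
        needed, power, base + 1
    )


assert count_power_sums(100, 2) == 3  # 10^2; 6^2+8^2; 1+3^2+4^2+5^2+7^2
assert count_power_sums(100, 3) == 1  # 1^3 + 2^3 + 3^3 + 4^3
```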
Christian Clauss
d764eec655
Fix failing pytest quantum/bb84.py (#8838)
* Fix failing pytest quantum/bb84.py

* Update bb84.py test results to match current qiskit
2023-06-26 08:54:50 +05:30
Christian Clauss
3bfa89dacf
GitHub Actions build: Add more tests (#8837)
* GitHub Actions build: Add more tests

Re-enable some tests that were disabled in #6591.
Fixes #8818

* updating DIRECTORY.md

* TODO: Re-enable quantum tests

* fails: pytest quantum/bb84.py quantum/q_fourier_transform.py

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-06-25 21:58:01 +05:30
Tianyi Zheng
267a8b72f9
Clarify how to add issue numbers in PR template and CONTRIBUTING.md (#8833)
* updating DIRECTORY.md

* Clarify wording in PR template

* Clarify CONTRIBUTING.md wording about adding issue numbers

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add suggested change from review to CONTRIBUTING.md

Co-authored-by: Christian Clauss <cclauss@me.com>

* Incorporate review edit to CONTRIBUTING.md

Co-authored-by: Christian Clauss <cclauss@me.com>

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-06-23 15:56:58 +02:00
Himanshu Tomar
331585f3f8
Algorithm: Calculating Product Sum from a Special Array with Nested Structures (#8761)
* Added minimum waiting time problem solution using greedy algorithm

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* ruff --fix

* Add type hints

* Added two more doc test

* Removed unnecessary comments

* updated type hints

* Updated the code as per the code review

* Added recursive algo to calculate product sum from an array

* Added recursive algo to calculate product sum from an array

* Update doc string

* Added doctest for product_sum function

* Updated the code and added more doctests

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Added more test coverage for product_sum method

* Update product_sum.py

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-06-23 10:26:05 +02:00
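A product sum of a "special" (arbitrarily nested) array sums each level and weights every nested sub-array by its depth. A recursive sketch matching that common definition (illustrative names):

```python
def product_sum(arr: list, depth: int) -> int:
    """Sum the elements; a nested sub-array contributes its own product
    sum multiplied by its depth."""
    total = 0
    for el in arr:
        total += product_sum(el, depth + 1) if isinstance(el, list) else el
    return total * depth


def product_sum_array(array: list) -> int:
    return product_sum(array, 1)


# 5 + 2 + 2*(7 - 1) + 3 + 2*(6 + 3*(-13 + 8) + 4) = 12
assert product_sum_array([5, 2, [7, -1], 3, [6, [-13, 8], 4]]) == 12
```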
Jan-Lukas Huhn
f54a966810
Energy conversions (#8801)
* Create TestShiva

* Delete TestShiva

* Create energy_conversions.py

* Update conversions/energy_conversions.py

Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>

---------

Co-authored-by: ShivaDahal99 <130563462+ShivaDahal99@users.noreply.github.com>
Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>
2023-06-22 14:31:48 +02:00
Tianyi Zheng
5ffe601c86
Fix mypy errors in maths/sigmoid_linear_unit.py (#8786)
* updating DIRECTORY.md

* Fix mypy errors in sigmoid_linear_unit.py

* updating DIRECTORY.md

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-06-22 14:24:34 +02:00
Linus M. Henkel
5b0890bd83
Dijkstra algorithm with binary grid (#8802)
* Create TestShiva

* Delete TestShiva

* Implementation of the Dijkstra-Algorithm in a binary grid

* Update double_ended_queue.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update least_common_multiple.py

* Update sol1.py

* Update pyproject.toml

* Update pyproject.toml

* https://github.com/astral-sh/ruff-pre-commit v0.0.274

---------

Co-authored-by: ShivaDahal99 <130563462+ShivaDahal99@users.noreply.github.com>
Co-authored-by: jlhuhn <134317018+jlhuhn@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-06-22 13:49:09 +02:00
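Dijkstra on a binary grid treats each free cell as a vertex with unit-weight edges to its four neighbours; with uniform weights it degenerates to BFS, but the heap-based form generalises to weighted moves. A self-contained sketch assuming 1 marks a passable cell (the PR's conventions may differ):

```python
import heapq


def dijkstra_grid(
    grid: list[list[int]], source: tuple[int, int], destination: tuple[int, int]
) -> int:
    """Shortest 4-neighbour path length between two free cells (value 1)
    in a binary grid; returns -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == destination:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                if d + 1 < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + 1
                    heapq.heappush(heap, (d + 1, (nr, nc)))
    return -1


grid = [[1, 1, 1],
        [0, 1, 0],
        [0, 1, 1]]
assert dijkstra_grid(grid, (0, 0), (2, 2)) == 4
```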
Christian Clauss
07e6812888
Update .pre-commit-config.yaml (#8828)
* Update .pre-commit-config.yaml

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-06-20 21:33:16 +05:30
pre-commit-ci[bot]
0dee4a402c
[pre-commit.ci] pre-commit autoupdate (#8827)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/codespell-project/codespell: v2.2.4 → v2.2.5](https://github.com/codespell-project/codespell/compare/v2.2.4...v2.2.5)
- [github.com/tox-dev/pyproject-fmt: 0.11.2 → 0.12.0](https://github.com/tox-dev/pyproject-fmt/compare/0.11.2...0.12.0)

* updating DIRECTORY.md

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-06-20 15:56:14 +02:00
Turro
ea6c6056cf
Added apr_interest function to financial (#6025)
* Added apr_interest function to financial

* Update interest.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update financial/interest.py

* float

---------

Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-06-19 13:46:29 +02:00
Frank-1998
b0f871032e
Fix removing the root node in binary_search_tree.py removes the whole tree (#8752)
* fix issue #8715

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-06-18 18:30:06 +02:00
Ilkin Mengusoglu
e6f89a6b89
Simplex algorithm (#8825)
* feat: added simplex.py

* added docstrings

* Update linear_programming/simplex.py

Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>

* Update linear_programming/simplex.py

Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_programming/simplex.py

Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* ruff fix

Co-authored by: CaedenPH <caedenperelliharris@gmail.com>

* removed README to add in separate PR

* Update linear_programming/simplex.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* Update linear_programming/simplex.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

* fix class docstring

* add comments

---------

Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-06-18 18:00:02 +02:00
pre-commit-ci[bot]
4637986125
[pre-commit.ci] pre-commit autoupdate (#8817)
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.270 → v0.0.272](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.270...v0.0.272)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-06-13 00:09:33 +02:00
Caeden Perelli-Harris
daa0c8f3d3
Create count negative numbers in matrix algorithm (#8813)
* updating DIRECTORY.md

* feat: Count negative numbers in sorted matrix

* updating DIRECTORY.md

* chore: Fix pre-commit

* refactor: Combine functions into iteration

* style: Reformat reference

* feat: Add timings of each implementation

* chore: Fix problems with algorithms-keeper bot

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* test: Remove doctest from benchmark function

* Update matrix/count_negative_numbers_in_sorted_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update matrix/count_negative_numbers_in_sorted_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update matrix/count_negative_numbers_in_sorted_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update matrix/count_negative_numbers_in_sorted_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update matrix/count_negative_numbers_in_sorted_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update matrix/count_negative_numbers_in_sorted_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* refactor: Use sum instead of large iteration

* refactor: Use len not sum

* Update count_negative_numbers_in_sorted_matrix.py

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-06-10 14:21:49 +02:00
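The final shape of the algorithm above is one binary search per row of the row-wise sorted matrix, then len(row) minus the first negative index. A sketch assuming rows sorted in non-increasing order:

```python
def count_negatives(grid: list[list[int]]) -> int:
    """Rows sorted in non-increasing order: binary-search each row for the
    first negative, then add len(row) minus that index. O(m log n)."""
    total = 0
    for row in grid:
        lo, hi = 0, len(row)
        while lo < hi:
            mid = (lo + hi) // 2
            if row[mid] < 0:
                hi = mid
            else:
                lo = mid + 1
        total += len(row) - lo
    return total


assert count_negatives(
    [[4, 3, 2, -1], [3, 2, 1, -1], [1, 1, -1, -2], [-1, -1, -2, -3]]
) == 8
```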
Jan Wojciechowski
9c9da8ebf1
Improve readability of ciphers/mixed_keyword_cypher.py (#8626)
* refactored the code

* the code will now pass the test

* looked more into it and fixed the logic

* made the code easier to read, added comments and fixed the logic

* got rid of redundant code + plaintext can contain chars that are not in the alphabet

* fixed the redundant conversion of ascii_uppercase to a list

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* keyword and plaintext won't have default values

* ran the ruff command

* Update linear_discriminant_analysis.py and rsa_cipher.py (#8680)

* Update rsa_cipher.py by replacing %s with {}

* Update rsa_cipher.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* Update linear_discriminant_analysis.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* Update linear_discriminant_analysis.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* Update machine_learning/linear_discriminant_analysis.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update linear_discriminant_analysis.py

* updated

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>

* fixed some difficulties

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* added comments, made printing mapping optional, added 1 test

* shortened the line that was too long

* Update ciphers/mixed_keyword_cypher.py

Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: Tianyi Zheng <tianyizheng02@gmail.com>
2023-06-09 11:06:37 +02:00
Caeden Perelli-Harris
7775de0ef7
Create number container system algorithm (#8808)
* feat: Create number container system algorithm

* updating DIRECTORY.md

* chore: Fix failing tests

* Update other/number_container_system.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update other/number_container_system.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update other/number_container_system.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* chore: Add more tests

* chore: Create binary_search_insert failing test

* type: Update typehints to accept str, list and range

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-06-08 14:40:38 +02:00
ShivaDahal99
fa12b9a286
Speed of sound (#8803)
* Create TestShiva

* Delete TestShiva

* Add speed of sound

* Update physics/speed_of_sound.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update physics/speed_of_sound.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update speed_of_sound.py

* Update speed_of_sound.py

---------

Co-authored-by: jlhuhn <134317018+jlhuhn@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-06-07 23:47:27 +02:00
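Speed of sound in a fluid is typically the Newton-Laplace relation c = sqrt(K / rho), with bulk modulus K and density rho. An illustrative sketch (function name assumed):

```python
def speed_of_sound_in_a_fluid(density: float, bulk_modulus: float) -> float:
    """Newton-Laplace: c = sqrt(K / rho), K in Pa, rho in kg/m^3."""
    if density <= 0:
        raise ValueError("Impossible fluid density")
    if bulk_modulus <= 0:
        raise ValueError("Impossible bulk modulus")
    return (bulk_modulus / density) ** 0.5


# water at ~20 C: K ~ 2.15e9 Pa, rho ~ 998 kg/m^3 -> about 1468 m/s
assert 1460 < speed_of_sound_in_a_fluid(density=998, bulk_modulus=2.15e9) < 1475
```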
Caeden Perelli-Harris
80d95fccc3
Pytest locally fails due to API_KEY env variable (#8738)
* fix: Pytest locally fails due to API_KEY env variable (#8737)

* chore: Fix ruff errors
2023-06-03 18:16:33 +02:00
Chris O
3a9e5fa5ec
Create a Simultaneous Equation Solver Algorithm (#8773)
* Added simultaneous_linear_equation_solver.py

* Removed Augment class, replaced with recursive functions

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed edge cases

* Update settings.json

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-06-02 07:14:25 +02:00
nith2001
4621b0bb4f
Improved Graph Implementations (#8730)
* Improved Graph Implementations

Provides new implementation for graph_list.py and graph_matrix.py along with pytest suites for each. Fixes #8709

* Graph implementation style fixes, corrections, and refactored tests

* Helpful docs about graph implementation

* Refactored code to separate files and applied enumerate()

* Renamed files and refactored code to fail fast

* Error handling style fix

* Fixed f-string code quality issue

* Last f-string fix

* Added return types to test functions and more style fixes

* Added more function return types

* Added more function return types pt2

* Fixed error messages
2023-05-31 22:06:12 +02:00
Rudransh Bhardwaj
e871540e37
Added rank of matrix in linear algebra (#8687)
* Added rank of matrix in linear algebra

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Corrected name of function

* Corrected Rank_of_Matrix.py

* Completed rank_of_matrix.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* deleted Rank_of_Matrix.py in order to rename it

* created rank_of_matrix

* added more doctests in rank_of_matrix.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed some issues in rank_of_matrix.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* added more doctests in rank_of_matrix.py and fixed some bugs

* Update linear_algebra/src/rank_of_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update linear_algebra/src/rank_of_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update linear_algebra/src/rank_of_matrix.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update rank_of_matrix.py

* Update linear_algebra/src/rank_of_matrix.py

Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: Caeden Perelli-Harris <caedenperelliharris@gmail.com>
2023-05-31 17:03:02 +02:00
Sundaram Kumar Jha
4a27b54430
Update permutations.py (#8102) 2023-05-31 12:56:59 +12:00
Tianyi Zheng
c93659d7ce
Fix type error in strassen_matrix_multiplication.py (#8784)
* Fix type error in strassen_matrix_multiplication.py

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-05-30 12:37:54 +12:00
Christian Clauss
4b79d771cd
Add more ruff rules (#8767)
* Add more ruff rules

* Add more ruff rules

* pre-commit: Update ruff v0.0.269 -> v0.0.270

* Apply suggestions from code review

* Fix doctest

* Fix doctest (ignore whitespace)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-05-26 09:34:17 +02:00
Christian Clauss
dd3b499bfa
Rename is_palindrome.py to is_int_palindrome.py (#8768)
* Rename is_palindrome.py to is_int_palindrome.py

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-05-26 12:20:33 +05:30
Juyoung Kim
a17791d022
fix: graphs/greedy_best_first typo (#8766)
#8764
2023-05-25 14:54:18 +02:00
Caeden Perelli-Harris
cfbbfd9896
Merge and add benchmarks to palindrome algorithms in the strings/ directory (#8749)
* refactor: Merge and add benchmarks to palindrome

* updating DIRECTORY.md

* chore: Fix failing tests

* Update strings/palindrome.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update palindrome.py

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-05-25 12:56:23 +02:00
Ratnesh Kumar
a6631487b0
Fix CI badge in the README.md (#8137) 2023-05-25 12:34:11 +02:00
Chris O
200429fc47
Dual Number Automatic Differentiation (#8760)
* Added dual_number_automatic_differentiation.py

* updating DIRECTORY.md

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update maths/dual_number_automatic_differentiation.py

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-05-25 08:04:42 +02:00
Caeden Perelli-Harris
df88771905
Mark fetch anime and play as broken (#8763)
* updating DIRECTORY.md

* updating DIRECTORY.md

* fix: Correct ruff errors

* fix: Mark anime algorithm as broken

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-05-25 07:59:15 +02:00
pre-commit-ci[bot]
ce43a8ac4a
[pre-commit.ci] pre-commit autoupdate (#8759)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.267 → v0.0.269](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.267...v0.0.269)
- [github.com/abravalheri/validate-pyproject: v0.12.2 → v0.13](https://github.com/abravalheri/validate-pyproject/compare/v0.12.2...v0.13)

* updating DIRECTORY.md

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-05-23 05:54:30 +02:00
Daniel Luo
edc17b60e0
add __main__ around print (#8747) 2023-05-19 12:40:52 +12:00
Rohan Saraogi
cf5e34d479
Added is_palindrome.py (#8748) 2023-05-19 11:48:22 +12:00
Caeden Perelli-Harris
9b3e4028c6
Fixes broken "Create guess_the_number_search.py" (#8746) 2023-05-17 18:47:23 +12:00
Harkishan Khuva
a2783c6597
Create guess_the_number_search.py (#7937) 2023-05-17 12:22:24 +12:00
Alexander Pantyukhin
61cfb43d2b
Add h index (#8036) 2023-05-17 12:21:16 +12:00
Rohan Saraogi
3dc143f721
Added odd_sieve.py (#8740) 2023-05-17 12:08:56 +12:00
Tianyi Zheng
8102424950
local_weighted_learning.py: fix mypy errors and more (#8073) 2023-05-17 12:05:55 +12:00
Maxim Smolskiy
c0892a0651
Reduce the complexity of genetic_algorithm/basic_string.py (#8606) 2023-05-16 09:47:50 +12:00
pre-commit-ci[bot]
2a57dafce0
[pre-commit.ci] pre-commit autoupdate (#8716)
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.263 → v0.0.267](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.263...v0.0.267)
- [github.com/tox-dev/pyproject-fmt: 0.11.1 → 0.11.2](https://github.com/tox-dev/pyproject-fmt/compare/0.11.1...0.11.2)
- [github.com/pre-commit/mirrors-mypy: v1.2.0 → v1.3.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.2.0...v1.3.0)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-05-15 22:27:59 +01:00
Caeden Perelli-Harris
1faf10b5c2
Correct ruff failures (#8732)
* fix: Correct ruff problems

* updating DIRECTORY.md

* fix: Fix pre-commit errors

* updating DIRECTORY.md

---------

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-05-14 22:03:13 +01:00
Pronoy Mandal
793e564e1d
Create maximum_subsequence.py (#7811) 2023-05-11 07:00:59 +12:00
Margaret
6939538a41
adding the remove digit algorithm (#6708) 2023-05-11 06:55:48 +12:00
Margaret
997d56fb63
Switch case (#7995) 2023-05-11 06:53:47 +12:00
shricubed
44aa17fb86
Working binary insertion sort in Python (#8024) 2023-05-11 06:50:32 +12:00
Rohan Anand
209a59ee56
Update and_gate.py (#8690)
* Update and_gate.py

addressing issue #8656 by calling `test_and_gate()`, ensuring that all the assertions are verified before the actual output is printed.

* Update and_gate.py

addressing issue #8632
2023-05-10 21:38:52 +12:00
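The pattern described above, running the assertion-based test function before printing anything, looks roughly like this (a sketch of the idea, not the file's exact contents):

```python
def and_gate(input_1: int, input_2: int) -> int:
    """Logical AND of two binary inputs."""
    return int(input_1 and input_2)


def test_and_gate() -> None:
    """Check the full truth table before anything is printed."""
    assert and_gate(0, 0) == 0
    assert and_gate(0, 1) == 0
    assert and_gate(1, 0) == 0
    assert and_gate(1, 1) == 1


if __name__ == "__main__":
    test_and_gate()  # all assertions verified first (issue #8656)
    print(and_gate(1, 0))
    print(and_gate(1, 1))
```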
Pronoy Mandal
91cc3a240f
Update game_of_life.py (#8703)
Rectify spelling in docstring
2023-05-10 21:34:36 +12:00
Dipankar Mitra
7310514509
The ELU activation is added (#8699)
* tanh function has been added

* tanh function has been added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tanh function is added

* tanh function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tanh function added

* tanh function added

* tanh function is added

* Apply suggestions from code review

* ELU activation function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* elu activation is added

* ELU activation is added

* Update maths/elu_activation.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Exponential_linear_unit activation is added

* Exponential_linear_unit activation is added

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-05-02 16:36:28 +02:00
pre-commit-ci[bot]
777f966893
[pre-commit.ci] pre-commit autoupdate (#8704)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.262 → v0.0.263](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.262...v0.0.263)
- [github.com/tox-dev/pyproject-fmt: 0.10.0 → 0.11.1](https://github.com/tox-dev/pyproject-fmt/compare/0.10.0...0.11.1)

* updating DIRECTORY.md

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-05-01 23:48:56 +02:00
Himanshu Tomar
e966c5cc0f
Added minimum waiting time problem solution using greedy algorithm (#8701)
* Added minimum waiting time problem solution using greedy algorithm

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* ruff --fix

* Add type hints

* Added two more doc test

* Removed unnecessary comments

* updated type hints

* Updated the code as per the code review

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2023-05-01 12:23:03 +02:00
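The greedy argument: serving the shortest queries first minimises total waiting, since a query of duration d is waited through by every query still behind it. A sketch mirroring the common formulation:

```python
def minimum_waiting_time(queries: list[int]) -> int:
    """Total waiting time when queries are served shortest-first: the
    i-th shortest duration is waited through by the n - i - 1 behind it."""
    n = len(queries)
    if n in (0, 1):
        return 0
    return sum(q * (n - i - 1) for i, q in enumerate(sorted(queries)))


assert minimum_waiting_time([3, 2, 1, 2, 6]) == 17
```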
Christian Clauss
f6df26bf0f
Fix docstring in present_value.py (#8702)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-05-01 06:29:42 +05:30
Sahil Goel
c4dcc44dd4
Added an algorithm to calculate the present value of cash flows (#8700)
* Added an algorithm to calculate the present value of cash flows

* added doctest and reference

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Resolving deprecation issues with typing module

* Fixing argument type checks and adding doctest case

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fixing failing doctest case by requiring less precision due to floating point imprecision

* Updating return type

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Added test cases for more coverage

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Make improvements based on Rohan's suggestions

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update financial/present_value.py

Committed first suggestion

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update financial/present_value.py

Committed second suggestion

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update financial/present_value.py

Committed third suggestion

Co-authored-by: Christian Clauss <cclauss@me.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-04-30 19:33:22 +02:00
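Present value discounts each cash flow by (1 + r)^t. Judging by the doctest discussion above, a reasonable reading is that the first cash flow occurs today (t = 0); this sketch assumes exactly that:

```python
def present_value(discount_rate: float, cash_flows: list[float]) -> float:
    """PV = sum of CF_t / (1 + r)^t, t starting at 0, rounded to cents."""
    if discount_rate < 0:
        raise ValueError("Discount rate cannot be negative")
    if not cash_flows:
        raise ValueError("Cash flows list cannot be empty")
    pv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
    return round(pv, 2)


assert present_value(0.1, [100, 100]) == 190.91  # 100 today + 100/1.1
```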
Christian Clauss
4c1f876567
Solving the Top k most frequent words problem using a max-heap (#8685)
* Solving the `Top k most frequent words` problem using a max-heap

* Mentioning Python standard library solution in `Top k most frequent words` docstring

* ruff --fix .

* updating DIRECTORY.md

---------

Co-authored-by: Amos Paribocci <aparibocci@gmail.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-04-27 23:02:07 +05:30
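Python's heapq is a min-heap, so the max-heap approach negates the counts; the standard-library alternative the docstring mentions is Counter.most_common. A minimal sketch of both ideas:

```python
import heapq
from collections import Counter


def top_k_frequent_words(words: list[str], k_value: int) -> list[str]:
    """Pop the k most frequent words off a max-heap of (-count, word)."""
    heap = [(-count, word) for word, count in Counter(words).items()]
    heapq.heapify(heap)  # heapq is a min-heap, so counts are negated
    return [heapq.heappop(heap)[1] for _ in range(min(k_value, len(heap)))]


words = ["a", "b", "a", "c", "a", "b"]
assert top_k_frequent_words(words, 2) == ["a", "b"]
# standard-library alternative:
assert [w for w, _ in Counter(words).most_common(2)] == ["a", "b"]
```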
Dipankar Mitra
c1b3ea5355
The tanh activation function is added (#8689)
* tanh function has been added

* tanh function has been added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tanh function is added

* tanh function is added

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tanh function added

* tanh function added

* tanh function is added

* Apply suggestions from code review

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-04-25 18:06:14 +02:00
pre-commit-ci[bot]
a650426350
[pre-commit.ci] pre-commit autoupdate (#8691)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.261 → v0.0.262](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.261...v0.0.262)
- [github.com/tox-dev/pyproject-fmt: 0.9.2 → 0.10.0](https://github.com/tox-dev/pyproject-fmt/compare/0.9.2...0.10.0)

* updating DIRECTORY.md

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2023-04-25 06:05:45 +02:00
Rohan Anand
bf30b18192
Update linear_discriminant_analysis.py and rsa_cipher.py (#8680)
* Update rsa_cipher.py by replacing %s with {}

* Update rsa_cipher.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* Update linear_discriminant_analysis.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* Update linear_discriminant_analysis.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update linear_discriminant_analysis.py

* Update machine_learning/linear_discriminant_analysis.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Update linear_discriminant_analysis.py

* updated

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2023-04-24 07:28:30 +02:00
201 changed files with 6663 additions and 1638 deletions

8
.devcontainer/Dockerfile Normal file
View File

@ -0,0 +1,8 @@
# https://github.com/microsoft/vscode-dev-containers/blob/main/containers/python-3/README.md
ARG VARIANT=3.11-bookworm
FROM mcr.microsoft.com/vscode/devcontainers/python:${VARIANT}
COPY requirements.txt /tmp/pip-tmp/
RUN python3 -m pip install --upgrade pip \
&& python3 -m pip install --no-cache-dir -r /tmp/pip-tmp/requirements.txt \
&& pipx install pre-commit ruff \
&& pre-commit install

View File

@ -0,0 +1,42 @@
{
"name": "Python 3",
"build": {
"dockerfile": "Dockerfile",
"context": "..",
"args": {
// Update 'VARIANT' to pick a Python version: 3, 3.10, 3.9, 3.8, 3.7, 3.6
// Append -bullseye or -buster to pin to an OS version.
// Use -bullseye variants on local on arm64/Apple Silicon.
"VARIANT": "3.11-bookworm",
}
},
// Configure tool-specific properties.
"customizations": {
// Configure properties specific to VS Code.
"vscode": {
// Set *default* container specific settings.json values on container create.
"settings": {
"python.defaultInterpreterPath": "/usr/local/bin/python",
"python.linting.enabled": true,
"python.formatting.blackPath": "/usr/local/py-utils/bin/black",
"python.linting.mypyPath": "/usr/local/py-utils/bin/mypy"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance"
]
}
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "pip3 install --user -r requirements.txt",
// Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode"
}

View File

@ -17,4 +17,4 @@
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".

View File

@ -22,11 +22,9 @@ jobs:
python -m pip install --upgrade pip setuptools six wheel
python -m pip install pytest-cov -r requirements.txt
- name: Run tests
# See: #6591 for re-enabling tests on Python v3.11
# TODO: #8818 Re-enable quantum tests
run: pytest
--ignore=computer_vision/cnn_classification.py
--ignore=machine_learning/lstm/lstm_prediction.py
--ignore=quantum/
--ignore=quantum/q_fourier_transform.py
--ignore=project_euler/
--ignore=scripts/validate_solutions.py
--cov-report=term-missing:skip-covered

View File

@ -15,25 +15,25 @@ repos:
hooks:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.261
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.0.282
hooks:
- id: ruff
- repo: https://github.com/psf/black
rev: 23.3.0
rev: 23.7.0
hooks:
- id: black
- repo: https://github.com/codespell-project/codespell
rev: v2.2.4
rev: v2.2.5
hooks:
- id: codespell
additional_dependencies:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
rev: "0.9.2"
rev: "0.13.0"
hooks:
- id: pyproject-fmt
@ -46,12 +46,12 @@ repos:
pass_filenames: false
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.12.2
rev: v0.13
hooks:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.2.0
rev: v1.4.1
hooks:
- id: mypy
args:

5
.vscode/settings.json vendored Normal file
View File

@ -0,0 +1,5 @@
{
"githubPullRequests.ignoredPullRequestBranches": [
"master"
]
}

View File

@ -25,7 +25,14 @@ We appreciate any contribution, from fixing a grammar mistake in a comment to im
Your contribution will be tested by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) to save time and mental energy. After you have submitted your pull request, you should see the GitHub Actions tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button and try to read through the GitHub Actions output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
Please help us keep our issue list small by adding fixes: #{$ISSUE_NO} to the commit message of pull requests that resolve open issues. GitHub will use this tag to auto-close the issue when the PR is merged.
If you are interested in resolving an [open issue](https://github.com/TheAlgorithms/Python/issues), simply make a pull request with your proposed fix. __We do not assign issues in this repo__ so please do not ask for permission to work on an issue.
Please help us keep our issue list small by adding `Fixes #{$ISSUE_NUMBER}` to the description of pull requests that resolve open issues.
For example, if your pull request fixes issue #10, then please add the following to its description:
```
Fixes #10
```
GitHub will use this tag to [auto-close the issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) if and when the PR is merged.
#### What is an Algorithm?

View File

@ -29,6 +29,7 @@
* [Minmax](backtracking/minmax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
* [Power Sum](backtracking/power_sum.py)
* [Rat In Maze](backtracking/rat_in_maze.py)
* [Sudoku](backtracking/sudoku.py)
* [Sum Of Subsets](backtracking/sum_of_subsets.py)
@ -73,6 +74,7 @@
* [Game Of Life](cellular_automata/game_of_life.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
* [Wa Tor](cellular_automata/wa_tor.py)
## Ciphers
* [A1Z26](ciphers/a1z26.py)
@ -146,6 +148,7 @@
* [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
* [Energy Conversions](conversions/energy_conversions.py)
* [Excel Title To Column](conversions/excel_title_to_column.py)
* [Hex To Bin](conversions/hex_to_bin.py)
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
@ -166,6 +169,7 @@
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
* [Product Sum](data_structures/arrays/product_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
@ -233,8 +237,8 @@
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
* [Queue By List](data_structures/queue/queue_by_list.py)
* [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py)
* [Queue On List](data_structures/queue/queue_on_list.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
@ -290,7 +294,7 @@
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
* [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py)
* [Max Subarray](divide_and_conquer/max_subarray.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
@ -321,24 +325,27 @@
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Product Subarray](dynamic_programming/max_product_subarray.py)
* [Max Sub Array](dynamic_programming/max_sub_array.py)
* [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
* [Max Subarray Sum](dynamic_programming/max_subarray_sum.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
* [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Regex Match](dynamic_programming/regex_match.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Tribonacci](dynamic_programming/tribonacci.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
## Electronics
* [Apparent Power](electronics/apparent_power.py)
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
@ -348,6 +355,7 @@
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Real And Reactive Power](electronics/real_and_reactive_power.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
@ -360,6 +368,7 @@
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
* [Present Value](financial/present_value.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
@ -406,6 +415,7 @@
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dijkstra Binary Grid](graphs/dijkstra_binary_grid.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
@ -415,8 +425,9 @@
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
* [Graph Adjacency List](graphs/graph_adjacency_list.py)
* [Graph Adjacency Matrix](graphs/graph_adjacency_matrix.py)
* [Graph List](graphs/graph_list.py)
* [Graph Matrix](graphs/graph_matrix.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
@ -445,6 +456,7 @@
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
## Hashes
@ -474,15 +486,20 @@
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
* [Rank Of Matrix](linear_algebra/src/rank_of_matrix.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
## Linear Programming
* [Simplex](linear_programming/simplex.py)
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
* [Dimensionality Reduction](machine_learning/dimensionality_reduction.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
@ -497,7 +514,7 @@
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polymonial Regression](machine_learning/polymonial_regression.py)
* [Polynomial Regression](machine_learning/polynomial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
@ -508,7 +525,6 @@
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
* [3N Plus 1](maths/3n_plus_1.py)
* [Abs](maths/abs.py)
* [Add](maths/add.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
@ -544,6 +560,7 @@
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
* [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euclidean Gcd](maths/euclidean_gcd.py)
@ -556,9 +573,7 @@
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
* [Find Max Recursion](maths/find_max_recursion.py)
* [Find Min](maths/find_min.py)
* [Find Min Recursion](maths/find_min_recursion.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
@ -571,16 +586,16 @@
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Interquartile Range](maths/interquartile_range.py)
* [Is Int Palindrome](maths/is_int_palindrome.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
* [Kadanes](maths/kadanes.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
* [Largest Subarray Sum](maths/largest_subarray_sum.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
@ -600,10 +615,12 @@
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
* [Odd Sieve](maths/odd_sieve.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
* [Pi Generator](maths/pi_generator.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
@ -625,6 +642,7 @@
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
* [Remove Digit](maths/remove_digit.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
@ -640,6 +658,7 @@
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
@ -650,6 +669,7 @@
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Tanh](maths/tanh.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
@ -664,6 +684,7 @@
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
* [Count Negative Numbers In Sorted Matrix](matrix/count_negative_numbers_in_sorted_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
@ -686,9 +707,10 @@
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* Activation Functions
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Input Data](neural_network/input_data.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
@ -702,13 +724,16 @@
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
* [Guess The Number Search](other/guess_the_number_search.py)
* [H Index](other/h_index.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subarray](other/maximum_subarray.py)
* [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
* [Number Container System](other/number_container_system.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
@ -716,7 +741,9 @@
* [Tower Of Hanoi](other/tower_of_hanoi.py)
## Physics
* [Altitude Pressure](physics/altitude_pressure.py)
* [Archimedes Principle](physics/archimedes_principle.py)
* [Basic Orbital Capture](physics/basic_orbital_capture.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Grahams Law](physics/grahams_law.py)
@ -732,6 +759,7 @@
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
* [Speed Of Sound](physics/speed_of_sound.py)
## Project Euler
* Problem 001
@ -1037,7 +1065,6 @@
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
* [Quantum Random](quantum/quantum_random.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
@ -1071,6 +1098,7 @@
## Sorts
* [Bead Sort](sorts/bead_sort.py)
* [Binary Insertion Sort](sorts/binary_insertion_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
@ -1139,10 +1167,10 @@
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
* [Is Palindrome](strings/is_palindrome.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
* [Is Valid Email Address](strings/is_valid_email_address.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
@ -1161,7 +1189,9 @@
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [String Switch Case](strings/string_switch_case.py)
* [Text Justification](strings/text_justification.py)
* [Top K Frequent Words](strings/top_k_frequent_words.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
@ -1181,7 +1211,6 @@
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
* [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)

View File

@ -13,7 +13,7 @@
<img src="https://img.shields.io/static/v1.svg?label=Contributions&message=Welcome&color=0059b3&style=flat-square" height="20" alt="Contributions Welcome">
</a>
<img src="https://img.shields.io/github/repo-size/TheAlgorithms/Python.svg?label=Repo%20size&style=flat-square" height="20">
<a href="https://discord.gg/c7MnfGFGa6">
<a href="https://the-algorithms.com/discord">
<img src="https://img.shields.io/discord/808045925556682782.svg?logo=discord&colorB=7289DA&style=flat-square" height="20" alt="Discord chat">
</a>
<a href="https://gitter.im/TheAlgorithms/community">
@ -42,7 +42,7 @@ Read through our [Contribution Guidelines](CONTRIBUTING.md) before you contribut
## Community Channels
We are on [Discord](https://discord.gg/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms/community)! Community channels are a great way for you to ask questions and get help. Please join us!
We are on [Discord](https://the-algorithms.com/discord) and [Gitter](https://gitter.im/TheAlgorithms/community)! Community channels are a great way for you to ask questions and get help. Please join us!
## List of Algorithms

View File

@ -49,7 +49,9 @@ def jacobi_iteration_method(
>>> constant = np.array([[2], [-6]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
>>> jacobi_iteration_method(
... coefficient, constant, init_val, iterations
... ) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Coefficient and constant matrices dimensions must be nxn and nx1 but
@ -59,7 +61,9 @@ def jacobi_iteration_method(
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
>>> jacobi_iteration_method(
... coefficient, constant, init_val, iterations
... ) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Number of initial values must be equal to number of rows in coefficient
@ -79,24 +83,26 @@ def jacobi_iteration_method(
rows2, cols2 = constant_matrix.shape
if rows1 != cols1:
raise ValueError(
f"Coefficient matrix dimensions must be nxn but received {rows1}x{cols1}"
)
msg = f"Coefficient matrix dimensions must be nxn but received {rows1}x{cols1}"
raise ValueError(msg)
if cols2 != 1:
raise ValueError(f"Constant matrix must be nx1 but received {rows2}x{cols2}")
msg = f"Constant matrix must be nx1 but received {rows2}x{cols2}"
raise ValueError(msg)
if rows1 != rows2:
raise ValueError(
f"""Coefficient and constant matrices dimensions must be nxn and nx1 but
received {rows1}x{cols1} and {rows2}x{cols2}"""
msg = (
"Coefficient and constant matrices dimensions must be nxn and nx1 but "
f"received {rows1}x{cols1} and {rows2}x{cols2}"
)
raise ValueError(msg)
if len(init_val) != rows1:
raise ValueError(
f"""Number of initial values must be equal to number of rows in coefficient
matrix but received {len(init_val)} and {rows1}"""
msg = (
"Number of initial values must be equal to number of rows in coefficient "
f"matrix but received {len(init_val)} and {rows1}"
)
raise ValueError(msg)
if iterations <= 0:
raise ValueError("Iterations must be at least 1")

View File

@ -80,10 +80,11 @@ def lower_upper_decomposition(table: np.ndarray) -> tuple[np.ndarray, np.ndarray
# Ensure that table is a square array
rows, columns = np.shape(table)
if rows != columns:
raise ValueError(
f"'table' has to be of square shaped array but got a "
msg = (
"'table' has to be of square shaped array but got a "
f"{rows}x{columns} array:\n{table}"
)
raise ValueError(msg)
lower = np.zeros((rows, columns))
upper = np.zeros((rows, columns))
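Similarly, the change above is only the error-message refactor in lower_upper_decomposition; the decomposition itself is the Doolittle scheme A = L U with a unit lower-triangular L. A minimal no-pivoting sketch of that idea (illustrative, not the repo's code):

```python
import numpy as np


def lu_doolittle(a: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Doolittle LU without pivoting: a == lower @ upper."""
    n = a.shape[0]
    lower, upper = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):  # row i of upper
            upper[i, j] = a[i, j] - lower[i, :i] @ upper[:i, j]
        for j in range(i + 1, n):  # column i of lower
            lower[j, i] = (a[j, i] - lower[j, :i] @ upper[:i, i]) / upper[i, i]
    return lower, upper


a = np.array([[2.0, 3.0], [4.0, 7.0]])
lower, upper = lu_doolittle(a)
print(np.allclose(lower @ upper, a))  # True
```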

View File

@ -25,9 +25,11 @@ def newton_raphson(
"""
x = a
while True:
x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
x = Decimal(x) - (
Decimal(eval(func)) / Decimal(eval(str(diff(func)))) # noqa: S307
)
# This number dictates the accuracy of the answer
if abs(eval(func)) < precision:
if abs(eval(func)) < precision: # noqa: S307
return float(x)
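The noqa: S307 markers above silence ruff's eval warning rather than remove eval. For comparison, the same iteration x <- x - f(x)/f'(x) works with plain callables and needs no eval at all (a sketch, not a drop-in replacement for the sympy-based version):

```python
from collections.abc import Callable


def newton_raphson_plain(
    f: Callable[[float], float],
    f_prime: Callable[[float], float],
    x: float,
    precision: float = 1e-10,
) -> float:
    """Iterate x <- x - f(x) / f'(x) until |f(x)| < precision."""
    while abs(f(x)) >= precision:
        x -= f(x) / f_prime(x)
    return x


# Root of x^2 - 2 starting near 1.5 converges to sqrt(2) ~ 1.4142135623...
print(newton_raphson_plain(lambda x: x * x - 2, lambda x: 2 * x, 1.5))
```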

View File

@ -50,16 +50,18 @@ class IIRFilter:
a_coeffs = [1.0, *a_coeffs]
if len(a_coeffs) != self.order + 1:
raise ValueError(
f"Expected a_coeffs to have {self.order + 1} elements for {self.order}"
f"-order filter, got {len(a_coeffs)}"
msg = (
f"Expected a_coeffs to have {self.order + 1} elements "
f"for {self.order}-order filter, got {len(a_coeffs)}"
)
raise ValueError(msg)
if len(b_coeffs) != self.order + 1:
raise ValueError(
f"Expected b_coeffs to have {self.order + 1} elements for {self.order}"
f"-order filter, got {len(a_coeffs)}"
msg = (
f"Expected b_coeffs to have {self.order + 1} elements "
f"for {self.order}-order filter, got {len(a_coeffs)}"
)
raise ValueError(msg)
self.a_coeffs = a_coeffs
self.b_coeffs = b_coeffs
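For context on why a_coeffs and b_coeffs must each have order + 1 elements: an order-N direct-form IIR filter computes y[n] = b0 x[n] + ... + bN x[n-N] - a1 y[n-1] - ... - aN y[n-N], with a0 normalised to 1. A standalone first-order sketch of that difference equation (not the repo's IIRFilter class):

```python
def iir_first_order(samples: list[float], a1: float, b0: float, b1: float) -> list[float]:
    """Order-1 direct form: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1], with a0 = 1."""
    x_prev = y_prev = 0.0
    output = []
    for x in samples:
        y = b0 * x + b1 * x_prev - a1 * y_prev
        output.append(y)
        x_prev, y_prev = x, y
    return output


# A one-pole low-pass, y[n] = 0.1*x[n] + 0.9*y[n-1]: the step response eases up to 1
print(iir_first_order([1.0] * 5, a1=-0.9, b0=0.1, b1=0.0))
```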

View File

@ -91,7 +91,8 @@ def open_knight_tour(n: int) -> list[list[int]]:
return board
board[i][j] = 0
raise ValueError(f"Open Kight Tour cannot be performed on a board of size {n}")
msg = f"Open Kight Tour cannot be performed on a board of size {n}"
raise ValueError(msg)
if __name__ == "__main__":

93
backtracking/power_sum.py Normal file
View File

@ -0,0 +1,93 @@
"""
Problem source: https://www.hackerrank.com/challenges/the-power-sum/problem
Find the number of ways that a given integer X can be expressed as the sum
of the Nth powers of unique natural numbers. For example, if X=13 and N=2,
we have to find all combinations of unique squares adding up to 13.
The only solution is 2^2+3^2. Constraints: 1<=X<=1000, 2<=N<=10.
"""
from math import pow
def backtrack(
needed_sum: int,
power: int,
current_number: int,
current_sum: int,
solutions_count: int,
) -> tuple[int, int]:
"""
>>> backtrack(13, 2, 1, 0, 0)
(0, 1)
>>> backtrack(100, 2, 1, 0, 0)
(0, 3)
>>> backtrack(100, 3, 1, 0, 0)
(0, 1)
>>> backtrack(800, 2, 1, 0, 0)
(0, 561)
>>> backtrack(1000, 10, 1, 0, 0)
(0, 0)
>>> backtrack(400, 2, 1, 0, 0)
(0, 55)
>>> backtrack(50, 1, 1, 0, 0)
(0, 3658)
"""
if current_sum == needed_sum:
# If the sum of the powers is equal to needed_sum, then we have a solution.
solutions_count += 1
return current_sum, solutions_count
i_to_n = int(pow(current_number, power))
if current_sum + i_to_n <= needed_sum:
# If the sum of the powers is less than needed_sum, then continue adding powers.
current_sum += i_to_n
current_sum, solutions_count = backtrack(
needed_sum, power, current_number + 1, current_sum, solutions_count
)
current_sum -= i_to_n
if i_to_n < needed_sum:
# If the power of current_number is less than needed_sum, then try the next number.
current_sum, solutions_count = backtrack(
needed_sum, power, current_number + 1, current_sum, solutions_count
)
return current_sum, solutions_count
def solve(needed_sum: int, power: int) -> int:
"""
>>> solve(13, 2)
1
>>> solve(100, 2)
3
>>> solve(100, 3)
1
>>> solve(800, 2)
561
>>> solve(1000, 10)
0
>>> solve(400, 2)
55
>>> solve(50, 1)
Traceback (most recent call last):
...
ValueError: Invalid input
needed_sum must be between 1 and 1000, power between 2 and 10.
>>> solve(-10, 5)
Traceback (most recent call last):
...
ValueError: Invalid input
needed_sum must be between 1 and 1000, power between 2 and 10.
"""
if not (1 <= needed_sum <= 1000 and 2 <= power <= 10):
raise ValueError(
"Invalid input\n"
"needed_sum must be between 1 and 1000, power between 2 and 10."
)
return backtrack(needed_sum, power, 1, 0, 0)[1] # Return the solutions_count
if __name__ == "__main__":
import doctest
doctest.testmod()
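As an independent sanity check of solve above, the same counts can be reproduced by brute force over subsets of candidate bases (exponential, so small inputs only; a cross-check sketch, not part of the file):

```python
from itertools import combinations


def power_sum_brute_force(needed_sum: int, power: int) -> int:
    """Count subsets of natural numbers whose power-th powers sum to needed_sum."""
    bases = [n for n in range(1, needed_sum + 1) if n**power <= needed_sum]
    return sum(
        1
        for r in range(1, len(bases) + 1)
        for combo in combinations(bases, r)
        if sum(base**power for base in combo) == needed_sum
    )


print(power_sum_brute_force(13, 2))   # 1, matching solve(13, 2)
print(power_sum_brute_force(100, 2))  # 3, matching solve(100, 2)
```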

View File

@ -14,10 +14,11 @@ def get_reverse_bit_string(number: int) -> str:
TypeError: operation cannot be conducted on an object of type str
"""
if not isinstance(number, int):
raise TypeError(
msg = (
"operation can not be conducted on a object of type "
f"{type(number).__name__}"
)
raise TypeError(msg)
bit_string = ""
for _ in range(0, 32):
bit_string += str(number % 2)

View File

@ -43,6 +43,8 @@ def test_and_gate() -> None:
if __name__ == "__main__":
test_and_gate()
print(and_gate(1, 0))
print(and_gate(0, 0))
print(and_gate(0, 1))
print(and_gate(1, 1))

View File

@ -10,7 +10,7 @@ Python:
- 3.5
Usage:
- $python3 game_o_life <canvas_size:int>
- $python3 game_of_life <canvas_size:int>
Game-Of-Life Rules:
@ -34,7 +34,7 @@ import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
usage_doc = "Usage of script: script_nama <size_of_canvas:int>"
usage_doc = "Usage of script: script_name <size_of_canvas:int>"
choice = [0] * 100 + [1] * 10
random.shuffle(choice)
@ -52,7 +52,8 @@ def seed(canvas: list[list[bool]]) -> None:
def run(canvas: list[list[bool]]) -> list[list[bool]]:
"""This function runs the rules of game through all points, and changes their
"""
This function runs the rules of the game through all points, and changes their
status accordingly (in the same canvas).
@Args:
--
@ -60,7 +61,7 @@ def run(canvas: list[list[bool]]) -> list[list[bool]]:
@returns:
--
None
canvas of population after one step
"""
current_canvas = np.array(canvas)
next_gen_canvas = np.array(create_canvas(current_canvas.shape[0]))
@ -70,10 +71,7 @@ def run(canvas: list[list[bool]]) -> list[list[bool]]:
pt, current_canvas[r - 1 : r + 2, c - 1 : c + 2]
)
current_canvas = next_gen_canvas
del next_gen_canvas # cleaning memory as we move on.
return_canvas: list[list[bool]] = current_canvas.tolist()
return return_canvas
return next_gen_canvas.tolist()
def __judge_point(pt: bool, neighbours: list[list[bool]]) -> bool:
@ -98,7 +96,7 @@ def __judge_point(pt: bool, neighbours: list[list[bool]]) -> bool:
if pt:
if alive < 2:
state = False
elif alive == 2 or alive == 3:
elif alive in {2, 3}:
state = True
elif alive > 3:
state = False
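The branch simplified above (alive in {2, 3}) is part of Conway's rule set, which collapses to a single boolean expression; an equivalent sketch for reference:

```python
def judge_point(alive_now: bool, alive_neighbours: int) -> bool:
    """Conway's rules: a cell lives on with 2 or 3 neighbours; birth needs exactly 3."""
    return alive_neighbours == 3 or (alive_now and alive_neighbours == 2)


assert judge_point(True, 2) and judge_point(True, 3) and judge_point(False, 3)
assert not judge_point(True, 4) and not judge_point(False, 2)
```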

550
cellular_automata/wa_tor.py Normal file
View File

@ -0,0 +1,550 @@
"""
Wa-Tor algorithm (1984)
@ https://en.wikipedia.org/wiki/Wa-Tor
@ https://beltoforion.de/en/wator/
@ https://beltoforion.de/en/wator/images/wator_medium.webm
This solution aims to completely remove any systematic approach
to traversing the Wa-Tor planet, and to utilise fully random methods.
The constants are a working set that allows the Wa-Tor planet
to result in one of the three possible results.
"""
from collections.abc import Callable
from random import randint, shuffle
from time import sleep
from typing import Literal
WIDTH = 50 # Width of the Wa-Tor planet
HEIGHT = 50 # Height of the Wa-Tor planet
PREY_INITIAL_COUNT = 30 # The initial number of prey entities
PREY_REPRODUCTION_TIME = 5 # The chronons before reproducing
PREDATOR_INITIAL_COUNT = 50 # The initial number of predator entities
# The initial energy value of predator entities
PREDATOR_INITIAL_ENERGY_VALUE = 15
# The energy value provided when consuming prey
PREDATOR_FOOD_VALUE = 5
PREDATOR_REPRODUCTION_TIME = 20 # The chronons before reproducing
MAX_ENTITIES = 500 # The max number of organisms on the board
# The number of entities to delete from the unbalanced side
DELETE_UNBALANCED_ENTITIES = 50
class Entity:
"""
Represents an entity (either prey or predator).
>>> e = Entity(True, coords=(0, 0))
>>> e.prey
True
>>> e.coords
(0, 0)
>>> e.alive
True
"""
def __init__(self, prey: bool, coords: tuple[int, int]) -> None:
self.prey = prey
# The (row, col) pos of the entity
self.coords = coords
self.remaining_reproduction_time = (
PREY_REPRODUCTION_TIME if prey else PREDATOR_REPRODUCTION_TIME
)
self.energy_value = None if prey is True else PREDATOR_INITIAL_ENERGY_VALUE
self.alive = True
def reset_reproduction_time(self) -> None:
"""
>>> e = Entity(True, coords=(0, 0))
>>> e.reset_reproduction_time()
>>> e.remaining_reproduction_time == PREY_REPRODUCTION_TIME
True
>>> e = Entity(False, coords=(0, 0))
>>> e.reset_reproduction_time()
>>> e.remaining_reproduction_time == PREDATOR_REPRODUCTION_TIME
True
"""
self.remaining_reproduction_time = (
PREY_REPRODUCTION_TIME if self.prey is True else PREDATOR_REPRODUCTION_TIME
)
def __repr__(self) -> str:
"""
>>> Entity(prey=True, coords=(1, 1))
Entity(prey=True, coords=(1, 1), remaining_reproduction_time=5)
>>> Entity(prey=False, coords=(2, 1)) # doctest: +NORMALIZE_WHITESPACE
Entity(prey=False, coords=(2, 1),
remaining_reproduction_time=20, energy_value=15)
"""
repr_ = (
f"Entity(prey={self.prey}, coords={self.coords}, "
f"remaining_reproduction_time={self.remaining_reproduction_time}"
)
if self.energy_value is not None:
repr_ += f", energy_value={self.energy_value}"
return f"{repr_})"
class WaTor:
"""
Represents the main Wa-Tor algorithm.
:attr time_passed: A function that is called every time
time passes (a chronon) in order to visually display
the new Wa-Tor planet. The time_passed function can block
using time.sleep to slow the algorithm progression.
>>> wt = WaTor(10, 15)
>>> wt.width
10
>>> wt.height
15
>>> len(wt.planet)
15
>>> len(wt.planet[0])
10
>>> len(wt.get_entities()) == PREDATOR_INITIAL_COUNT + PREY_INITIAL_COUNT
True
"""
time_passed: Callable[["WaTor", int], None] | None
def __init__(self, width: int, height: int) -> None:
self.width = width
self.height = height
self.time_passed = None
self.planet: list[list[Entity | None]] = [[None] * width for _ in range(height)]
# Populate planet with predators and prey randomly
for _ in range(PREY_INITIAL_COUNT):
self.add_entity(prey=True)
for _ in range(PREDATOR_INITIAL_COUNT):
self.add_entity(prey=False)
self.set_planet(self.planet)
def set_planet(self, planet: list[list[Entity | None]]) -> None:
"""
Ease of access for testing
>>> wt = WaTor(WIDTH, HEIGHT)
>>> planet = [
... [None, None, None],
... [None, Entity(True, coords=(1, 1)), None]
... ]
>>> wt.set_planet(planet)
>>> wt.planet == planet
True
>>> wt.width
3
>>> wt.height
2
"""
self.planet = planet
self.width = len(planet[0])
self.height = len(planet)
def add_entity(self, prey: bool) -> None:
"""
Adds an entity, making sure the entity does
not overwrite another entity
>>> wt = WaTor(WIDTH, HEIGHT)
>>> wt.set_planet([[None, None], [None, None]])
>>> wt.add_entity(True)
>>> len(wt.get_entities())
1
>>> wt.add_entity(False)
>>> len(wt.get_entities())
2
"""
while True:
row, col = randint(0, self.height - 1), randint(0, self.width - 1)
if self.planet[row][col] is None:
self.planet[row][col] = Entity(prey=prey, coords=(row, col))
return
def get_entities(self) -> list[Entity]:
"""
Returns a list of all the entities within the planet.
>>> wt = WaTor(WIDTH, HEIGHT)
>>> len(wt.get_entities()) == PREDATOR_INITIAL_COUNT + PREY_INITIAL_COUNT
True
"""
return [entity for column in self.planet for entity in column if entity]
def balance_predators_and_prey(self) -> None:
"""
Balances predators and prey so that prey
cannot dominate the predators, blocking up
space for them to reproduce.
>>> wt = WaTor(WIDTH, HEIGHT)
>>> for i in range(2000):
... row, col = i // HEIGHT, i % WIDTH
... wt.planet[row][col] = Entity(True, coords=(row, col))
>>> entities = len(wt.get_entities())
>>> wt.balance_predators_and_prey()
>>> len(wt.get_entities()) == entities
False
"""
entities = self.get_entities()
shuffle(entities)
if len(entities) >= MAX_ENTITIES - MAX_ENTITIES / 10:
prey = [entity for entity in entities if entity.prey]
predators = [entity for entity in entities if not entity.prey]
prey_count, predator_count = len(prey), len(predators)
entities_to_purge = (
prey[:DELETE_UNBALANCED_ENTITIES]
if prey_count > predator_count
else predators[:DELETE_UNBALANCED_ENTITIES]
)
for entity in entities_to_purge:
self.planet[entity.coords[0]][entity.coords[1]] = None
def get_surrounding_prey(self, entity: Entity) -> list[Entity]:
"""
Returns all the prey entities around a predator entity (N, S, E, W).
Subtly different from the unoccupied-square search in move_and_reproduce.
>>> wt = WaTor(WIDTH, HEIGHT)
>>> wt.set_planet([
... [None, Entity(True, (0, 1)), None],
... [None, Entity(False, (1, 1)), None],
... [None, Entity(True, (2, 1)), None]])
>>> wt.get_surrounding_prey(
... Entity(False, (1, 1))) # doctest: +NORMALIZE_WHITESPACE
[Entity(prey=True, coords=(0, 1), remaining_reproduction_time=5),
Entity(prey=True, coords=(2, 1), remaining_reproduction_time=5)]
>>> wt.set_planet([[Entity(False, (0, 0))]])
>>> wt.get_surrounding_prey(Entity(False, (0, 0)))
[]
>>> wt.set_planet([
... [Entity(True, (0, 0)), Entity(False, (1, 0)), Entity(False, (2, 0))],
... [None, Entity(False, (1, 1)), Entity(True, (2, 1))],
... [None, None, None]])
>>> wt.get_surrounding_prey(Entity(False, (1, 0)))
[Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5)]
"""
row, col = entity.coords
adjacent: list[tuple[int, int]] = [
(row - 1, col), # North
(row + 1, col), # South
(row, col - 1), # West
(row, col + 1), # East
]
return [
ent
for r, c in adjacent
if 0 <= r < self.height
and 0 <= c < self.width
and (ent := self.planet[r][c]) is not None
and ent.prey
]
def move_and_reproduce(
self, entity: Entity, direction_orders: list[Literal["N", "E", "S", "W"]]
) -> None:
"""
Attempts to move to an unoccupied neighbouring square
in any of the four directions (North, South, East, West).
If the move was successful and the remaining_reproduction_time is
equal to 0, then a new prey or predator can also be created
in the previous square.
:param direction_orders: Ordered list (like priority queue) depicting
order to attempt to move. Removes any systematic
approach of checking neighbouring squares.
>>> planet = [
... [None, None, None],
... [None, Entity(True, coords=(1, 1)), None],
... [None, None, None]
... ]
>>> wt = WaTor(WIDTH, HEIGHT)
>>> wt.set_planet(planet)
>>> wt.move_and_reproduce(Entity(True, coords=(1, 1)), direction_orders=["N"])
>>> wt.planet # doctest: +NORMALIZE_WHITESPACE
[[None, Entity(prey=True, coords=(0, 1), remaining_reproduction_time=4), None],
[None, None, None],
[None, None, None]]
>>> wt.planet[0][0] = Entity(True, coords=(0, 0))
>>> wt.move_and_reproduce(Entity(True, coords=(0, 1)),
... direction_orders=["N", "W", "E", "S"])
>>> wt.planet # doctest: +NORMALIZE_WHITESPACE
[[Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5), None,
Entity(prey=True, coords=(0, 2), remaining_reproduction_time=4)],
[None, None, None],
[None, None, None]]
>>> wt.planet[0][1] = wt.planet[0][2]
>>> wt.planet[0][2] = None
>>> wt.move_and_reproduce(Entity(True, coords=(0, 1)),
... direction_orders=["N", "W", "S", "E"])
>>> wt.planet # doctest: +NORMALIZE_WHITESPACE
[[Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5), None, None],
[None, Entity(prey=True, coords=(1, 1), remaining_reproduction_time=4), None],
[None, None, None]]
>>> wt = WaTor(WIDTH, HEIGHT)
>>> reproducable_entity = Entity(False, coords=(0, 1))
>>> reproducable_entity.remaining_reproduction_time = 0
>>> wt.planet = [[None, reproducable_entity]]
>>> wt.move_and_reproduce(reproducable_entity,
... direction_orders=["N", "W", "S", "E"])
>>> wt.planet # doctest: +NORMALIZE_WHITESPACE
[[Entity(prey=False, coords=(0, 0),
remaining_reproduction_time=20, energy_value=15),
Entity(prey=False, coords=(0, 1), remaining_reproduction_time=20,
energy_value=15)]]
"""
row, col = coords = entity.coords
adjacent_squares: dict[Literal["N", "E", "S", "W"], tuple[int, int]] = {
"N": (row - 1, col), # North
"S": (row + 1, col), # South
"W": (row, col - 1), # West
"E": (row, col + 1), # East
}
# Order adjacent locations by the given direction priority
adjacent: list[tuple[int, int]] = [adjacent_squares[order] for order in direction_orders]
for r, c in adjacent:
if (
0 <= r < self.height
and 0 <= c < self.width
and self.planet[r][c] is None
):
# Move entity to empty adjacent square
self.planet[r][c] = entity
self.planet[row][col] = None
entity.coords = (r, c)
break
# (2.) See if it is possible to reproduce in the previous square
if coords != entity.coords and entity.remaining_reproduction_time <= 0:
# Check if the number of entities on the planet is less than the max limit
if len(self.get_entities()) < MAX_ENTITIES:
# Reproduce in previous square
self.planet[row][col] = Entity(prey=entity.prey, coords=coords)
entity.reset_reproduction_time()
else:
entity.remaining_reproduction_time -= 1
def perform_prey_actions(
self, entity: Entity, direction_orders: list[Literal["N", "E", "S", "W"]]
) -> None:
"""
Performs the actions for a prey entity
For prey the rules are:
1. At each chronon, a prey moves randomly to one of the adjacent unoccupied
squares. If there are no free squares, no movement takes place.
2. Once a prey has survived a certain number of chronons it may reproduce.
This is done as it moves to a neighbouring square,
leaving behind a new prey in its old position.
Its reproduction time is also reset to zero.
>>> wt = WaTor(WIDTH, HEIGHT)
>>> reproducable_entity = Entity(True, coords=(0, 1))
>>> reproducable_entity.remaining_reproduction_time = 0
>>> wt.planet = [[None, reproducable_entity]]
>>> wt.perform_prey_actions(reproducable_entity,
... direction_orders=["N", "W", "S", "E"])
>>> wt.planet # doctest: +NORMALIZE_WHITESPACE
[[Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5),
Entity(prey=True, coords=(0, 1), remaining_reproduction_time=5)]]
"""
self.move_and_reproduce(entity, direction_orders)
def perform_predator_actions(
self,
entity: Entity,
occupied_by_prey_coords: tuple[int, int] | None,
direction_orders: list[Literal["N", "E", "S", "W"]],
) -> None:
"""
Performs the actions for a predator entity
:param occupied_by_prey_coords: Move to this location if there is prey there
For predators the rules are:
1. At each chronon, a predator moves randomly to an adjacent square occupied
by a prey. If there is none, the predator moves to a random adjacent
unoccupied square. If there are no free squares, no movement takes place.
2. At each chronon, each predator is deprived of a unit of energy.
3. Upon reaching zero energy, a predator dies.
4. If a predator moves to a square occupied by a prey,
it eats the prey and earns a certain amount of energy.
5. Once a predator has survived a certain number of chronons
it may reproduce in exactly the same way as the prey.
>>> wt = WaTor(WIDTH, HEIGHT)
>>> wt.set_planet([[Entity(True, coords=(0, 0)), Entity(False, coords=(0, 1))]])
>>> wt.perform_predator_actions(Entity(False, coords=(0, 1)), (0, 0), [])
>>> wt.planet # doctest: +NORMALIZE_WHITESPACE
[[Entity(prey=False, coords=(0, 0),
remaining_reproduction_time=20, energy_value=19), None]]
"""
assert entity.energy_value is not None # [type checking]
# (3.) If the entity has 0 energy, it will die
if entity.energy_value == 0:
self.planet[entity.coords[0]][entity.coords[1]] = None
return
# (1.) Move to entity if possible
if occupied_by_prey_coords is not None:
# Kill the prey
prey = self.planet[occupied_by_prey_coords[0]][occupied_by_prey_coords[1]]
assert prey is not None
prey.alive = False
# Move onto prey
self.planet[occupied_by_prey_coords[0]][occupied_by_prey_coords[1]] = entity
self.planet[entity.coords[0]][entity.coords[1]] = None
entity.coords = occupied_by_prey_coords
# (4.) Eats the prey and earns energy
entity.energy_value += PREDATOR_FOOD_VALUE
else:
# (5.) If it has survived the certain number of chronons it will also
# reproduce in this function
self.move_and_reproduce(entity, direction_orders)
# (2.) Each chronon, the predator is deprived of a unit of energy
entity.energy_value -= 1
def run(self, *, iteration_count: int) -> None:
"""
Emulate time passing by looping iteration_count times
>>> wt = WaTor(WIDTH, HEIGHT)
>>> wt.run(iteration_count=PREDATOR_INITIAL_ENERGY_VALUE - 1)
>>> len(list(filter(lambda entity: entity.prey is False,
... wt.get_entities()))) >= PREDATOR_INITIAL_COUNT
True
"""
for iter_num in range(iteration_count):
# Generate list of all entities in order to randomly
# pop an entity at a time to simulate true randomness
# This removes the systematic approach of iterating
# through each entity width by height
all_entities = self.get_entities()
for __ in range(len(all_entities)):
entity = all_entities.pop(randint(0, len(all_entities) - 1))
if entity.alive is False:
continue
directions: list[Literal["N", "E", "S", "W"]] = ["N", "E", "S", "W"]
shuffle(directions) # Randomly shuffle directions
if entity.prey:
self.perform_prey_actions(entity, directions)
else:
# Create list of surrounding prey
surrounding_prey = self.get_surrounding_prey(entity)
surrounding_prey_coords = None
if surrounding_prey:
# Again, randomly shuffle directions
shuffle(surrounding_prey)
surrounding_prey_coords = surrounding_prey[0].coords
self.perform_predator_actions(
entity, surrounding_prey_coords, directions
)
# Balance out the predators and prey
self.balance_predators_and_prey()
if self.time_passed is not None:
# Call time_passed function for Wa-Tor planet
# visualisation in a terminal or a graph.
self.time_passed(self, iter_num)
def visualise(wt: WaTor, iter_number: int, *, colour: bool = True) -> None:
"""
Visually displays the Wa-Tor planet using
ANSI escape codes in the terminal to clear and re-print
the Wa-Tor planet at intervals.
Uses ANSI colour codes to colourfully display
the predators and prey.
(0x60f197) Prey = #
(0xffff0f) Predator = x
>>> wt = WaTor(30, 30)
>>> wt.set_planet([
... [Entity(True, coords=(0, 0)), Entity(False, coords=(0, 1)), None],
... [Entity(False, coords=(1, 0)), None, Entity(False, coords=(1, 2))],
... [None, Entity(True, coords=(2, 1)), None]
... ])
>>> visualise(wt, 0, colour=False) # doctest: +NORMALIZE_WHITESPACE
# x .
x . x
. # .
<BLANKLINE>
Iteration: 0 | Prey count: 2 | Predator count: 3 |
"""
if colour:
__import__("os").system("")
print("\x1b[0;0H\x1b[2J\x1b[?25l")
reprint = "\x1b[0;0H" if colour else ""
ansi_colour_end = "\x1b[0m " if colour else " "
planet = wt.planet
output = ""
# Iterate over every entity in the planet
for row in planet:
for entity in row:
if entity is None:
output += " . "
else:
if colour is True:
output += (
"\x1b[38;2;96;241;151m"
if entity.prey
else "\x1b[38;2;255;255;15m"
)
output += f" {'#' if entity.prey else 'x'}{ansi_colour_end}"
output += "\n"
entities = wt.get_entities()
prey_count = sum(entity.prey for entity in entities)
print(
f"{output}\n Iteration: {iter_number} | Prey count: {prey_count} | "
f"Predator count: {len(entities) - prey_count} | {reprint}"
)
# Block the thread briefly so the algorithm's progress can be seen
sleep(0.05)
if __name__ == "__main__":
import doctest
doctest.testmod()
wt = WaTor(WIDTH, HEIGHT)
wt.time_passed = visualise
wt.run(iteration_count=100_000)
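The __main__ block above always attaches the terminal visualiser. Because time_passed is just a Callable[[WaTor, int], None], the same hook can collect population statistics instead of drawing; a headless sketch against the classes defined above:

```python
# Headless alternative to visualise(): record counts per chronon instead of drawing.
history: list[tuple[int, int, int]] = []


def record_counts(wt: WaTor, iter_number: int) -> None:
    entities = wt.get_entities()
    prey_count = sum(entity.prey for entity in entities)
    history.append((iter_number, prey_count, len(entities) - prey_count))


simulation = WaTor(WIDTH, HEIGHT)
simulation.time_passed = record_counts
simulation.run(iteration_count=50)
print(history[-1])  # (49, <prey count>, <predator count>)
```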

View File

@ -34,9 +34,8 @@ def base64_encode(data: bytes) -> bytes:
"""
# Make sure the supplied data is a bytes-like object
if not isinstance(data, bytes):
raise TypeError(
f"a bytes-like object is required, not '{data.__class__.__name__}'"
)
msg = f"a bytes-like object is required, not '{data.__class__.__name__}'"
raise TypeError(msg)
binary_stream = "".join(bin(byte)[2:].zfill(8) for byte in data)
@ -88,10 +87,11 @@ def base64_decode(encoded_data: str) -> bytes:
"""
# Make sure encoded_data is either a string or a bytes-like object
if not isinstance(encoded_data, bytes) and not isinstance(encoded_data, str):
raise TypeError(
"argument should be a bytes-like object or ASCII string, not "
f"'{encoded_data.__class__.__name__}'"
msg = (
"argument should be a bytes-like object or ASCII string, "
f"not '{encoded_data.__class__.__name__}'"
)
raise TypeError(msg)
# In case encoded_data is a bytes-like object, make sure it contains only
# ASCII characters so we convert it to a string object
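A hand-rolled codec like base64_encode/base64_decode above is easy to validate against the standard library's base64 module, which implements the same RFC 4648 alphabet; a quick cross-check sketch:

```python
import base64

data = b"The Algorithms"
encoded = base64.b64encode(data)
print(encoded)                            # b'VGhlIEFsZ29yaXRobXM='
assert base64.b64decode(encoded) == data
# base64_encode(data) above should produce the same bytes for any input,
# and base64_decode should round-trip them.
```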

View File

@ -5,7 +5,7 @@ Author: Mohit Radadiya
from string import ascii_uppercase
dict1 = {char: i for i, char in enumerate(ascii_uppercase)}
dict2 = {i: char for i, char in enumerate(ascii_uppercase)}
dict2 = dict(enumerate(ascii_uppercase))
# This function generates the key in

View File

@ -6,7 +6,8 @@ def gcd(a: int, b: int) -> int:
def find_mod_inverse(a: int, m: int) -> int:
if gcd(a, m) != 1:
raise ValueError(f"mod inverse of {a!r} and {m!r} does not exist")
msg = f"mod inverse of {a!r} and {m!r} does not exist"
raise ValueError(msg)
u1, u2, u3 = 1, 0, a
v1, v2, v3 = 0, 1, m
while v3 != 0:
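Since Python 3.8, the built-in pow accepts a negative exponent together with a modulus and computes the same modular inverse as the extended-Euclid loop in find_mod_inverse above (raising ValueError when gcd(a, m) != 1), which makes a one-line cross-check:

```python
# pow(a, -1, m) is the modular inverse on Python >= 3.8
assert pow(7, -1, 26) == 15            # 7 * 15 == 105 == 4 * 26 + 1
assert (7 * pow(7, -1, 26)) % 26 == 1  # round-trip check
```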

View File

@ -10,13 +10,13 @@ primes = {
5: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
"29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
"EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
"E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
"EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
"C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
"83655D23DCA3AD961C62F356208552BB9ED529077096966D"
"670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@ -25,16 +25,16 @@ primes = {
14: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AACAA68FFFFFFFFFFFFFFFF",
"29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
"EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
"E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
"EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
"C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
"83655D23DCA3AD961C62F356208552BB9ED529077096966D"
"670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
"E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
"DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
"15728E5A8AACAA68FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@ -43,21 +43,21 @@ primes = {
15: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF",
"29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
"EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
"E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
"EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
"C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
"83655D23DCA3AD961C62F356208552BB9ED529077096966D"
"670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
"E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
"DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
"15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
"ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
"ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
"F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
"BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
"43DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@ -66,27 +66,27 @@ primes = {
16: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199"
+ "FFFFFFFFFFFFFFFF",
"29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
"EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
"E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
"EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
"C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
"83655D23DCA3AD961C62F356208552BB9ED529077096966D"
"670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
"E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
"DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
"15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
"ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
"ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
"F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
"BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
"43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
"88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
"2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
"287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
"1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
"93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199"
"FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@ -95,33 +95,33 @@ primes = {
17: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
+ "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
+ "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
+ "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
+ "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
+ "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
+ "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
+ "3995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D"
+ "04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7D"
+ "B3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D226"
+ "1AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFC"
+ "E0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B26"
+ "99C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB"
+ "04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2"
+ "233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127"
+ "D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406"
+ "AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918"
+ "DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B33205151"
+ "2BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03"
+ "F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97F"
+ "BEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58B"
+ "B7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632"
+ "387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E"
+ "6DCC4024FFFFFFFFFFFFFFFF",
"8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
"302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
"A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
"49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
"FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
"670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
"180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
"3995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D"
"04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7D"
"B3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D226"
"1AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
"BBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFC"
"E0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B26"
"99C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB"
"04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2"
"233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127"
"D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
"36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406"
"AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918"
"DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B33205151"
"2BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03"
"F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97F"
"BEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
"CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58B"
"B7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632"
"387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E"
"6DCC4024FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@ -130,48 +130,48 @@ primes = {
18: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BD"
+ "F8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831"
+ "179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1B"
+ "DB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF"
+ "5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6"
+ "D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F3"
+ "23A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE328"
+ "06A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55C"
+ "DA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE"
+ "12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E4"
+ "38777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300"
+ "741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F568"
+ "3423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD9"
+ "22222E04A4037C0713EB57A81A23F0C73473FC646CEA306B"
+ "4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A"
+ "062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A36"
+ "4597E899A0255DC164F31CC50846851DF9AB48195DED7EA1"
+ "B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F92"
+ "4009438B481C6CD7889A002ED5EE382BC9190DA6FC026E47"
+ "9558E4475677E9AA9E3050E2765694DFC81F56E880B96E71"
+ "60C980DD98EDD3DFFFFFFFFFFFFFFFFF",
"29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
"EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
"E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
"EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
"C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
"83655D23DCA3AD961C62F356208552BB9ED529077096966D"
"670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
"E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
"DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
"15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
"ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
"ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
"F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
"BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
"43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
"88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
"2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
"287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
"1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
"93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
"36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BD"
"F8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831"
"179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1B"
"DB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF"
"5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6"
"D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F3"
"23A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
"CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE328"
"06A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55C"
"DA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE"
"12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E4"
"38777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300"
"741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F568"
"3423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD9"
"22222E04A4037C0713EB57A81A23F0C73473FC646CEA306B"
"4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A"
"062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A36"
"4597E899A0255DC164F31CC50846851DF9AB48195DED7EA1"
"B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F92"
"4009438B481C6CD7889A002ED5EE382BC9190DA6FC026E47"
"9558E4475677E9AA9E3050E2765694DFC81F56E880B96E71"
"60C980DD98EDD3DFFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,

View File

@ -87,22 +87,20 @@ def _validator(
# Checks if there are 3 unique rotors
if (unique_rotsel := len(set(rotsel))) < 3:
raise Exception(f"Please use 3 unique rotors (not {unique_rotsel})")
msg = f"Please use 3 unique rotors (not {unique_rotsel})"
raise Exception(msg)
# Checks if rotor positions are valid
rotorpos1, rotorpos2, rotorpos3 = rotpos
if not 0 < rotorpos1 <= len(abc):
raise ValueError(
"First rotor position is not within range of 1..26 (" f"{rotorpos1}"
)
msg = f"First rotor position is not within range of 1..26 ({rotorpos1}"
raise ValueError(msg)
if not 0 < rotorpos2 <= len(abc):
raise ValueError(
"Second rotor position is not within range of 1..26 (" f"{rotorpos2})"
)
msg = f"Second rotor position is not within range of 1..26 ({rotorpos2})"
raise ValueError(msg)
if not 0 < rotorpos3 <= len(abc):
raise ValueError(
"Third rotor position is not within range of 1..26 (" f"{rotorpos3})"
)
msg = f"Third rotor position is not within range of 1..26 ({rotorpos3})"
raise ValueError(msg)
# Validates string and returns dict
pbdict = _plugboard(pb)
@ -130,9 +128,11 @@ def _plugboard(pbstring: str) -> dict[str, str]:
# a) is type string
# b) has even length (so pairs can be made)
if not isinstance(pbstring, str):
raise TypeError(f"Plugboard setting isn't type string ({type(pbstring)})")
msg = f"Plugboard setting isn't type string ({type(pbstring)})"
raise TypeError(msg)
elif len(pbstring) % 2 != 0:
raise Exception(f"Odd number of symbols ({len(pbstring)})")
msg = f"Odd number of symbols ({len(pbstring)})"
raise Exception(msg)
elif pbstring == "":
return {}
@ -142,9 +142,11 @@ def _plugboard(pbstring: str) -> dict[str, str]:
tmppbl = set()
for i in pbstring:
if i not in abc:
raise Exception(f"'{i}' not in list of symbols")
msg = f"'{i}' not in list of symbols"
raise Exception(msg)
elif i in tmppbl:
raise Exception(f"Duplicate symbol ({i})")
msg = f"Duplicate symbol ({i})"
raise Exception(msg)
else:
tmppbl.add(i)
del tmppbl

View File

@ -104,10 +104,11 @@ class HillCipher:
req_l = len(self.key_string)
if greatest_common_divisor(det, len(self.key_string)) != 1:
raise ValueError(
f"determinant modular {req_l} of encryption key({det}) is not co prime "
f"w.r.t {req_l}.\nTry another key."
msg = (
f"determinant modular {req_l} of encryption key({det}) "
f"is not co prime w.r.t {req_l}.\nTry another key."
)
raise ValueError(msg)
def process_text(self, text: str) -> str:
"""

View File

@ -1,7 +1,11 @@
def mixed_keyword(key: str = "college", pt: str = "UNIVERSITY") -> str:
"""
from string import ascii_uppercase
For key:hello
def mixed_keyword(
keyword: str, plaintext: str, verbose: bool = False, alphabet: str = ascii_uppercase
) -> str:
"""
For keyword: hello
H E L O
A B C D
@ -12,57 +16,60 @@ def mixed_keyword(key: str = "college", pt: str = "UNIVERSITY") -> str:
Y Z
and map vertically
>>> mixed_keyword("college", "UNIVERSITY") # doctest: +NORMALIZE_WHITESPACE
>>> mixed_keyword("college", "UNIVERSITY", True) # doctest: +NORMALIZE_WHITESPACE
{'A': 'C', 'B': 'A', 'C': 'I', 'D': 'P', 'E': 'U', 'F': 'Z', 'G': 'O', 'H': 'B',
'I': 'J', 'J': 'Q', 'K': 'V', 'L': 'L', 'M': 'D', 'N': 'K', 'O': 'R', 'P': 'W',
'Q': 'E', 'R': 'F', 'S': 'M', 'T': 'S', 'U': 'X', 'V': 'G', 'W': 'H', 'X': 'N',
'Y': 'T', 'Z': 'Y'}
'XKJGUFMJST'
>>> mixed_keyword("college", "UNIVERSITY", False) # doctest: +NORMALIZE_WHITESPACE
'XKJGUFMJST'
"""
key = key.upper()
pt = pt.upper()
temp = []
for i in key:
if i not in temp:
temp.append(i)
len_temp = len(temp)
# print(temp)
alpha = []
modalpha = []
for j in range(65, 91):
t = chr(j)
alpha.append(t)
if t not in temp:
temp.append(t)
# print(temp)
r = int(26 / 4)
# print(r)
k = 0
for _ in range(r):
s = []
for _ in range(len_temp):
s.append(temp[k])
if k >= 25:
keyword = keyword.upper()
plaintext = plaintext.upper()
alphabet_set = set(alphabet)
# create a list of unique characters in the keyword - their order matters
# it determines how we will map plaintext characters to the ciphertext
unique_chars = []
for char in keyword:
if char in alphabet_set and char not in unique_chars:
unique_chars.append(char)
# the number of those unique characters will determine the number of rows
num_unique_chars_in_keyword = len(unique_chars)
# create a shifted version of the alphabet
shifted_alphabet = unique_chars + [
char for char in alphabet if char not in unique_chars
]
# create a modified alphabet by splitting the shifted alphabet into rows
modified_alphabet = [
shifted_alphabet[k : k + num_unique_chars_in_keyword]
for k in range(0, 26, num_unique_chars_in_keyword)
]
# map the alphabet characters to the modified alphabet characters
# going 'vertically' through the modified alphabet - consider columns first
mapping = {}
letter_index = 0
for column in range(num_unique_chars_in_keyword):
for row in modified_alphabet:
# if current row (the last one) is too short, break out of loop
if len(row) <= column:
break
k += 1
modalpha.append(s)
# print(modalpha)
d = {}
j = 0
k = 0
for j in range(len_temp):
for m in modalpha:
if not len(m) - 1 >= j:
break
d[alpha[k]] = m[j]
if not k < 25:
break
k += 1
print(d)
cypher = ""
for i in pt:
cypher += d[i]
return cypher
# map current letter to letter in modified alphabet
mapping[alphabet[letter_index]] = row[column]
letter_index += 1
if verbose:
print(mapping)
# create the encrypted text by mapping the plaintext to the modified alphabet
return "".join(mapping[char] if char in mapping else char for char in plaintext)
print(mixed_keyword("college", "UNIVERSITY"))
if __name__ == "__main__":
# example use
print(mixed_keyword("college", "UNIVERSITY"))
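
For reference, the table the rewritten function builds for the doctest keyword can be reproduced with this minimal standalone sketch (standard library only; not part of the change itself):

    from string import ascii_uppercase

    unique = ["C", "O", "L", "E", "G"]  # unique letters of "COLLEGE", in order
    shifted = unique + [c for c in ascii_uppercase if c not in unique]
    for row in [shifted[i : i + 5] for i in range(0, 26, 5)]:
        print(" ".join(row))
    # C O L E G
    # A B D F H
    # I J K M N
    # P Q R S T
    # U V W X Y
    # Z

Reading the columns top to bottom gives the doctest mapping: A->C, B->A, C->I, and so on; Z, alone in the last row, only contributes to the first column.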

View File

@ -76,10 +76,11 @@ def encrypt_and_write_to_file(
key_size, n, e = read_key_file(key_filename)
if key_size < block_size * 8:
sys.exit(
"ERROR: Block size is %s bits and key size is %s bits. The RSA cipher "
"ERROR: Block size is {} bits and key size is {} bits. The RSA cipher "
"requires the block size to be equal to or greater than the key size. "
"Either decrease the block size or use different keys."
% (block_size * 8, key_size)
"Either decrease the block size or use different keys.".format(
block_size * 8, key_size
)
)
encrypted_blocks = [str(i) for i in encrypt_message(message, (n, e), block_size)]
@ -101,10 +102,11 @@ def read_from_file_and_decrypt(message_filename: str, key_filename: str) -> str:
if key_size < block_size * 8:
sys.exit(
"ERROR: Block size is %s bits and key size is %s bits. The RSA cipher "
"ERROR: Block size is {} bits and key size is {} bits. The RSA cipher "
"requires the block size to be equal to or greater than the key size. "
"Did you specify the correct key file and encrypted file?"
% (block_size * 8, key_size)
"Did you specify the correct key file and encrypted file?".format(
block_size * 8, key_size
)
)
encrypted_blocks = []

View File

@ -150,7 +150,7 @@ def reverse_bwt(bwt_string: str, idx_original_string: int) -> str:
raise ValueError("The parameter idx_original_string must not be lower than 0.")
if idx_original_string >= len(bwt_string):
raise ValueError(
"The parameter idx_original_string must be lower than" " len(bwt_string)."
"The parameter idx_original_string must be lower than len(bwt_string)."
)
ordered_rotations = [""] * len(bwt_string)

View File

@ -77,15 +77,17 @@ def length_conversion(value: float, from_type: str, to_type: str) -> float:
to_sanitized = UNIT_SYMBOL.get(to_sanitized, to_sanitized)
if from_sanitized not in METRIC_CONVERSION:
raise ValueError(
msg = (
f"Invalid 'from_type' value: {from_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
raise ValueError(msg)
if to_sanitized not in METRIC_CONVERSION:
raise ValueError(
msg = (
f"Invalid 'to_type' value: {to_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
raise ValueError(msg)
from_exponent = METRIC_CONVERSION[from_sanitized]
to_exponent = METRIC_CONVERSION[to_sanitized]
exponent = 1

View File

@ -0,0 +1,114 @@
"""
Conversion of energy units.
Available units: joule, kilojoule, megajoule, gigajoule,\
wattsecond, watthour, kilowatthour, newtonmeter, calorie_nutr,\
kilocalorie_nutr, electronvolt, britishthermalunit_it, footpound
USAGE :
-> Import this file into your project.
-> Use the function energy_conversion() for conversion of energy units.
-> Parameters :
-> from_type : From which type you want to convert
-> to_type : To which type you want to convert
-> value : the value which you want to convert
REFERENCES :
-> Wikipedia reference: https://en.wikipedia.org/wiki/Units_of_energy
-> Wikipedia reference: https://en.wikipedia.org/wiki/Joule
-> Wikipedia reference: https://en.wikipedia.org/wiki/Kilowatt-hour
-> Wikipedia reference: https://en.wikipedia.org/wiki/Newton-metre
-> Wikipedia reference: https://en.wikipedia.org/wiki/Calorie
-> Wikipedia reference: https://en.wikipedia.org/wiki/Electronvolt
-> Wikipedia reference: https://en.wikipedia.org/wiki/British_thermal_unit
-> Wikipedia reference: https://en.wikipedia.org/wiki/Foot-pound_(energy)
-> Unit converter reference: https://www.unitconverters.net/energy-converter.html
"""
ENERGY_CONVERSION: dict[str, float] = {
"joule": 1.0,
"kilojoule": 1_000,
"megajoule": 1_000_000,
"gigajoule": 1_000_000_000,
"wattsecond": 1.0,
"watthour": 3_600,
"kilowatthour": 3_600_000,
"newtonmeter": 1.0,
"calorie_nutr": 4_186.8,
"kilocalorie_nutr": 4_186_800.00,
"electronvolt": 1.602_176_634e-19,
"britishthermalunit_it": 1_055.055_85,
"footpound": 1.355_818,
}
def energy_conversion(from_type: str, to_type: str, value: float) -> float:
"""
Conversion of energy units.
>>> energy_conversion("joule", "joule", 1)
1.0
>>> energy_conversion("joule", "kilojoule", 1)
0.001
>>> energy_conversion("joule", "megajoule", 1)
1e-06
>>> energy_conversion("joule", "gigajoule", 1)
1e-09
>>> energy_conversion("joule", "wattsecond", 1)
1.0
>>> energy_conversion("joule", "watthour", 1)
0.0002777777777777778
>>> energy_conversion("joule", "kilowatthour", 1)
2.7777777777777776e-07
>>> energy_conversion("joule", "newtonmeter", 1)
1.0
>>> energy_conversion("joule", "calorie_nutr", 1)
0.00023884589662749592
>>> energy_conversion("joule", "kilocalorie_nutr", 1)
2.388458966274959e-07
>>> energy_conversion("joule", "electronvolt", 1)
6.241509074460763e+18
>>> energy_conversion("joule", "britishthermalunit_it", 1)
0.0009478171226670134
>>> energy_conversion("joule", "footpound", 1)
0.7375621211696556
>>> energy_conversion("joule", "megajoule", 1000)
0.001
>>> energy_conversion("calorie_nutr", "kilocalorie_nutr", 1000)
1.0
>>> energy_conversion("kilowatthour", "joule", 10)
36000000.0
>>> energy_conversion("britishthermalunit_it", "footpound", 1)
778.1692306784539
>>> energy_conversion("watthour", "joule", "a") # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for /: 'str' and 'float'
>>> energy_conversion("wrongunit", "joule", 1) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: Incorrect 'from_type' or 'to_type' value: 'wrongunit', 'joule'
Valid values are: joule, ... footpound
>>> energy_conversion("joule", "wrongunit", 1) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: Incorrect 'from_type' or 'to_type' value: 'joule', 'wrongunit'
Valid values are: joule, ... footpound
>>> energy_conversion("123", "abc", 1) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: Incorrect 'from_type' or 'to_type' value: '123', 'abc'
Valid values are: joule, ... footpound
"""
if to_type not in ENERGY_CONVERSION or from_type not in ENERGY_CONVERSION:
msg = (
f"Incorrect 'from_type' or 'to_type' value: {from_type!r}, {to_type!r}\n"
f"Valid values are: {', '.join(ENERGY_CONVERSION)}"
)
raise ValueError(msg)
return value * ENERGY_CONVERSION[from_type] / ENERGY_CONVERSION[to_type]
if __name__ == "__main__":
import doctest
doctest.testmod()
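
Each conversion is one ratio of factors through the joule base unit; checking one doctest by hand with the ENERGY_CONVERSION values above:

    energy_conversion("kilowatthour", "joule", 10)
    = 10 * 3_600_000 / 1.0
    = 36000000.0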

View File

@ -22,9 +22,13 @@ REFERENCES :
-> Wikipedia reference: https://en.wikipedia.org/wiki/Millimeter
"""
from collections import namedtuple
from typing import NamedTuple
class FromTo(NamedTuple):
from_factor: float
to_factor: float
from_to = namedtuple("from_to", "from_ to")
TYPE_CONVERSION = {
"millimeter": "mm",
@ -40,14 +44,14 @@ TYPE_CONVERSION = {
}
METRIC_CONVERSION = {
"mm": from_to(0.001, 1000),
"cm": from_to(0.01, 100),
"m": from_to(1, 1),
"km": from_to(1000, 0.001),
"in": from_to(0.0254, 39.3701),
"ft": from_to(0.3048, 3.28084),
"yd": from_to(0.9144, 1.09361),
"mi": from_to(1609.34, 0.000621371),
"mm": FromTo(0.001, 1000),
"cm": FromTo(0.01, 100),
"m": FromTo(1, 1),
"km": FromTo(1000, 0.001),
"in": FromTo(0.0254, 39.3701),
"ft": FromTo(0.3048, 3.28084),
"yd": FromTo(0.9144, 1.09361),
"mi": FromTo(1609.34, 0.000621371),
}
@ -104,16 +108,22 @@ def length_conversion(value: float, from_type: str, to_type: str) -> float:
new_to = to_type.lower().rstrip("s")
new_to = TYPE_CONVERSION.get(new_to, new_to)
if new_from not in METRIC_CONVERSION:
raise ValueError(
msg = (
f"Invalid 'from_type' value: {from_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
raise ValueError(msg)
if new_to not in METRIC_CONVERSION:
raise ValueError(
msg = (
f"Invalid 'to_type' value: {to_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
return value * METRIC_CONVERSION[new_from].from_ * METRIC_CONVERSION[new_to].to
raise ValueError(msg)
return (
value
* METRIC_CONVERSION[new_from].from_factor
* METRIC_CONVERSION[new_to].to_factor
)
if __name__ == "__main__":
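
The hunks above (and the matching ones in the pressure and volume converters below) all apply one refactor: the untyped collections.namedtuple is replaced by a typing.NamedTuple subclass with named, typed fields. A minimal side-by-side sketch, using this file's field names:

    from collections import namedtuple
    from typing import NamedTuple

    OldFromTo = namedtuple("from_to", "from_ to")  # positional, untyped fields

    class FromTo(NamedTuple):  # same runtime behavior, typed and self-documenting
        from_factor: float
        to_factor: float

    assert OldFromTo(0.001, 1000).from_ == FromTo(0.001, 1000).from_factor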

View File

@ -96,7 +96,7 @@ def add_si_prefix(value: float) -> str:
for name_prefix, value_prefix in prefixes.items():
numerical_part = value / (10**value_prefix)
if numerical_part > 1:
return f"{str(numerical_part)} {name_prefix}"
return f"{numerical_part!s} {name_prefix}"
return str(value)
@ -111,7 +111,7 @@ def add_binary_prefix(value: float) -> str:
for prefix in BinaryUnit:
numerical_part = value / (2**prefix.value)
if numerical_part > 1:
return f"{str(numerical_part)} {prefix.name}"
return f"{numerical_part!s} {prefix.name}"
return str(value)

View File

@ -19,19 +19,23 @@ REFERENCES :
-> https://www.unitconverters.net/pressure-converter.html
"""
from collections import namedtuple
from typing import NamedTuple
class FromTo(NamedTuple):
from_factor: float
to_factor: float
from_to = namedtuple("from_to", "from_ to")
PRESSURE_CONVERSION = {
"atm": from_to(1, 1),
"pascal": from_to(0.0000098, 101325),
"bar": from_to(0.986923, 1.01325),
"kilopascal": from_to(0.00986923, 101.325),
"megapascal": from_to(9.86923, 0.101325),
"psi": from_to(0.068046, 14.6959),
"inHg": from_to(0.0334211, 29.9213),
"torr": from_to(0.00131579, 760),
"atm": FromTo(1, 1),
"pascal": FromTo(0.0000098, 101325),
"bar": FromTo(0.986923, 1.01325),
"kilopascal": FromTo(0.00986923, 101.325),
"megapascal": FromTo(9.86923, 0.101325),
"psi": FromTo(0.068046, 14.6959),
"inHg": FromTo(0.0334211, 29.9213),
"torr": FromTo(0.00131579, 760),
}
@ -71,7 +75,9 @@ def pressure_conversion(value: float, from_type: str, to_type: str) -> float:
+ ", ".join(PRESSURE_CONVERSION)
)
return (
value * PRESSURE_CONVERSION[from_type].from_ * PRESSURE_CONVERSION[to_type].to
value
* PRESSURE_CONVERSION[from_type].from_factor
* PRESSURE_CONVERSION[to_type].to_factor
)

View File

@ -121,8 +121,8 @@ def rgb_to_hsv(red: int, green: int, blue: int) -> list[float]:
float_red = red / 255
float_green = green / 255
float_blue = blue / 255
value = max(max(float_red, float_green), float_blue)
chroma = value - min(min(float_red, float_green), float_blue)
value = max(float_red, float_green, float_blue)
chroma = value - min(float_red, float_green, float_blue)
saturation = 0 if value == 0 else chroma / value
if chroma == 0:

View File

@ -57,10 +57,11 @@ def convert_speed(speed: float, unit_from: str, unit_to: str) -> float:
115.078
"""
if unit_to not in speed_chart or unit_from not in speed_chart_inverse:
raise ValueError(
msg = (
f"Incorrect 'from_type' or 'to_type' value: {unit_from!r}, {unit_to!r}\n"
f"Valid values are: {', '.join(speed_chart_inverse)}"
)
raise ValueError(msg)
return round(speed * speed_chart[unit_from] * speed_chart_inverse[unit_to], 3)

View File

@ -18,35 +18,39 @@ REFERENCES :
-> Wikipedia reference: https://en.wikipedia.org/wiki/Cup_(unit)
"""
from collections import namedtuple
from typing import NamedTuple
class FromTo(NamedTuple):
from_factor: float
to_factor: float
from_to = namedtuple("from_to", "from_ to")
METRIC_CONVERSION = {
"cubicmeter": from_to(1, 1),
"litre": from_to(0.001, 1000),
"kilolitre": from_to(1, 1),
"gallon": from_to(0.00454, 264.172),
"cubicyard": from_to(0.76455, 1.30795),
"cubicfoot": from_to(0.028, 35.3147),
"cup": from_to(0.000236588, 4226.75),
"cubic meter": FromTo(1, 1),
"litre": FromTo(0.001, 1000),
"kilolitre": FromTo(1, 1),
"gallon": FromTo(0.00454, 264.172),
"cubic yard": FromTo(0.76455, 1.30795),
"cubic foot": FromTo(0.028, 35.3147),
"cup": FromTo(0.000236588, 4226.75),
}
def volume_conversion(value: float, from_type: str, to_type: str) -> float:
"""
Conversion between volume units.
>>> volume_conversion(4, "cubicmeter", "litre")
>>> volume_conversion(4, "cubic meter", "litre")
4000
>>> volume_conversion(1, "litre", "gallon")
0.264172
>>> volume_conversion(1, "kilolitre", "cubicmeter")
>>> volume_conversion(1, "kilolitre", "cubic meter")
1
>>> volume_conversion(3, "gallon", "cubicyard")
>>> volume_conversion(3, "gallon", "cubic yard")
0.017814279
>>> volume_conversion(2, "cubicyard", "litre")
>>> volume_conversion(2, "cubic yard", "litre")
1529.1
>>> volume_conversion(4, "cubicfoot", "cup")
>>> volume_conversion(4, "cubic foot", "cup")
473.396
>>> volume_conversion(1, "cup", "kilolitre")
0.000236588
@ -54,7 +58,7 @@ def volume_conversion(value: float, from_type: str, to_type: str) -> float:
Traceback (most recent call last):
...
ValueError: Invalid 'from_type' value: 'wrongUnit' Supported values are:
cubicmeter, litre, kilolitre, gallon, cubicyard, cubicfoot, cup
cubic meter, litre, kilolitre, gallon, cubic yard, cubic foot, cup
"""
if from_type not in METRIC_CONVERSION:
raise ValueError(
@ -66,7 +70,11 @@ def volume_conversion(value: float, from_type: str, to_type: str) -> float:
f"Invalid 'to_type' value: {to_type!r}. Supported values are:\n"
+ ", ".join(METRIC_CONVERSION)
)
return value * METRIC_CONVERSION[from_type].from_ * METRIC_CONVERSION[to_type].to
return (
value
* METRIC_CONVERSION[from_type].from_factor
* METRIC_CONVERSION[to_type].to_factor
)
if __name__ == "__main__":

View File

@ -299,10 +299,11 @@ def weight_conversion(from_type: str, to_type: str, value: float) -> float:
1.999999998903455
"""
if to_type not in KILOGRAM_CHART or from_type not in WEIGHT_TYPE_CHART:
raise ValueError(
msg = (
f"Invalid 'from_type' or 'to_type' value: {from_type!r}, {to_type!r}\n"
f"Supported values are: {', '.join(WEIGHT_TYPE_CHART)}"
)
raise ValueError(msg)
return value * KILOGRAM_CHART[to_type] * WEIGHT_TYPE_CHART[from_type]

View File

@ -1,7 +1,6 @@
def permute(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
>>> from itertools import permutations
>>> numbers= [1,2,3]
>>> all(list(nums) in permute(numbers) for nums in permutations(numbers))
@ -20,7 +19,32 @@ def permute(nums: list[int]) -> list[list[int]]:
return result
def permute2(nums):
"""
Return all permutations of the given list.
>>> permute2([1, 2, 3])
[[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
"""
def backtrack(start):
if start == len(nums) - 1:
output.append(nums[:])
else:
for i in range(start, len(nums)):
nums[start], nums[i] = nums[i], nums[start]
backtrack(start + 1)
nums[start], nums[i] = nums[i], nums[start] # backtrack
output = []
backtrack(0)
return output
if __name__ == "__main__":
import doctest
# use res to print the data in permute2 function
res = permute2([1, 2, 3])
print(res)
doctest.testmod()
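
For intuition, the swap-based backtracking in permute2 fixes one index at a time; traced by hand for [1, 2, 3]:

    fix index 0 = 1 -> recurse on [2, 3] -> [1, 2, 3], [1, 3, 2]
    fix index 0 = 2 -> recurse on [1, 3] -> [2, 1, 3], [2, 3, 1]
    fix index 0 = 3 -> recurse on [2, 1] -> [3, 2, 1], [3, 1, 2]

This is exactly the doctest order; the second swap after each recursive call restores nums so the next iteration starts from the original ordering.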

View File

@ -0,0 +1,98 @@
"""
Calculate the Product Sum from a Special Array.
reference: https://dev.to/sfrasica/algorithms-product-sum-from-an-array-dc6
Python doctests can be run with the following command:
python -m doctest -v product_sum.py
Calculate the product sum of a "special" array which can contain integers or nested
arrays. The product sum is obtained by adding all elements and multiplying by their
respective depths.
For example, in the array [x, y], the product sum is (x + y). In the array [x, [y, z]],
the product sum is x + 2 * (y + z). In the array [x, [y, [z]]],
the product sum is x + 2 * (y + 3z).
Example Input:
[5, 2, [7, -1], 3, [6, [-13, 8], 4]]
Output: 12
"""
def product_sum(arr: list[int | list], depth: int) -> int:
"""
Recursively calculates the product sum of an array.
The product sum of an array is defined as the sum of its elements multiplied by
their respective depths. If an element is a list, its product sum is calculated
recursively by multiplying the sum of its elements with its depth plus one.
Args:
arr: The array of integers and nested lists.
depth: The current depth level.
Returns:
int: The product sum of the array.
Examples:
>>> product_sum([1, 2, 3], 1)
6
>>> product_sum([-1, 2, [-3, 4]], 2)
8
>>> product_sum([1, 2, 3], -1)
-6
>>> product_sum([1, 2, 3], 0)
0
>>> product_sum([1, 2, 3], 7)
42
>>> product_sum((1, 2, 3), 7)
42
>>> product_sum({1, 2, 3}, 7)
42
>>> product_sum([1, -1], 1)
0
>>> product_sum([1, -2], 1)
-1
>>> product_sum([-3.5, [1, [0.5]]], 1)
1.5
"""
total_sum = 0
for ele in arr:
total_sum += product_sum(ele, depth + 1) if isinstance(ele, list) else ele
return total_sum * depth
def product_sum_array(array: list[int | list]) -> int:
"""
Calculates the product sum of an array.
Args:
array (List[Union[int, List]]): The array of integers and nested lists.
Returns:
int: The product sum of the array.
Examples:
>>> product_sum_array([1, 2, 3])
6
>>> product_sum_array([1, [2, 3]])
11
>>> product_sum_array([1, [2, [3, 4]]])
47
>>> product_sum_array([0])
0
>>> product_sum_array([-3.5, [1, [0.5]]])
1.5
>>> product_sum_array([1, -2])
-1
"""
return product_sum(array, 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
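
Working the module's example through the recursion, each bracket multiplies the sum of its contents by its depth:

    depth 1:         5 + 2 + 3                     = 10
    depth 2:         2 * (7 + (-1))                = 12
    depths 2 and 3:  2 * (6 + 4 + 3 * (-13 + 8))   = -10
    total:           10 + 12 - 10                  = 12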

View File

@ -78,10 +78,8 @@ class Node:
return pformat({f"{self.value}": (self.left, self.right)}, indent=1)
@property
def is_right(self) -> bool:
if self.parent and self.parent.right:
return self == self.parent.right
return False
def is_right(self):
return self.parent and self is self.parent.right
class BinarySearchTree:
@ -98,12 +96,12 @@ class BinarySearchTree:
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
if node.is_right: # If it is the right children
if node.is_right: # If it is the right child
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = None
self.root = new_children
def empty(self) -> bool:
return self.root is None
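
The last hunk is the substantive fix here: when the removed node is the root and has exactly one child, the old code set self.root = None and silently dropped the surviving subtree; assigning new_children (whose parent was already cleared just above) promotes that child to be the new root.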

View File

@ -77,7 +77,8 @@ class BinarySearchTree:
elif label > node.label:
node.right = self._put(node.right, label, node)
else:
raise Exception(f"Node with label {label} already exists")
msg = f"Node with label {label} already exists"
raise Exception(msg)
return node
@ -100,7 +101,8 @@ class BinarySearchTree:
def _search(self, node: Node | None, label: int) -> Node:
if node is None:
raise Exception(f"Node with label {label} does not exist")
msg = f"Node with label {label} does not exist"
raise Exception(msg)
else:
if label < node.label:
node = self._search(node.left, label)

View File

@ -31,7 +31,8 @@ def binary_tree_mirror(binary_tree: dict, root: int = 1) -> dict:
if not binary_tree:
raise ValueError("binary tree cannot be empty")
if root not in binary_tree:
raise ValueError(f"root {root} is not present in the binary_tree")
msg = f"root {root} is not present in the binary_tree"
raise ValueError(msg)
binary_tree_mirror_dictionary = dict(binary_tree)
binary_tree_mirror_dict(binary_tree_mirror_dictionary, root)
return binary_tree_mirror_dictionary

View File

@ -58,6 +58,19 @@ def inorder(root: Node | None) -> list[int]:
return [*inorder(root.left), root.data, *inorder(root.right)] if root else []
def reverse_inorder(root: Node | None) -> list[int]:
"""
Reverse in-order traversal visits right subtree, root node, left subtree.
>>> reverse_inorder(make_tree())
[3, 1, 5, 2, 4]
"""
return (
[*reverse_inorder(root.right), root.data, *reverse_inorder(root.left)]
if root
else []
)
def height(root: Node | None) -> int:
"""
Recursive function for calculating the height of the binary tree.
@ -161,15 +174,12 @@ def zigzag(root: Node | None) -> Sequence[Node | None] | list[Any]:
def main() -> None: # Main function for testing.
"""
Create binary tree.
"""
# Create binary tree.
root = make_tree()
"""
All Traversals of the binary are as follows:
"""
# All Traversals of the binary are as follows:
print(f"In-order Traversal: {inorder(root)}")
print(f"Reverse In-order Traversal: {reverse_inorder(root)}")
print(f"Pre-order Traversal: {preorder(root)}")
print(f"Post-order Traversal: {postorder(root)}", "\n")

View File

@ -39,8 +39,8 @@ Space: O(1)
from __future__ import annotations
from collections import namedtuple
from dataclasses import dataclass
from typing import NamedTuple
@dataclass
@ -50,7 +50,9 @@ class TreeNode:
right: TreeNode | None = None
CoinsDistribResult = namedtuple("CoinsDistribResult", "moves excess")
class CoinsDistribResult(NamedTuple):
moves: int
excess: int
def distribute_coins(root: TreeNode | None) -> int:
@ -79,7 +81,7 @@ def distribute_coins(root: TreeNode | None) -> int:
# Validation
def count_nodes(node: TreeNode | None) -> int:
"""
>>> count_nodes(None):
>>> count_nodes(None)
0
"""
if node is None:
@ -89,7 +91,7 @@ def distribute_coins(root: TreeNode | None) -> int:
def count_coins(node: TreeNode | None) -> int:
"""
>>> count_coins(None):
>>> count_coins(None)
0
"""
if node is None:

View File

@ -152,7 +152,7 @@ class RedBlackTree:
self.grandparent.color = 1
self.grandparent._insert_repair()
def remove(self, label: int) -> RedBlackTree:
def remove(self, label: int) -> RedBlackTree: # noqa: PLR0912
"""Remove label from this tree."""
if self.label == label:
if self.left and self.right:

View File

@ -7,7 +7,8 @@ class SegmentTree:
self.st = [0] * (
4 * self.N
) # approximate the overall size of segment tree with array N
self.build(1, 0, self.N - 1)
if self.N:
self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2

View File

@ -56,7 +56,8 @@ def find_python_set(node: Node) -> set:
for s in sets:
if node.data in s:
return s
raise ValueError(f"{node.data} is not in {sets}")
msg = f"{node.data} is not in {sets}"
raise ValueError(msg)
def test_disjoint_set() -> None:

View File

@ -1,9 +1,28 @@
from __future__ import annotations
from abc import abstractmethod
from collections.abc import Iterable
from typing import Generic, Protocol, TypeVar
class Heap:
class Comparable(Protocol):
@abstractmethod
def __lt__(self: T, other: T) -> bool:
pass
@abstractmethod
def __gt__(self: T, other: T) -> bool:
pass
@abstractmethod
def __eq__(self: T, other: object) -> bool:
pass
T = TypeVar("T", bound=Comparable)
class Heap(Generic[T]):
"""A Max Heap Implementation
>>> unsorted = [103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5]
@ -27,7 +46,7 @@ class Heap:
"""
def __init__(self) -> None:
self.h: list[float] = []
self.h: list[T] = []
self.heap_size: int = 0
def __repr__(self) -> str:
@ -79,7 +98,7 @@ class Heap:
# fix the subsequent violation recursively if any
self.max_heapify(violation)
def build_max_heap(self, collection: Iterable[float]) -> None:
def build_max_heap(self, collection: Iterable[T]) -> None:
"""build max heap from an unsorted array"""
self.h = list(collection)
self.heap_size = len(self.h)
@ -88,7 +107,7 @@ class Heap:
for i in range(self.heap_size // 2 - 1, -1, -1):
self.max_heapify(i)
def extract_max(self) -> float:
def extract_max(self) -> T:
"""get and remove max from heap"""
if self.heap_size >= 2:
me = self.h[0]
@ -102,7 +121,7 @@ class Heap:
else:
raise Exception("Empty heap")
def insert(self, value: float) -> None:
def insert(self, value: T) -> None:
"""insert a new value into the max heap"""
self.h.append(value)
idx = (self.heap_size - 1) // 2
@ -144,7 +163,7 @@ if __name__ == "__main__":
]:
print(f"unsorted array: {unsorted}")
heap = Heap()
heap: Heap[int] = Heap()
heap.build_max_heap(unsorted)
print(f"after build heap: {heap}")

View File

@ -94,25 +94,25 @@ def test_circular_linked_list() -> None:
try:
circular_linked_list.delete_front()
raise AssertionError() # This should not happen
raise AssertionError # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_tail()
raise AssertionError() # This should not happen
raise AssertionError # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_nth(-1)
raise AssertionError()
raise AssertionError
except IndexError:
assert True
try:
circular_linked_list.delete_nth(0)
raise AssertionError()
raise AssertionError
except IndexError:
assert True

View File

@ -198,13 +198,13 @@ def test_doubly_linked_list() -> None:
try:
linked_list.delete_head()
raise AssertionError() # This should not happen.
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError() # This should not happen.
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.

View File

@ -353,13 +353,13 @@ def test_singly_linked_list() -> None:
try:
linked_list.delete_head()
raise AssertionError() # This should not happen.
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError() # This should not happen.
raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.

View File

@ -32,7 +32,7 @@ class Deque:
the number of nodes
"""
__slots__ = ["_front", "_back", "_len"]
__slots__ = ("_front", "_back", "_len")
@dataclass
class _Node:
@ -54,7 +54,7 @@ class Deque:
the current node of the iteration.
"""
__slots__ = ["_cur"]
__slots__ = ("_cur",)
def __init__(self, cur: Deque._Node | None) -> None:
self._cur = cur

View File

@ -0,0 +1,141 @@
"""Queue represented by a Python list"""
from collections.abc import Iterable
from typing import Generic, TypeVar
_T = TypeVar("_T")
class QueueByList(Generic[_T]):
def __init__(self, iterable: Iterable[_T] | None = None) -> None:
"""
>>> QueueByList()
Queue(())
>>> QueueByList([10, 20, 30])
Queue((10, 20, 30))
>>> QueueByList((i**2 for i in range(1, 4)))
Queue((1, 4, 9))
"""
self.entries: list[_T] = list(iterable or [])
def __len__(self) -> int:
"""
>>> len(QueueByList())
0
>>> from string import ascii_lowercase
>>> len(QueueByList(ascii_lowercase))
26
>>> queue = QueueByList()
>>> for i in range(1, 11):
... queue.put(i)
>>> len(queue)
10
>>> for i in range(2):
... queue.get()
1
2
>>> len(queue)
8
"""
return len(self.entries)
def __repr__(self) -> str:
"""
>>> queue = QueueByList()
>>> queue
Queue(())
>>> str(queue)
'Queue(())'
>>> queue.put(10)
>>> queue
Queue((10,))
>>> queue.put(20)
>>> queue.put(30)
>>> queue
Queue((10, 20, 30))
"""
return f"Queue({tuple(self.entries)})"
def put(self, item: _T) -> None:
"""Put `item` to the Queue
>>> queue = QueueByList()
>>> queue.put(10)
>>> queue.put(20)
>>> len(queue)
2
>>> queue
Queue((10, 20))
"""
self.entries.append(item)
def get(self) -> _T:
"""
Get `item` from the Queue
>>> queue = QueueByList((10, 20, 30))
>>> queue.get()
10
>>> queue.put(40)
>>> queue.get()
20
>>> queue.get()
30
>>> len(queue)
1
>>> queue.get()
40
>>> queue.get()
Traceback (most recent call last):
...
IndexError: Queue is empty
"""
if not self.entries:
raise IndexError("Queue is empty")
return self.entries.pop(0)
def rotate(self, rotation: int) -> None:
"""Rotate the items of the Queue `rotation` times
>>> queue = QueueByList([10, 20, 30, 40])
>>> queue
Queue((10, 20, 30, 40))
>>> queue.rotate(1)
>>> queue
Queue((20, 30, 40, 10))
>>> queue.rotate(2)
>>> queue
Queue((40, 10, 20, 30))
"""
put = self.entries.append
get = self.entries.pop
for _ in range(rotation):
put(get(0))
def get_front(self) -> _T:
"""Get the front item from the Queue
>>> queue = QueueByList((10, 20, 30))
>>> queue.get_front()
10
>>> queue
Queue((10, 20, 30))
>>> queue.get()
10
>>> queue.get_front()
20
"""
return self.entries[0]
if __name__ == "__main__":
from doctest import testmod
testmod()
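
One design note: list.pop(0) in get() shifts every remaining element, so dequeueing is O(n). collections.deque gives O(1) pops from the left; a sketch of the same core operations, for comparison only (the class above keeps a plain list, matching its name):

    from collections import deque

    entries: deque[int] = deque([10, 20, 30])
    entries.append(40)        # put
    print(entries.popleft())  # get -> 10, O(1) instead of O(n)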

View File

@ -1,52 +0,0 @@
"""Queue represented by a Python list"""
class Queue:
def __init__(self):
self.entries = []
self.length = 0
self.front = 0
def __str__(self):
printed = "<" + str(self.entries)[1:-1] + ">"
return printed
"""Enqueues {@code item}
@param item
item to enqueue"""
def put(self, item):
self.entries.append(item)
self.length = self.length + 1
"""Dequeues {@code item}
@requirement: |self.length| > 0
@return dequeued
item that was dequeued"""
def get(self):
self.length = self.length - 1
dequeued = self.entries[self.front]
# self.front-=1
# self.entries = self.entries[self.front:]
self.entries = self.entries[1:]
return dequeued
"""Rotates the queue {@code rotation} times
@param rotation
number of times to rotate queue"""
def rotate(self, rotation):
for _ in range(rotation):
self.put(self.get())
"""Enqueues {@code item}
@return item at front of self.entries"""
def get_front(self):
return self.entries[0]
"""Returns the length of this.entries"""
def size(self):
return self.length

View File

@ -4,9 +4,26 @@ https://en.wikipedia.org/wiki/Reverse_Polish_notation
https://en.wikipedia.org/wiki/Shunting-yard_algorithm
"""
from typing import Literal
from .balanced_parentheses import balanced_parentheses
from .stack import Stack
PRECEDENCES: dict[str, int] = {
"+": 1,
"-": 1,
"*": 2,
"/": 2,
"^": 3,
}
ASSOCIATIVITIES: dict[str, Literal["LR", "RL"]] = {
"+": "LR",
"-": "LR",
"*": "LR",
"/": "LR",
"^": "RL",
}
def precedence(char: str) -> int:
"""
@ -14,7 +31,15 @@ def precedence(char: str) -> int:
order of operation.
https://en.wikipedia.org/wiki/Order_of_operations
"""
return {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}.get(char, -1)
return PRECEDENCES.get(char, -1)
def associativity(char: str) -> Literal["LR", "RL"]:
"""
Return the associativity of the operator `char`.
https://en.wikipedia.org/wiki/Operator_associativity
"""
return ASSOCIATIVITIES[char]
def infix_to_postfix(expression_str: str) -> str:
@ -35,6 +60,8 @@ def infix_to_postfix(expression_str: str) -> str:
'a b c * + d e * f + g * +'
>>> infix_to_postfix("x^y/(5*z)+2")
'x y ^ 5 z * / 2 +'
>>> infix_to_postfix("2^3^2")
'2 3 2 ^ ^'
"""
if not balanced_parentheses(expression_str):
raise ValueError("Mismatched parentheses")
@ -50,9 +77,26 @@ def infix_to_postfix(expression_str: str) -> str:
postfix.append(stack.pop())
stack.pop()
else:
while not stack.is_empty() and precedence(char) <= precedence(stack.peek()):
while True:
if stack.is_empty():
stack.push(char)
break
char_precedence = precedence(char)
tos_precedence = precedence(stack.peek())
if char_precedence > tos_precedence:
stack.push(char)
break
if char_precedence < tos_precedence:
postfix.append(stack.pop())
continue
# Precedences are equal
if associativity(char) == "RL":
stack.push(char)
break
postfix.append(stack.pop())
stack.push(char)
while not stack.is_empty():
postfix.append(stack.pop())
return " ".join(postfix)

View File

@ -92,13 +92,13 @@ def test_stack() -> None:
try:
_ = stack.pop()
raise AssertionError() # This should not happen
raise AssertionError # This should not happen
except StackUnderflowError:
assert True # This should happen
try:
_ = stack.peek()
raise AssertionError() # This should not happen
raise AssertionError # This should not happen
except StackUnderflowError:
assert True # This should happen
@ -118,7 +118,7 @@ def test_stack() -> None:
try:
stack.push(200)
raise AssertionError() # This should not happen
raise AssertionError # This should not happen
except StackOverflowError:
assert True # This should happen

View File

@ -54,10 +54,17 @@ class RadixNode:
word (str): word to insert
>>> RadixNode("myprefix").insert("mystring")
>>> root = RadixNode()
>>> root.insert_many(['myprefix', 'myprefixA', 'myprefixAA'])
>>> root.print_tree()
- myprefix (leaf)
-- A (leaf)
--- A (leaf)
"""
# Case 1: If the word is the prefix of the node
# Solution: We set the current node as leaf
if self.prefix == word:
if self.prefix == word and not self.is_leaf:
self.is_leaf = True
# Case 2: The node has no edges that have a prefix to the word
@ -156,7 +163,7 @@ class RadixNode:
del self.nodes[word[0]]
# We merge the current node with its only child
if len(self.nodes) == 1 and not self.is_leaf:
merging_node = list(self.nodes.values())[0]
merging_node = next(iter(self.nodes.values()))
self.is_leaf = merging_node.is_leaf
self.prefix += merging_node.prefix
self.nodes = merging_node.nodes
@ -165,7 +172,7 @@ class RadixNode:
incoming_node.is_leaf = False
# If there is 1 edge, we merge it with its child
else:
merging_node = list(incoming_node.nodes.values())[0]
merging_node = next(iter(incoming_node.nodes.values()))
incoming_node.is_leaf = merging_node.is_leaf
incoming_node.prefix += merging_node.prefix
incoming_node.nodes = merging_node.nodes
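
Two independent improvements in this file: insert() now flips is_leaf only when the node was not already a leaf, so re-inserting an existing word is a no-op (as the new doctest exercises), and next(iter(d.values())) fetches the single child without building the whole values list the way list(d.values())[0] does.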

View File

@ -21,7 +21,8 @@ class Burkes:
self.max_threshold = int(self.get_greyscale(255, 255, 255))
if not self.min_threshold < threshold < self.max_threshold:
raise ValueError(f"Factor value should be from 0 to {self.max_threshold}")
msg = f"Factor value should be from 0 to {self.max_threshold}"
raise ValueError(msg)
self.input_img = input_img
self.threshold = threshold
@ -38,9 +39,18 @@ class Burkes:
def get_greyscale(cls, blue: int, green: int, red: int) -> float:
"""
>>> Burkes.get_greyscale(3, 4, 5)
3.753
4.185
>>> Burkes.get_greyscale(0, 0, 0)
0.0
>>> Burkes.get_greyscale(255, 255, 255)
255.0
"""
return 0.114 * blue + 0.587 * green + 0.2126 * red
"""
Formula from https://en.wikipedia.org/wiki/HSL_and_HSV
cf. the Lightness section and Fig. 13c.
We use the first of the four formulas given there.
"""
return 0.114 * blue + 0.587 * green + 0.299 * red
def process(self) -> None:
for y in range(self.height):
@ -48,10 +58,10 @@ class Burkes:
greyscale = int(self.get_greyscale(*self.input_img[y][x]))
if self.threshold > greyscale + self.error_table[y][x]:
self.output_img[y][x] = (0, 0, 0)
current_error = greyscale + self.error_table[x][y]
current_error = greyscale + self.error_table[y][x]
else:
self.output_img[y][x] = (255, 255, 255)
current_error = greyscale + self.error_table[x][y] - 255
current_error = greyscale + self.error_table[y][x] - 255
"""
Burkes error propagation (`*` is current pixel):
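
Besides the greyscale coefficient fix (0.2126 -> 0.299, the Rec. 601 red weight that the updated doctests assume), both hunks in process() correct the same transposition: error_table is laid out rows-first, so the error for the pixel at row y, column x is error_table[y][x]; the old error_table[x][y] read a different pixel's error and could index out of range on non-square images.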

View File

@ -96,7 +96,7 @@ def test_nearest_neighbour(
def test_local_binary_pattern():
file_path: str = "digital_image_processing/image_data/lena.jpg"
file_path = "digital_image_processing/image_data/lena.jpg"
# Reading the image and converting it to grayscale.
image = imread(file_path, 0)

View File

@ -174,12 +174,12 @@ def _validate_input(points: list[Point] | list[list[float]]) -> list[Point]:
"""
if not hasattr(points, "__iter__"):
raise ValueError(
f"Expecting an iterable object but got an non-iterable type {points}"
)
msg = f"Expecting an iterable object but got an non-iterable type {points}"
raise ValueError(msg)
if not points:
raise ValueError(f"Expecting a list of points but got {points}")
msg = f"Expecting a list of points but got {points}"
raise ValueError(msg)
return _construct_points(points)
@ -266,7 +266,7 @@ def convex_hull_bf(points: list[Point]) -> list[Point]:
points_left_of_ij = points_right_of_ij = False
ij_part_of_convex_hull = True
for k in range(n):
if k != i and k != j:
if k not in {i, j}:
det_k = _det(points[i], points[j], points[k])
if det_k > 0:

View File

@ -0,0 +1,112 @@
"""
The maximum subarray problem is the task of finding the contiguous subarray that has the
maximum sum within a given array of numbers. For example, given the array
[-2, 1, -3, 4, -1, 2, 1, -5, 4], the contiguous subarray with the maximum sum is
[4, -1, 2, 1], which has a sum of 6.
This divide-and-conquer algorithm finds the maximum subarray in O(n log n) time.
"""
from __future__ import annotations
import time
from collections.abc import Sequence
from random import randint
from matplotlib import pyplot as plt
def max_subarray(
arr: Sequence[float], low: int, high: int
) -> tuple[int | None, int | None, float]:
"""
Solves the maximum subarray problem using divide and conquer.
:param arr: the given array of numbers
:param low: the start index
:param high: the end index
:return: the start index of the maximum subarray, the end index of the
maximum subarray, and the maximum subarray sum
>>> nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
>>> max_subarray(nums, 0, len(nums) - 1)
(3, 6, 6)
>>> nums = [2, 8, 9]
>>> max_subarray(nums, 0, len(nums) - 1)
(0, 2, 19)
>>> nums = [0, 0]
>>> max_subarray(nums, 0, len(nums) - 1)
(0, 0, 0)
>>> nums = [-1.0, 0.0, 1.0]
>>> max_subarray(nums, 0, len(nums) - 1)
(2, 2, 1.0)
>>> nums = [-2, -3, -1, -4, -6]
>>> max_subarray(nums, 0, len(nums) - 1)
(2, 2, -1)
>>> max_subarray([], 0, 0)
(None, None, 0)
"""
if not arr:
return None, None, 0
if low == high:
return low, high, arr[low]
mid = (low + high) // 2
left_low, left_high, left_sum = max_subarray(arr, low, mid)
right_low, right_high, right_sum = max_subarray(arr, mid + 1, high)
cross_left, cross_right, cross_sum = max_cross_sum(arr, low, mid, high)
if left_sum >= right_sum and left_sum >= cross_sum:
return left_low, left_high, left_sum
elif right_sum >= left_sum and right_sum >= cross_sum:
return right_low, right_high, right_sum
return cross_left, cross_right, cross_sum
def max_cross_sum(
arr: Sequence[float], low: int, mid: int, high: int
) -> tuple[int, int, float]:
left_sum, max_left = float("-inf"), -1
right_sum, max_right = float("-inf"), -1
summ: int | float = 0
for i in range(mid, low - 1, -1):
summ += arr[i]
if summ > left_sum:
left_sum = summ
max_left = i
summ = 0
for i in range(mid + 1, high + 1):
summ += arr[i]
if summ > right_sum:
right_sum = summ
max_right = i
return max_left, max_right, (left_sum + right_sum)
def time_max_subarray(input_size: int) -> float:
arr = [randint(1, input_size) for _ in range(input_size)]
start = time.time()
max_subarray(arr, 0, input_size - 1)
end = time.time()
return end - start
def plot_runtimes() -> None:
input_sizes = [10, 100, 1000, 10000, 50000, 100000, 200000, 300000, 400000, 500000]
runtimes = [time_max_subarray(input_size) for input_size in input_sizes]
print("No of Inputs\t\tTime Taken")
for input_size, runtime in zip(input_sizes, runtimes):
print(input_size, "\t\t", runtime)
plt.plot(input_sizes, runtimes)
plt.xlabel("Number of Inputs")
plt.ylabel("Time taken in seconds")
plt.show()
if __name__ == "__main__":
"""
A random simulation of this algorithm.
"""
from doctest import testmod
testmod()
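
The O(n log n) claim follows from the standard recurrence: each call makes two half-size recursive calls plus the O(n) scan in max_cross_sum, so T(n) = 2T(n/2) + O(n), which the master theorem resolves to O(n log n).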

View File

@ -1,78 +0,0 @@
"""
Given a array of length n, max_subarray_sum() finds
the maximum of sum of contiguous sub-array using divide and conquer method.
Time complexity : O(n log n)
Ref : INTRODUCTION TO ALGORITHMS THIRD EDITION
(section : 4, sub-section : 4.1, page : 70)
"""
def max_sum_from_start(array):
"""This function finds the maximum contiguous sum of array from 0 index
Parameters :
array (list[int]) : given array
Returns :
max_sum (int) : maximum contiguous sum of array from 0 index
"""
array_sum = 0
max_sum = float("-inf")
for num in array:
array_sum += num
if array_sum > max_sum:
max_sum = array_sum
return max_sum
def max_cross_array_sum(array, left, mid, right):
"""This function finds the maximum contiguous sum of left and right arrays
Parameters :
array, left, mid, right (list[int], int, int, int)
Returns :
(int) : maximum of sum of contiguous sum of left and right arrays
"""
max_sum_of_left = max_sum_from_start(array[left : mid + 1][::-1])
max_sum_of_right = max_sum_from_start(array[mid + 1 : right + 1])
return max_sum_of_left + max_sum_of_right
def max_subarray_sum(array, left, right):
"""Maximum contiguous sub-array sum, using divide and conquer method
Parameters :
array, left, right (list[int], int, int) :
given array, current left index and current right index
Returns :
int : maximum of sum of contiguous sub-array
"""
# base case: array has only one element
if left == right:
return array[right]
# Recursion
mid = (left + right) // 2
left_half_sum = max_subarray_sum(array, left, mid)
right_half_sum = max_subarray_sum(array, mid + 1, right)
cross_sum = max_cross_array_sum(array, left, mid, right)
return max(left_half_sum, right_half_sum, cross_sum)
if __name__ == "__main__":
array = [-2, -5, 6, -2, -3, 1, 5, -6]
array_length = len(array)
print(
"Maximum sum of contiguous subarray:",
max_subarray_sum(array, 0, array_length - 1),
)

View File

@ -112,17 +112,19 @@ def strassen(matrix1: list, matrix2: list) -> list:
[[139, 163], [121, 134], [100, 121]]
"""
if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]:
raise Exception(
"Unable to multiply these matrices, please check the dimensions. \n"
f"Matrix A:{matrix1} \nMatrix B:{matrix2}"
msg = (
"Unable to multiply these matrices, please check the dimensions.\n"
f"Matrix A: {matrix1}\n"
f"Matrix B: {matrix2}"
)
raise Exception(msg)
dimension1 = matrix_dimensions(matrix1)
dimension2 = matrix_dimensions(matrix2)
if dimension1[0] == dimension1[1] and dimension2[0] == dimension2[1]:
return [matrix1, matrix2]
maximum = max(max(dimension1), max(dimension2))
maximum = max(*dimension1, *dimension2)
maxim = int(math.pow(2, math.ceil(math.log2(maximum))))
new_matrix1 = matrix1
new_matrix2 = matrix2
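
A small worked check of the padding logic: max(*dimension1, *dimension2) unpacks both (rows, cols) pairs into one max call, so for a 3x2 by 2x3 product maximum is 3 and maxim = 2**ceil(log2(3)) = 4, the power-of-two size both matrices are padded up to before recursing.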

View File

@ -24,7 +24,7 @@ class Fibonacci:
return self.sequence[:index]
def main():
def main() -> None:
print(
"Fibonacci Series Using Dynamic Programming\n",
"Enter the index of the Fibonacci number you want to calculate ",

View File

@ -78,17 +78,18 @@ def knapsack_with_example_solution(w: int, wt: list, val: list):
num_items = len(wt)
if num_items != len(val):
raise ValueError(
"The number of weights must be the "
"same as the number of values.\nBut "
f"got {num_items} weights and {len(val)} values"
msg = (
"The number of weights must be the same as the number of values.\n"
f"But got {num_items} weights and {len(val)} values"
)
raise ValueError(msg)
for i in range(num_items):
if not isinstance(wt[i], int):
raise TypeError(
"All weights must be integers but "
f"got weight of type {type(wt[i])} at index {i}"
msg = (
"All weights must be integers but got weight of "
f"type {type(wt[i])} at index {i}"
)
raise TypeError(msg)
optimal_val, dp_table = knapsack(w, wt, val, num_items)
example_optional_set: set = set()

View File

@ -1,93 +0,0 @@
"""
author : Mayank Kumar Jha (mk9440)
"""
from __future__ import annotations
def find_max_sub_array(a, low, high):
if low == high:
return low, high, a[low]
else:
mid = (low + high) // 2
left_low, left_high, left_sum = find_max_sub_array(a, low, mid)
right_low, right_high, right_sum = find_max_sub_array(a, mid + 1, high)
cross_left, cross_right, cross_sum = find_max_cross_sum(a, low, mid, high)
if left_sum >= right_sum and left_sum >= cross_sum:
return left_low, left_high, left_sum
elif right_sum >= left_sum and right_sum >= cross_sum:
return right_low, right_high, right_sum
else:
return cross_left, cross_right, cross_sum
def find_max_cross_sum(a, low, mid, high):
left_sum, max_left = -999999999, -1
right_sum, max_right = -999999999, -1
summ = 0
for i in range(mid, low - 1, -1):
summ += a[i]
if summ > left_sum:
left_sum = summ
max_left = i
summ = 0
for i in range(mid + 1, high + 1):
summ += a[i]
if summ > right_sum:
right_sum = summ
max_right = i
return max_left, max_right, (left_sum + right_sum)
def max_sub_array(nums: list[int]) -> int:
"""
Finds the contiguous subarray which has the largest sum and return its sum.
>>> max_sub_array([-2, 1, -3, 4, -1, 2, 1, -5, 4])
6
An empty (sub)array has sum 0.
>>> max_sub_array([])
0
If all elements are negative, the largest subarray would be the empty array,
having the sum 0.
>>> max_sub_array([-1, -2, -3])
0
>>> max_sub_array([5, -2, -3])
5
>>> max_sub_array([31, -41, 59, 26, -53, 58, 97, -93, -23, 84])
187
"""
best = 0
current = 0
for i in nums:
current += i
current = max(current, 0)
best = max(best, current)
return best
if __name__ == "__main__":
"""
A random simulation of this algorithm.
"""
import time
from random import randint
from matplotlib import pyplot as plt
inputs = [10, 100, 1000, 10000, 50000, 100000, 200000, 300000, 400000, 500000]
tim = []
for i in inputs:
li = [randint(1, i) for j in range(i)]
strt = time.time()
(find_max_sub_array(li, 0, len(li) - 1))
end = time.time()
tim.append(end - strt)
print("No of Inputs Time Taken")
for i in range(len(inputs)):
print(inputs[i], "\t\t", tim[i])
plt.plot(inputs, tim)
plt.xlabel("Number of Inputs")
plt.ylabel("Time taken in seconds ")
plt.show()

View File

@ -0,0 +1,60 @@
"""
The maximum subarray sum problem is the task of finding the maximum sum that can be
obtained from a contiguous subarray within a given array of numbers. For example, given
the array [-2, 1, -3, 4, -1, 2, 1, -5, 4], the contiguous subarray with the maximum sum
is [4, -1, 2, 1], so the maximum subarray sum is 6.
Kadane's algorithm is a simple dynamic programming algorithm that solves the maximum
subarray sum problem in O(n) time and O(1) space.
Reference: https://en.wikipedia.org/wiki/Maximum_subarray_problem
"""
from collections.abc import Sequence
def max_subarray_sum(
arr: Sequence[float], allow_empty_subarrays: bool = False
) -> float:
"""
Solves the maximum subarray sum problem using Kadane's algorithm.
:param arr: the given array of numbers
:param allow_empty_subarrays: if True, then the algorithm considers empty subarrays
>>> max_subarray_sum([2, 8, 9])
19
>>> max_subarray_sum([0, 0])
0
>>> max_subarray_sum([-1.0, 0.0, 1.0])
1.0
>>> max_subarray_sum([1, 2, 3, 4, -2])
10
>>> max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4])
6
>>> max_subarray_sum([2, 3, -9, 8, -2])
8
>>> max_subarray_sum([-2, -3, -1, -4, -6])
-1
>>> max_subarray_sum([-2, -3, -1, -4, -6], allow_empty_subarrays=True)
0
>>> max_subarray_sum([])
0
"""
if not arr:
return 0
max_sum = 0 if allow_empty_subarrays else float("-inf")
curr_sum = 0.0
for num in arr:
curr_sum = max(0 if allow_empty_subarrays else num, curr_sum + num)
max_sum = max(max_sum, curr_sum)
return max_sum
if __name__ == "__main__":
from doctest import testmod
testmod()
nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(f"{max_subarray_sum(nums) = }")

View File

@ -1,20 +0,0 @@
def max_subarray_sum(nums: list) -> int:
"""
>>> max_subarray_sum([6 , 9, -1, 3, -7, -5, 10])
17
"""
if not nums:
return 0
n = len(nums)
res, s, s_pre = nums[0], nums[0], nums[0]
for i in range(1, n):
s = max(nums[i], s_pre + nums[i])
s_pre = s
res = max(res, s)
return res
if __name__ == "__main__":
nums = [6, 9, -1, 3, -7, -5, 10]
print(max_subarray_sum(nums))

View File

@ -42,7 +42,8 @@ def min_steps_to_one(number: int) -> int:
"""
if number <= 0:
raise ValueError(f"n must be greater than 0. Got n = {number}")
msg = f"n must be greater than 0. Got n = {number}"
raise ValueError(msg)
table = [number + 1] * (number + 1)

View File

@ -0,0 +1,97 @@
"""
Regex matching: check whether a text matches a pattern.
Pattern:
'.' Matches any single character.
'*' Matches zero or more of the preceding element.
More info:
https://medium.com/trick-the-interviwer/regular-expression-matching-9972eb74c03
"""
def recursive_match(text: str, pattern: str) -> bool:
"""
Recursive matching algorithm.
Time complexity: O(2 ^ (|text| + |pattern|))
Space complexity: Recursion depth is O(|text| + |pattern|).
:param text: Text to match.
:param pattern: Pattern to match.
:return: True if text matches pattern, False otherwise.
>>> recursive_match('abc', 'a.c')
True
>>> recursive_match('abc', 'af*.c')
True
>>> recursive_match('abc', 'a.c*')
True
>>> recursive_match('abc', 'a.c*d')
False
>>> recursive_match('aa', '.*')
True
"""
if not pattern:
return not text
if not text:
return pattern[-1] == "*" and recursive_match(text, pattern[:-2])
if text[-1] == pattern[-1] or pattern[-1] == ".":
return recursive_match(text[:-1], pattern[:-1])
if pattern[-1] == "*":
return recursive_match(text[:-1], pattern) or recursive_match(
text, pattern[:-2]
)
return False
def dp_match(text: str, pattern: str) -> bool:
"""
Dynamic programming matching algorithm.
Time complexity: O(|text| * |pattern|)
Space complexity: O(|text| * |pattern|)
:param text: Text to match.
:param pattern: Pattern to match.
:return: True if text matches pattern, False otherwise.
>>> dp_match('abc', 'a.c')
True
>>> dp_match('abc', 'af*.c')
True
>>> dp_match('abc', 'a.c*')
True
>>> dp_match('abc', 'a.c*d')
False
>>> dp_match('aa', '.*')
True
"""
m = len(text)
n = len(pattern)
dp = [[False for _ in range(n + 1)] for _ in range(m + 1)]
dp[0][0] = True
for j in range(1, n + 1):
dp[0][j] = pattern[j - 1] == "*" and dp[0][j - 2]
for i in range(1, m + 1):
for j in range(1, n + 1):
if pattern[j - 1] in {".", text[i - 1]}:
dp[i][j] = dp[i - 1][j - 1]
elif pattern[j - 1] == "*":
dp[i][j] = dp[i][j - 2]
if pattern[j - 2] in {".", text[i - 1]}:
dp[i][j] |= dp[i - 1][j]
else:
dp[i][j] = False
return dp[m][n]
if __name__ == "__main__":
import doctest
doctest.testmod()
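A short usage sketch, assuming dp_match from the file above is in scope; in the table it builds, dp[i][j] is True exactly when text[:i] matches pattern[:j]:
print(dp_match("aab", "c*a*b"))               # True: 'c*' matches zero 'c' characters
print(dp_match("mississippi", "mis*is*p*."))  # False: no expansion of the pattern fits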

View File

@ -177,13 +177,15 @@ def _enforce_args(n: int, prices: list):
the rod
"""
if n < 0:
raise ValueError(f"n must be greater than or equal to 0. Got n = {n}")
msg = f"n must be greater than or equal to 0. Got n = {n}"
raise ValueError(msg)
if n > len(prices):
raise ValueError(
"Each integral piece of rod must have a corresponding "
f"price. Got n = {n} but length of prices = {len(prices)}"
msg = (
"Each integral piece of rod must have a corresponding price. "
f"Got n = {n} but length of prices = {len(prices)}"
)
raise ValueError(msg)
def main():

View File

@ -0,0 +1,24 @@
# Tribonacci sequence using Dynamic Programming
def tribonacci(num: int) -> list[int]:
"""
Given a number, return first n Tribonacci Numbers.
>>> tribonacci(5)
[0, 0, 1, 1, 2]
>>> tribonacci(8)
[0, 0, 1, 1, 2, 4, 7, 13]
"""
dp = [0] * num
if num > 2:  # guard: dp[2] only exists when num >= 3
    dp[2] = 1
for i in range(3, num):
dp[i] = dp[i - 1] + dp[i - 2] + dp[i - 3]
return dp
if __name__ == "__main__":
import doctest
doctest.testmod()
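A hypothetical usage sketch, assuming tribonacci from the file above (the num > 2 guard keeps small inputs from raising IndexError):
print(tribonacci(10))  # [0, 0, 1, 1, 2, 4, 7, 13, 24, 44]
print(tribonacci(2))   # [0, 0]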

View File

@ -297,11 +297,13 @@ def _validate_list(_object: Any, var_name: str) -> None:
"""
if not isinstance(_object, list):
raise ValueError(f"{var_name} must be a list")
msg = f"{var_name} must be a list"
raise ValueError(msg)
else:
for x in _object:
if not isinstance(x, str):
raise ValueError(f"{var_name} must be a list of strings")
msg = f"{var_name} must be a list of strings"
raise ValueError(msg)
def _validate_dicts(
@ -384,14 +386,15 @@ def _validate_dict(
ValueError: mock_name nested dictionary all values must be float
"""
if not isinstance(_object, dict):
raise ValueError(f"{var_name} must be a dict")
msg = f"{var_name} must be a dict"
raise ValueError(msg)
if not all(isinstance(x, str) for x in _object):
raise ValueError(f"{var_name} all keys must be strings")
msg = f"{var_name} all keys must be strings"
raise ValueError(msg)
if not all(isinstance(x, value_type) for x in _object.values()):
nested_text = "nested dictionary " if nested else ""
raise ValueError(
f"{var_name} {nested_text}all values must be {value_type.__name__}"
)
msg = f"{var_name} {nested_text}all values must be {value_type.__name__}"
raise ValueError(msg)
if __name__ == "__main__":

View File

@ -1,7 +1,12 @@
# https://en.m.wikipedia.org/wiki/Electric_power
from __future__ import annotations
from collections import namedtuple
from typing import NamedTuple
class Result(NamedTuple):
name: str
value: float
def electric_power(voltage: float, current: float, power: float) -> tuple:
@ -10,11 +15,11 @@ def electric_power(voltage: float, current: float, power: float) -> tuple:
fundamental value of an electrical system.
Examples are below:
>>> electric_power(voltage=0, current=2, power=5)
result(name='voltage', value=2.5)
Result(name='voltage', value=2.5)
>>> electric_power(voltage=2, current=2, power=0)
result(name='power', value=4.0)
Result(name='power', value=4.0)
>>> electric_power(voltage=-2, current=3, power=0)
result(name='power', value=6.0)
Result(name='power', value=6.0)
>>> electric_power(voltage=2, current=4, power=2)
Traceback (most recent call last):
...
@ -28,9 +33,8 @@ def electric_power(voltage: float, current: float, power: float) -> tuple:
...
ValueError: Power cannot be negative in any electrical/electronics system
>>> electric_power(voltage=2.2, current=2.2, power=0)
result(name='power', value=4.84)
Result(name='power', value=4.84)
"""
result = namedtuple("result", "name value")
if (voltage, current, power).count(0) != 1:
raise ValueError("Only one argument must be 0")
elif power < 0:
@ -38,11 +42,11 @@ def electric_power(voltage: float, current: float, power: float) -> tuple:
"Power cannot be negative in any electrical/electronics system"
)
elif voltage == 0:
return result("voltage", power / current)
return Result("voltage", power / current)
elif current == 0:
return result("current", power / voltage)
return Result("current", power / voltage)
elif power == 0:
return result("power", float(round(abs(voltage * current), 2)))
return Result("power", float(round(abs(voltage * current), 2)))
else:
raise ValueError("Exactly one argument must be 0")

View File

@ -23,7 +23,8 @@ def resistor_parallel(resistors: list[float]) -> float:
index = 0
for resistor in resistors:
if resistor <= 0:
raise ValueError(f"Resistor at index {index} has a negative or zero value!")
msg = f"Resistor at index {index} has a negative or zero value!"
raise ValueError(msg)
first_sum += 1 / float(resistor)
index += 1
return 1 / first_sum
@ -47,7 +48,8 @@ def resistor_series(resistors: list[float]) -> float:
for resistor in resistors:
sum_r += resistor
if resistor < 0:
raise ValueError(f"Resistor at index {index} has a negative value!")
msg = f"Resistor at index {index} has a negative value!"
raise ValueError(msg)
index += 1
return sum_r
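A hypothetical usage sketch, assuming resistor_parallel and resistor_series from the file above are in scope:
resistors = [3.21389, 2, 3]
print(resistor_parallel(resistors))  # 1 / (1/3.21389 + 1/2 + 1/3), roughly 0.8738
print(resistor_series(resistors))    # 3.21389 + 2 + 3 = 8.21389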

View File

@ -4,7 +4,7 @@ from __future__ import annotations
def simple_interest(
principal: float, daily_interest_rate: float, days_between_payments: int
principal: float, daily_interest_rate: float, days_between_payments: float
) -> float:
"""
>>> simple_interest(18000.0, 0.06, 3)
@ -42,7 +42,7 @@ def simple_interest(
def compound_interest(
principal: float,
nominal_annual_interest_rate_percentage: float,
number_of_compounding_periods: int,
number_of_compounding_periods: float,
) -> float:
"""
>>> compound_interest(10000.0, 0.05, 3)
@ -77,6 +77,43 @@ def compound_interest(
)
def apr_interest(
principal: float,
nominal_annual_percentage_rate: float,
number_of_years: float,
) -> float:
"""
>>> apr_interest(10000.0, 0.05, 3)
1618.223072263547
>>> apr_interest(10000.0, 0.05, 1)
512.6749646744732
>>> apr_interest(0.5, 0.05, 3)
0.08091115361317736
>>> apr_interest(10000.0, 0.06, -4)
Traceback (most recent call last):
...
ValueError: number_of_years must be > 0
>>> apr_interest(10000.0, -3.5, 3.0)
Traceback (most recent call last):
...
ValueError: nominal_annual_percentage_rate must be >= 0
>>> apr_interest(-5500.0, 0.01, 5)
Traceback (most recent call last):
...
ValueError: principal must be > 0
"""
if number_of_years <= 0:
raise ValueError("number_of_years must be > 0")
if nominal_annual_percentage_rate < 0:
raise ValueError("nominal_annual_percentage_rate must be >= 0")
if principal <= 0:
raise ValueError("principal must be > 0")
return compound_interest(
principal, nominal_annual_percentage_rate / 365, number_of_years * 365
)
if __name__ == "__main__":
import doctest
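A standalone check of the APR relation used above: daily compounding for y years is principal * ((1 + rate/365) ** (365 * y) - 1), which reproduces the first doctest:
principal, rate, years = 10000.0, 0.05, 3
interest = principal * ((1 + rate / 365) ** (365 * years) - 1)
print(interest)  # about 1618.223072263547, matching apr_interest(10000.0, 0.05, 3)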

View File

@ -0,0 +1,42 @@
"""
Reference: https://www.investopedia.com/terms/p/presentvalue.asp
An algorithm that calculates the present value of a stream of yearly cash flows given...
1. The discount rate (as a decimal, not a percent)
2. An array of cash flows, with the index of the cash flow being the associated year
Note: This algorithm assumes that cash flows are paid at the end of the specified year
"""
def present_value(discount_rate: float, cash_flows: list[float]) -> float:
"""
>>> present_value(0.13, [10, 20.70, -293, 297])
4.69
>>> present_value(0.07, [-109129.39, 30923.23, 15098.93, 29734,39])
-42739.63
>>> present_value(0.07, [109129.39, 30923.23, 15098.93, 29734,39])
175519.15
>>> present_value(-1, [109129.39, 30923.23, 15098.93, 29734,39])
Traceback (most recent call last):
...
ValueError: Discount rate cannot be negative
>>> present_value(0.03, [])
Traceback (most recent call last):
...
ValueError: Cash flows list cannot be empty
"""
if discount_rate < 0:
raise ValueError("Discount rate cannot be negative")
if not cash_flows:
raise ValueError("Cash flows list cannot be empty")
present_value = sum(
cash_flow / ((1 + discount_rate) ** i) for i, cash_flow in enumerate(cash_flows)
)
return round(present_value, ndigits=2)
if __name__ == "__main__":
import doctest
doctest.testmod()
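A standalone worked example of the discounting sum above, reproducing the first doctest (year 0 is undiscounted):
rate = 0.13
cash_flows = [10, 20.70, -293, 297]
pv = sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))
print(round(pv, 2))  # 4.69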

View File

@ -82,3 +82,4 @@ if __name__ == "__main__":
vertices = [(-175, -125), (0, 175), (175, -125)] # vertices of triangle
triangle(vertices[0], vertices[1], vertices[2], int(sys.argv[1]))
turtle.Screen().exitonclick()

View File

@ -21,6 +21,54 @@ MUTATION_PROBABILITY = 0.4
random.seed(random.randint(0, 1000))
def evaluate(item: str, main_target: str) -> tuple[str, float]:
"""
Evaluate how similar the item is to the target by counting the
characters that sit in the correct position
>>> evaluate("Helxo Worlx", "Hello World")
('Helxo Worlx', 9.0)
"""
score = len([g for position, g in enumerate(item) if g == main_target[position]])
return (item, float(score))
def crossover(parent_1: str, parent_2: str) -> tuple[str, str]:
"""Slice and combine two string at a random point."""
random_slice = random.randint(0, len(parent_1) - 1)
child_1 = parent_1[:random_slice] + parent_2[random_slice:]
child_2 = parent_2[:random_slice] + parent_1[random_slice:]
return (child_1, child_2)
def mutate(child: str, genes: list[str]) -> str:
"""Mutate a random gene of a child with another one from the list."""
child_list = list(child)
if random.uniform(0, 1) < MUTATION_PROBABILITY:
child_list[random.randint(0, len(child)) - 1] = random.choice(genes)
return "".join(child_list)
# Select, crossover and mutate a new population.
def select(
parent_1: tuple[str, float],
population_score: list[tuple[str, float]],
genes: list[str],
) -> list[str]:
"""Select the second parent and generate new population"""
pop = []
# Generate more children proportionally to the fitness score.
child_n = int(parent_1[1] * 100) + 1
child_n = 10 if child_n >= 10 else child_n
for _ in range(child_n):
parent_2 = population_score[random.randint(0, N_SELECTED)][0]
child_1, child_2 = crossover(parent_1[0], parent_2)
# Append new string to the population list.
pop.append(mutate(child_1, genes))
pop.append(mutate(child_2, genes))
return pop
def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int, str]:
"""
Verify that the target contains no genes besides the ones inside the genes variable.
@ -48,13 +96,13 @@ def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int,
# Verify if N_POPULATION is bigger than N_SELECTED
if N_POPULATION < N_SELECTED:
raise ValueError(f"{N_POPULATION} must be bigger than {N_SELECTED}")
msg = f"{N_POPULATION} must be bigger than {N_SELECTED}"
raise ValueError(msg)
# Verify that the target contains no genes besides the ones inside genes variable.
not_in_genes_list = sorted({c for c in target if c not in genes})
if not_in_genes_list:
raise ValueError(
f"{not_in_genes_list} is not in genes list, evolution cannot converge"
)
msg = f"{not_in_genes_list} is not in genes list, evolution cannot converge"
raise ValueError(msg)
# Generate random starting population.
population = []
@ -70,17 +118,6 @@ def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int,
total_population += len(population)
# Random population created. Now it's time to evaluate.
def evaluate(item: str, main_target: str = target) -> tuple[str, float]:
"""
Evaluate how similar the item is with the target by just
counting each char in the right position
>>> evaluate("Helxo Worlx", Hello World)
["Helxo Worlx", 9]
"""
score = len(
[g for position, g in enumerate(item) if g == main_target[position]]
)
return (item, float(score))
# Adding a bit of concurrency can make everything faster,
#
@ -94,7 +131,7 @@ def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int,
#
# but with a simple algorithm like this, it will probably be slower.
# We just need to call evaluate for every item inside the population.
population_score = [evaluate(item) for item in population]
population_score = [evaluate(item, target) for item in population]
# Check if there is a matching evolution.
population_score = sorted(population_score, key=lambda x: x[1], reverse=True)
@ -121,41 +158,9 @@ def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int,
(item, score / len(target)) for item, score in population_score
]
# Select, crossover and mutate a new population.
def select(parent_1: tuple[str, float]) -> list[str]:
"""Select the second parent and generate new population"""
pop = []
# Generate more children proportionally to the fitness score.
child_n = int(parent_1[1] * 100) + 1
child_n = 10 if child_n >= 10 else child_n
for _ in range(child_n):
parent_2 = population_score[ # noqa: B023
random.randint(0, N_SELECTED)
][0]
child_1, child_2 = crossover(parent_1[0], parent_2)
# Append new string to the population list.
pop.append(mutate(child_1))
pop.append(mutate(child_2))
return pop
def crossover(parent_1: str, parent_2: str) -> tuple[str, str]:
"""Slice and combine two string at a random point."""
random_slice = random.randint(0, len(parent_1) - 1)
child_1 = parent_1[:random_slice] + parent_2[random_slice:]
child_2 = parent_2[:random_slice] + parent_1[random_slice:]
return (child_1, child_2)
def mutate(child: str) -> str:
"""Mutate a random gene of a child with another one from the list."""
child_list = list(child)
if random.uniform(0, 1) < MUTATION_PROBABILITY:
child_list[random.randint(0, len(child)) - 1] = random.choice(genes)
return "".join(child_list)
# This is selection
for i in range(N_SELECTED):
population.extend(select(population_score[int(i)]))
population.extend(select(population_score[int(i)], population_score, genes))
# Check if the population has already reached the maximum value and if so,
# break the cycle. If this check is disabled, the algorithm will take
# forever to compute large strings, but will also calculate small strings in
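A tiny standalone demo of the crossover operator that this diff moves to module level (seeded so the cut point is reproducible on a given Python version; the exact children are not asserted):
import random

random.seed(42)
parent_1, parent_2 = "Hello World", "HxllxxWxrld"
cut = random.randint(0, len(parent_1) - 1)
print(parent_1[:cut] + parent_2[cut:])  # child_1
print(parent_2[:cut] + parent_1[cut:])  # child_2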

View File

@ -28,9 +28,8 @@ def convert_to_2d(
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
raise TypeError(
"Input values must either be float or int: " f"{list(locals().values())}"
)
msg = f"Input values must either be float or int: {list(locals().values())}"
raise TypeError(msg)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
@ -71,10 +70,11 @@ def rotate(
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
raise TypeError(
msg = (
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
raise TypeError(msg)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)

View File

@ -26,8 +26,8 @@ def pass_and_relaxation(
cst_bwd: dict,
queue: PriorityQueue,
parent: dict,
shortest_distance: float | int,
) -> float | int:
shortest_distance: float,
) -> float:
for nxt, d in graph[v]:
if nxt in visited_forward:
continue

View File

@ -73,9 +73,10 @@ class Graph:
target_vertex_parent = self.parent.get(target_vertex)
if target_vertex_parent is None:
raise ValueError(
msg = (
f"No path from vertex: {self.source_vertex} to vertex: {target_vertex}"
)
raise ValueError(msg)
return self.shortest_path(target_vertex_parent) + f"->{target_vertex}"

View File

@ -0,0 +1,89 @@
"""
This script implements the Dijkstra algorithm on a binary grid.
The grid consists of 0s and 1s, where 1 represents
a walkable node and 0 represents an obstacle.
The algorithm finds the shortest path from a start node to a destination node.
Diagonal movement can be allowed or disallowed.
"""
from heapq import heappop, heappush
import numpy as np
def dijkstra(
grid: np.ndarray,
source: tuple[int, int],
destination: tuple[int, int],
allow_diagonal: bool,
) -> tuple[float | int, list[tuple[int, int]]]:
"""
Implements Dijkstra's algorithm on a binary grid.
Args:
grid (np.ndarray): A 2D numpy array representing the grid.
1 represents a walkable node and 0 represents an obstacle.
source (Tuple[int, int]): A tuple representing the start node.
destination (Tuple[int, int]): A tuple representing the
destination node.
allow_diagonal (bool): A boolean determining whether
diagonal movements are allowed.
Returns:
Tuple[Union[float, int], List[Tuple[int, int]]]:
The shortest distance from the start node to the destination node
and the shortest path as a list of nodes.
>>> dijkstra(np.array([[1, 1, 1], [0, 1, 0], [0, 1, 1]]), (0, 0), (2, 2), False)
(4.0, [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)])
>>> dijkstra(np.array([[1, 1, 1], [0, 1, 0], [0, 1, 1]]), (0, 0), (2, 2), True)
(2.0, [(0, 0), (1, 1), (2, 2)])
>>> dijkstra(np.array([[1, 1, 1], [0, 0, 1], [0, 1, 1]]), (0, 0), (2, 2), False)
(4.0, [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)])
"""
rows, cols = grid.shape
dx = [-1, 1, 0, 0]
dy = [0, 0, -1, 1]
if allow_diagonal:
dx += [-1, -1, 1, 1]
dy += [-1, 1, -1, 1]
queue, visited = [(0, source)], set()
matrix = np.full((rows, cols), np.inf)
matrix[source] = 0
predecessors = np.empty((rows, cols), dtype=object)
predecessors[source] = None
while queue:
(dist, (x, y)) = heappop(queue)
if (x, y) in visited:
continue
visited.add((x, y))
if (x, y) == destination:
path = []
while (x, y) != source:
path.append((x, y))
x, y = predecessors[x, y]
path.append(source) # add the source manually
path.reverse()
return matrix[destination], path
for i in range(len(dx)):
nx, ny = x + dx[i], y + dy[i]
if 0 <= nx < rows and 0 <= ny < cols:
next_node = grid[nx][ny]
if next_node == 1 and matrix[nx, ny] > dist + 1:
heappush(queue, (dist + 1, (nx, ny)))
matrix[nx, ny] = dist + 1
predecessors[nx, ny] = (x, y)
return np.inf, []
if __name__ == "__main__":
import doctest
doctest.testmod()
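A hypothetical usage sketch, assuming dijkstra from the file above is in scope; allowing diagonal movement shortens the path from four steps to two:
import numpy as np

grid = np.array([[1, 1, 1], [0, 1, 0], [0, 1, 1]])
print(dijkstra(grid, (0, 0), (2, 2), allow_diagonal=False))  # (4.0, [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)])
print(dijkstra(grid, (0, 0), (2, 2), allow_diagonal=True))   # (2.0, [(0, 0), (1, 1), (2, 2)])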

View File

@ -39,7 +39,7 @@ class DirectedGraph:
stack = []
visited = []
if s == -2:
s = list(self.graph)[0]
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
@ -87,7 +87,7 @@ class DirectedGraph:
d = deque()
visited = []
if s == -2:
s = list(self.graph)[0]
s = next(iter(self.graph))
d.append(s)
visited.append(s)
while d:
@ -114,7 +114,7 @@ class DirectedGraph:
stack = []
visited = []
if s == -2:
s = list(self.graph)[0]
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
@ -146,7 +146,7 @@ class DirectedGraph:
def cycle_nodes(self):
stack = []
visited = []
s = list(self.graph)[0]
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
@ -199,7 +199,7 @@ class DirectedGraph:
def has_cycle(self):
stack = []
visited = []
s = list(self.graph)[0]
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
@ -305,7 +305,7 @@ class Graph:
stack = []
visited = []
if s == -2:
s = list(self.graph)[0]
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
@ -353,7 +353,7 @@ class Graph:
d = deque()
visited = []
if s == -2:
s = list(self.graph)[0]
s = next(iter(self.graph))
d.append(s)
visited.append(s)
while d:
@ -371,7 +371,7 @@ class Graph:
def cycle_nodes(self):
stack = []
visited = []
s = list(self.graph)[0]
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
@ -424,7 +424,7 @@ class Graph:
def has_cycle(self):
stack = []
visited = []
s = list(self.graph)[0]
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
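The idiom behind this repeated change, as a standalone sketch: next(iter(d)) yields a dict's first key in O(1), whereas list(d)[0] materializes every key first:
d = {"a": 1, "b": 2, "c": 3}
print(next(iter(d)))  # 'a', no intermediate list
print(list(d)[0])     # 'a', but copies all keys first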

View File

@ -113,7 +113,7 @@ class PushRelabelExecutor(MaximumFlowAlgorithmExecutor):
vertices_list = [
i
for i in range(self.verticies_count)
if i != self.source_index and i != self.sink_index
if i not in {self.source_index, self.sink_index}
]
# move through list

View File

@ -20,7 +20,7 @@ def check_circuit_or_path(graph, max_node):
odd_degree_nodes = 0
odd_node = -1
for i in range(max_node):
if i not in graph.keys():
if i not in graph:
continue
if len(graph[i]) % 2 == 1:
odd_degree_nodes += 1

View File

@ -0,0 +1,589 @@
#!/usr/bin/env python3
"""
Author: Vikram Nithyanandam
Description:
The following implementation is a robust unweighted graph data structure,
implemented using an adjacency list. The vertices and edges of this graph can be
effectively initialized and modified while storing your chosen generic
value in each vertex.
Adjacency List: https://en.wikipedia.org/wiki/Adjacency_list
Potential Future Ideas:
- Add a flag to set edge weights on and set edge weights
- Make edge weights and vertex values customizable to store whatever the client wants
- Support multigraph functionality if the client wants it
"""
from __future__ import annotations
import random
import unittest
from pprint import pformat
from typing import Generic, TypeVar
T = TypeVar("T")
class GraphAdjacencyList(Generic[T]):
def __init__(
self, vertices: list[T], edges: list[list[T]], directed: bool = True
) -> None:
"""
Parameters:
- vertices: (list[T]) The list of vertex names the client wants to
pass in. Default is empty.
- edges: (list[list[T]]) The list of edges the client wants to
pass in. Each edge is a 2-element list. Default is empty.
- directed: (bool) Indicates if graph is directed or undirected.
Default is True.
"""
self.adj_list: dict[T, list[T]] = {} # dictionary of lists of T
self.directed = directed
# Falsey checks
edges = edges or []
vertices = vertices or []
for vertex in vertices:
self.add_vertex(vertex)
for edge in edges:
if len(edge) != 2:
msg = f"Invalid input: {edge} is the wrong length."
raise ValueError(msg)
self.add_edge(edge[0], edge[1])
def add_vertex(self, vertex: T) -> None:
"""
Adds a vertex to the graph. If the given vertex already exists,
a ValueError will be thrown.
"""
if self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} is already in the graph."
raise ValueError(msg)
self.adj_list[vertex] = []
def add_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Creates an edge from source vertex to destination vertex. If any
given vertex doesn't exist or the edge already exists, a ValueError
will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge already exists between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# add the destination vertex to the list associated with the source vertex
# and vice versa if not directed
self.adj_list[source_vertex].append(destination_vertex)
if not self.directed:
self.adj_list[destination_vertex].append(source_vertex)
def remove_vertex(self, vertex: T) -> None:
"""
Removes the given vertex from the graph and deletes all incoming and
outgoing edges from the given vertex as well. If the given vertex
does not exist, a ValueError will be thrown.
"""
if not self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} does not exist in this graph."
raise ValueError(msg)
if not self.directed:
# If not directed, find all neighboring vertices and delete all references
# of edges connecting to the given vertex
for neighbor in self.adj_list[vertex]:
self.adj_list[neighbor].remove(vertex)
else:
# If directed, search all neighbors of all vertices and delete all
# references of edges connecting to the given vertex
for edge_list in self.adj_list.values():
if vertex in edge_list:
edge_list.remove(vertex)
# Finally, delete the given vertex and all of its outgoing edge references
self.adj_list.pop(vertex)
def remove_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Removes the edge between the two vertices. If any given vertex
doesn't exist or the edge does not exist, a ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if not self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge does NOT exist between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# remove the destination vertex from the list associated with the source
# vertex and vice versa if not directed
self.adj_list[source_vertex].remove(destination_vertex)
if not self.directed:
self.adj_list[destination_vertex].remove(source_vertex)
def contains_vertex(self, vertex: T) -> bool:
"""
Returns True if the graph contains the vertex, False otherwise.
"""
return vertex in self.adj_list
def contains_edge(self, source_vertex: T, destination_vertex: T) -> bool:
"""
Returns True if the graph contains the edge from the source_vertex to the
destination_vertex, False otherwise. If any given vertex doesn't exist, a
ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} "
f"or {destination_vertex} does not exist."
)
raise ValueError(msg)
return destination_vertex in self.adj_list[source_vertex]
def clear_graph(self) -> None:
"""
Clears all vertices and edges.
"""
self.adj_list = {}
def __repr__(self) -> str:
return pformat(self.adj_list)
class TestGraphAdjacencyList(unittest.TestCase):
def __assert_graph_edge_exists_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
edge: list[int],
) -> None:
self.assertTrue(undirected_graph.contains_edge(edge[0], edge[1]))
self.assertTrue(undirected_graph.contains_edge(edge[1], edge[0]))
self.assertTrue(directed_graph.contains_edge(edge[0], edge[1]))
def __assert_graph_edge_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
edge: list[int],
) -> None:
self.assertFalse(undirected_graph.contains_edge(edge[0], edge[1]))
self.assertFalse(undirected_graph.contains_edge(edge[1], edge[0]))
self.assertFalse(directed_graph.contains_edge(edge[0], edge[1]))
def __assert_graph_vertex_exists_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
vertex: int,
) -> None:
self.assertTrue(undirected_graph.contains_vertex(vertex))
self.assertTrue(directed_graph.contains_vertex(vertex))
def __assert_graph_vertex_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
vertex: int,
) -> None:
self.assertFalse(undirected_graph.contains_vertex(vertex))
self.assertFalse(directed_graph.contains_vertex(vertex))
def __generate_random_edges(
self, vertices: list[int], edge_pick_count: int
) -> list[list[int]]:
self.assertTrue(edge_pick_count <= len(vertices))
random_source_vertices: list[int] = random.sample(
vertices[0 : int(len(vertices) / 2)], edge_pick_count
)
random_destination_vertices: list[int] = random.sample(
vertices[int(len(vertices) / 2) :], edge_pick_count
)
random_edges: list[list[int]] = []
for source in random_source_vertices:
for dest in random_destination_vertices:
random_edges.append([source, dest])
return random_edges
def __generate_graphs(
self, vertex_count: int, min_val: int, max_val: int, edge_pick_count: int
) -> tuple[GraphAdjacencyList, GraphAdjacencyList, list[int], list[list[int]]]:
if max_val - min_val + 1 < vertex_count:
raise ValueError(
"Will result in duplicate vertices. Either increase range "
"between min_val and max_val or decrease vertex count."
)
# generate graph input
random_vertices: list[int] = random.sample(
range(min_val, max_val + 1), vertex_count
)
random_edges: list[list[int]] = self.__generate_random_edges(
random_vertices, edge_pick_count
)
# build graphs
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=random_edges, directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=random_edges, directed=True
)
return undirected_graph, directed_graph, random_vertices, random_edges
def test_init_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# test graph initialization with vertices and edges
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
self.assertFalse(undirected_graph.directed)
self.assertTrue(directed_graph.directed)
def test_contains_vertex(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# Build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=True
)
# Test contains_vertex
for num in range(101):
self.assertEqual(
num in random_vertices, undirected_graph.contains_vertex(num)
)
self.assertEqual(
num in random_vertices, directed_graph.contains_vertex(num)
)
def test_add_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build empty graphs
undirected_graph: GraphAdjacencyList = GraphAdjacencyList(
vertices=[], edges=[], directed=False
)
directed_graph: GraphAdjacencyList = GraphAdjacencyList(
vertices=[], edges=[], directed=True
)
# run add_vertex
for num in random_vertices:
undirected_graph.add_vertex(num)
for num in random_vertices:
directed_graph.add_vertex(num)
# test add_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
def test_remove_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=True
)
# test remove_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
undirected_graph.remove_vertex(num)
directed_graph.remove_vertex(num)
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, num
)
def test_add_and_remove_vertices_repeatedly(self) -> None:
random_vertices1: list[int] = random.sample(range(51), 20)
random_vertices2: list[int] = random.sample(range(51, 101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices1, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices1, edges=[], directed=True
)
# test adding and removing vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.add_vertex(random_vertices2[i])
directed_graph.add_vertex(random_vertices2[i])
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, random_vertices2[i]
)
undirected_graph.remove_vertex(random_vertices1[i])
directed_graph.remove_vertex(random_vertices1[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices1[i]
)
# remove all vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.remove_vertex(random_vertices2[i])
directed_graph.remove_vertex(random_vertices2[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices2[i]
)
def test_contains_edge(self) -> None:
# generate graphs and graph input
vertex_count = 20
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(vertex_count, 0, 100, 4)
# generate all possible edges for testing
all_possible_edges: list[list[int]] = []
for i in range(vertex_count - 1):
for j in range(i + 1, vertex_count):
all_possible_edges.append([random_vertices[i], random_vertices[j]])
all_possible_edges.append([random_vertices[j], random_vertices[i]])
# test contains_edge function
for edge in all_possible_edges:
if edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
elif [edge[1], edge[0]] in random_edges:
# since this edge exists for undirected but the reverse
# may not exist for directed
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, [edge[1], edge[0]]
)
else:
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_edge(self) -> None:
# generate graph input
random_vertices: list[int] = random.sample(range(101), 15)
random_edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=True
)
# run and test add_edge
for edge in random_edges:
undirected_graph.add_edge(edge[0], edge[1])
directed_graph.add_edge(edge[0], edge[1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
def test_remove_edge(self) -> None:
# generate graph input and graphs
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# run and test remove_edge
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
undirected_graph.remove_edge(edge[0], edge[1])
directed_graph.remove_edge(edge[0], edge[1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_and_remove_edges_repeatedly(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# make some more edge options!
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for i, _ in enumerate(random_edges):
undirected_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
directed_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, more_random_edges[i]
)
undirected_graph.remove_edge(random_edges[i][0], random_edges[i][1])
directed_graph.remove_edge(random_edges[i][0], random_edges[i][1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, random_edges[i]
)
def test_add_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with self.assertRaises(ValueError):
undirected_graph.add_vertex(vertex)
with self.assertRaises(ValueError):
directed_graph.add_vertex(vertex)
def test_remove_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for i in range(101):
if i not in random_vertices:
with self.assertRaises(ValueError):
undirected_graph.remove_vertex(i)
with self.assertRaises(ValueError):
directed_graph.remove_vertex(i)
def test_add_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for edge in random_edges:
with self.assertRaises(ValueError):
undirected_graph.add_edge(edge[0], edge[1])
with self.assertRaises(ValueError):
directed_graph.add_edge(edge[0], edge[1])
def test_remove_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for edge in more_random_edges:
with self.assertRaises(ValueError):
undirected_graph.remove_edge(edge[0], edge[1])
with self.assertRaises(ValueError):
directed_graph.remove_edge(edge[0], edge[1])
def test_contains_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with self.assertRaises(ValueError):
undirected_graph.contains_edge(vertex, 102)
with self.assertRaises(ValueError):
directed_graph.contains_edge(vertex, 102)
with self.assertRaises(ValueError):
undirected_graph.contains_edge(103, 102)
with self.assertRaises(ValueError):
directed_graph.contains_edge(103, 102)
if __name__ == "__main__":
unittest.main()
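A hypothetical quick usage of GraphAdjacencyList from the file above:
g = GraphAdjacencyList(vertices=[1, 2, 3], edges=[[1, 2], [2, 3]], directed=False)
print(g.contains_edge(2, 1))  # True: undirected edges are symmetric
g.remove_vertex(2)
print(g)  # {1: [], 3: []}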

View File

@ -0,0 +1,608 @@
#!/usr/bin/env python3
"""
Author: Vikram Nithyanandam
Description:
The following implementation is a robust unweighted graph data structure,
implemented using an adjacency matrix. The vertices and edges of this graph can be
effectively initialized and modified while storing your chosen generic
value in each vertex.
Adjacency Matrix: https://mathworld.wolfram.com/AdjacencyMatrix.html
Potential Future Ideas:
- Add a flag to set edge weights on and set edge weights
- Make edge weights and vertex values customizable to store whatever the client wants
- Support multigraph functionality if the client wants it
"""
from __future__ import annotations
import random
import unittest
from pprint import pformat
from typing import Generic, TypeVar
T = TypeVar("T")
class GraphAdjacencyMatrix(Generic[T]):
def __init__(
self, vertices: list[T], edges: list[list[T]], directed: bool = True
) -> None:
"""
Parameters:
- vertices: (list[T]) The list of vertex names the client wants to
pass in. Default is empty.
- edges: (list[list[T]]) The list of edges the client wants to
pass in. Each edge is a 2-element list. Default is empty.
- directed: (bool) Indicates if graph is directed or undirected.
Default is True.
"""
self.directed = directed
self.vertex_to_index: dict[T, int] = {}
self.adj_matrix: list[list[int]] = []
# Falsey checks
edges = edges or []
vertices = vertices or []
for vertex in vertices:
self.add_vertex(vertex)
for edge in edges:
if len(edge) != 2:
msg = f"Invalid input: {edge} must have length 2."
raise ValueError(msg)
self.add_edge(edge[0], edge[1])
def add_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Creates an edge from source vertex to destination vertex. If any
given vertex doesn't exist or the edge already exists, a ValueError
will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge already exists between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# Get the indices of the corresponding vertices and set their edge value to 1.
u: int = self.vertex_to_index[source_vertex]
v: int = self.vertex_to_index[destination_vertex]
self.adj_matrix[u][v] = 1
if not self.directed:
self.adj_matrix[v][u] = 1
def remove_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Removes the edge between the two vertices. If any given vertex
doesn't exist or the edge does not exist, a ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if not self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge does NOT exist between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# Get the indices of the corresponding vertices and set their edge value to 0.
u: int = self.vertex_to_index[source_vertex]
v: int = self.vertex_to_index[destination_vertex]
self.adj_matrix[u][v] = 0
if not self.directed:
self.adj_matrix[v][u] = 0
def add_vertex(self, vertex: T) -> None:
"""
Adds a vertex to the graph. If the given vertex already exists,
a ValueError will be thrown.
"""
if self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} already exists in this graph."
raise ValueError(msg)
# build column for vertex
for row in self.adj_matrix:
row.append(0)
# build row for vertex and update other data structures
self.adj_matrix.append([0] * (len(self.adj_matrix) + 1))
self.vertex_to_index[vertex] = len(self.adj_matrix) - 1
def remove_vertex(self, vertex: T) -> None:
"""
Removes the given vertex from the graph and deletes all incoming and
outgoing edges from the given vertex as well. If the given vertex
does not exist, a ValueError will be thrown.
"""
if not self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} does not exist in this graph."
raise ValueError(msg)
# first slide up the rows by deleting the row corresponding to
# the vertex being deleted.
start_index = self.vertex_to_index[vertex]
self.adj_matrix.pop(start_index)
# next, slide the columns to the left by deleting the values in
# the column corresponding to the vertex being deleted
for lst in self.adj_matrix:
lst.pop(start_index)
# final clean up
self.vertex_to_index.pop(vertex)
# decrement indices for vertices shifted by the deleted vertex in the adj matrix
for vertex in self.vertex_to_index:
if self.vertex_to_index[vertex] >= start_index:
self.vertex_to_index[vertex] = self.vertex_to_index[vertex] - 1
def contains_vertex(self, vertex: T) -> bool:
"""
Returns True if the graph contains the vertex, False otherwise.
"""
return vertex in self.vertex_to_index
def contains_edge(self, source_vertex: T, destination_vertex: T) -> bool:
"""
Returns True if the graph contains the edge from the source_vertex to the
destination_vertex, False otherwise. If any given vertex doesn't exist, a
ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} "
f"or {destination_vertex} does not exist."
)
raise ValueError(msg)
u = self.vertex_to_index[source_vertex]
v = self.vertex_to_index[destination_vertex]
return self.adj_matrix[u][v] == 1
def clear_graph(self) -> None:
"""
Clears all vertices and edges.
"""
self.vertex_to_index = {}
self.adj_matrix = []
def __repr__(self) -> str:
first = "Adj Matrix:\n" + pformat(self.adj_matrix)
second = "\nVertex to index mapping:\n" + pformat(self.vertex_to_index)
return first + second
class TestGraphMatrix(unittest.TestCase):
def __assert_graph_edge_exists_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
edge: list[int],
) -> None:
self.assertTrue(undirected_graph.contains_edge(edge[0], edge[1]))
self.assertTrue(undirected_graph.contains_edge(edge[1], edge[0]))
self.assertTrue(directed_graph.contains_edge(edge[0], edge[1]))
def __assert_graph_edge_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
edge: list[int],
) -> None:
self.assertFalse(undirected_graph.contains_edge(edge[0], edge[1]))
self.assertFalse(undirected_graph.contains_edge(edge[1], edge[0]))
self.assertFalse(directed_graph.contains_edge(edge[0], edge[1]))
def __assert_graph_vertex_exists_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
vertex: int,
) -> None:
self.assertTrue(undirected_graph.contains_vertex(vertex))
self.assertTrue(directed_graph.contains_vertex(vertex))
def __assert_graph_vertex_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
vertex: int,
) -> None:
self.assertFalse(undirected_graph.contains_vertex(vertex))
self.assertFalse(directed_graph.contains_vertex(vertex))
def __generate_random_edges(
self, vertices: list[int], edge_pick_count: int
) -> list[list[int]]:
self.assertTrue(edge_pick_count <= len(vertices))
random_source_vertices: list[int] = random.sample(
vertices[0 : int(len(vertices) / 2)], edge_pick_count
)
random_destination_vertices: list[int] = random.sample(
vertices[int(len(vertices) / 2) :], edge_pick_count
)
random_edges: list[list[int]] = []
for source in random_source_vertices:
for dest in random_destination_vertices:
random_edges.append([source, dest])
return random_edges
def __generate_graphs(
self, vertex_count: int, min_val: int, max_val: int, edge_pick_count: int
) -> tuple[GraphAdjacencyMatrix, GraphAdjacencyMatrix, list[int], list[list[int]]]:
if max_val - min_val + 1 < vertex_count:
raise ValueError(
"Will result in duplicate vertices. Either increase "
"range between min_val and max_val or decrease vertex count"
)
# generate graph input
random_vertices: list[int] = random.sample(
range(min_val, max_val + 1), vertex_count
)
random_edges: list[list[int]] = self.__generate_random_edges(
random_vertices, edge_pick_count
)
# build graphs
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=random_edges, directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=random_edges, directed=True
)
return undirected_graph, directed_graph, random_vertices, random_edges
def test_init_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# test graph initialization with vertices and edges
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
self.assertFalse(undirected_graph.directed)
self.assertTrue(directed_graph.directed)
def test_contains_vertex(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# Build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=True
)
# Test contains_vertex
for num in range(101):
self.assertEqual(
num in random_vertices, undirected_graph.contains_vertex(num)
)
self.assertEqual(
num in random_vertices, directed_graph.contains_vertex(num)
)
def test_add_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build empty graphs
undirected_graph: GraphAdjacencyMatrix = GraphAdjacencyMatrix(
vertices=[], edges=[], directed=False
)
directed_graph: GraphAdjacencyMatrix = GraphAdjacencyMatrix(
vertices=[], edges=[], directed=True
)
# run add_vertex
for num in random_vertices:
undirected_graph.add_vertex(num)
for num in random_vertices:
directed_graph.add_vertex(num)
# test add_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
def test_remove_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=True
)
# test remove_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
undirected_graph.remove_vertex(num)
directed_graph.remove_vertex(num)
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, num
)
def test_add_and_remove_vertices_repeatedly(self) -> None:
random_vertices1: list[int] = random.sample(range(51), 20)
random_vertices2: list[int] = random.sample(range(51, 101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices1, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices1, edges=[], directed=True
)
# test adding and removing vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.add_vertex(random_vertices2[i])
directed_graph.add_vertex(random_vertices2[i])
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, random_vertices2[i]
)
undirected_graph.remove_vertex(random_vertices1[i])
directed_graph.remove_vertex(random_vertices1[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices1[i]
)
# remove all vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.remove_vertex(random_vertices2[i])
directed_graph.remove_vertex(random_vertices2[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices2[i]
)
def test_contains_edge(self) -> None:
# generate graphs and graph input
vertex_count = 20
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(vertex_count, 0, 100, 4)
# generate all possible edges for testing
all_possible_edges: list[list[int]] = []
for i in range(vertex_count - 1):
for j in range(i + 1, vertex_count):
all_possible_edges.append([random_vertices[i], random_vertices[j]])
all_possible_edges.append([random_vertices[j], random_vertices[i]])
# test contains_edge function
for edge in all_possible_edges:
if edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
elif [edge[1], edge[0]] in random_edges:
# since this edge exists for undirected but the reverse may
# not exist for directed
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, [edge[1], edge[0]]
)
else:
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_edge(self) -> None:
# generate graph input
random_vertices: list[int] = random.sample(range(101), 15)
random_edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=True
)
# run and test add_edge
for edge in random_edges:
undirected_graph.add_edge(edge[0], edge[1])
directed_graph.add_edge(edge[0], edge[1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
def test_remove_edge(self) -> None:
# generate graph input and graphs
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# run and test remove_edge
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
undirected_graph.remove_edge(edge[0], edge[1])
directed_graph.remove_edge(edge[0], edge[1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_and_remove_edges_repeatedly(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# make some more edge options!
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for i, _ in enumerate(random_edges):
undirected_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
directed_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, more_random_edges[i]
)
undirected_graph.remove_edge(random_edges[i][0], random_edges[i][1])
directed_graph.remove_edge(random_edges[i][0], random_edges[i][1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, random_edges[i]
)
def test_add_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with self.assertRaises(ValueError):
undirected_graph.add_vertex(vertex)
with self.assertRaises(ValueError):
directed_graph.add_vertex(vertex)
def test_remove_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for i in range(101):
if i not in random_vertices:
with self.assertRaises(ValueError):
undirected_graph.remove_vertex(i)
with self.assertRaises(ValueError):
directed_graph.remove_vertex(i)
def test_add_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for edge in random_edges:
with self.assertRaises(ValueError):
undirected_graph.add_edge(edge[0], edge[1])
with self.assertRaises(ValueError):
directed_graph.add_edge(edge[0], edge[1])
def test_remove_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for edge in more_random_edges:
with self.assertRaises(ValueError):
undirected_graph.remove_edge(edge[0], edge[1])
with self.assertRaises(ValueError):
directed_graph.remove_edge(edge[0], edge[1])
def test_contains_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with self.assertRaises(ValueError):
undirected_graph.contains_edge(vertex, 102)
with self.assertRaises(ValueError):
directed_graph.contains_edge(vertex, 102)
with self.assertRaises(ValueError):
undirected_graph.contains_edge(103, 102)
with self.assertRaises(ValueError):
directed_graph.contains_edge(103, 102)
if __name__ == "__main__":
unittest.main()
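A hypothetical quick usage of GraphAdjacencyMatrix from the file above:
g = GraphAdjacencyMatrix(vertices=["a", "b", "c"], edges=[["a", "b"]], directed=True)
print(g.contains_edge("a", "b"), g.contains_edge("b", "a"))  # True False
g.remove_vertex("a")
print(g.vertex_to_index)  # {'b': 0, 'c': 1}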

View File

@ -1,24 +0,0 @@
class Graph:
def __init__(self, vertex):
self.vertex = vertex
self.graph = [[0] * vertex for i in range(vertex)]
def add_edge(self, u, v):
self.graph[u - 1][v - 1] = 1
self.graph[v - 1][u - 1] = 1
def show(self):
for i in self.graph:
for j in i:
print(j, end=" ")
print(" ")
g = Graph(100)
g.add_edge(1, 4)
g.add_edge(4, 2)
g.add_edge(4, 5)
g.add_edge(2, 5)
g.add_edge(5, 3)
g.show()

View File

@ -58,8 +58,8 @@ class Node:
The heuristic here is the Manhattan Distance
Could elaborate to offer more than one choice
"""
dy = abs(self.pos_x - self.goal_x)
dx = abs(self.pos_y - self.goal_y)
dx = abs(self.pos_x - self.goal_x)
dy = abs(self.pos_y - self.goal_y)
return dx + dy
def __lt__(self, other) -> bool:
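The fix above is cosmetic in effect, since Manhattan distance is symmetric in its two terms; a standalone sketch:
def manhattan(pos: tuple[int, int], goal: tuple[int, int]) -> int:
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

print(manhattan((0, 0), (3, 4)))  # 7, regardless of which axis is named dx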

graphs/tests/__init__.py Normal file
View File

View File

@ -0,0 +1,48 @@
"""
Calculate the minimum waiting time using a greedy algorithm.
reference: https://www.youtube.com/watch?v=Sf3eiO12eJs
For doctests, run the following command:
python -m doctest -v minimum_waiting_time.py
The minimum_waiting_time function uses a greedy algorithm to calculate the minimum
total waiting time for all queries to complete. It sorts the queries in
non-decreasing order, multiplies each query's duration by the number of queries
that still have to wait behind it, and returns the sum of those products.
Doctests ensure that the function produces the correct output.
"""
def minimum_waiting_time(queries: list[int]) -> int:
"""
This function takes a list of query times and returns the minimum waiting time
for all queries to be completed.
Args:
queries: A list of queries measured in picoseconds
Returns:
total_waiting_time: Minimum waiting time measured in picoseconds
Examples:
>>> minimum_waiting_time([3, 2, 1, 2, 6])
17
>>> minimum_waiting_time([3, 2, 1])
4
>>> minimum_waiting_time([1, 2, 3, 4])
10
>>> minimum_waiting_time([5, 5, 5, 5])
30
>>> minimum_waiting_time([])
0
"""
n = len(queries)
if n in (0, 1):
return 0
return sum(query * (n - i - 1) for i, query in enumerate(sorted(queries)))
if __name__ == "__main__":
import doctest
doctest.testmod()
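A quick worked check of the greedy formula, beyond the doctests: after sorting, each duration is counted once for every query still queued behind it.

# Walk-through of minimum_waiting_time([3, 2, 1, 2, 6]):
# sorted queries: [1, 2, 2, 3, 6]
# contributions:  1*4 + 2*3 + 2*2 + 3*1 + 6*0 = 4 + 6 + 4 + 3 + 0 = 17
assert minimum_waiting_time([3, 2, 1, 2, 6]) == 17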


@@ -43,62 +43,43 @@ def points_to_polynomial(coordinates: list[list[int]]) -> str:
x = len(coordinates)
count_of_line = 0
matrix: list[list[float]] = []
# put the x and x to the power values in a matrix
while count_of_line < x:
count_in_line = 0
a = coordinates[count_of_line][0]
count_line: list[float] = []
while count_in_line < x:
count_line.append(a ** (x - (count_in_line + 1)))
count_in_line += 1
matrix.append(count_line)
count_of_line += 1
matrix: list[list[float]] = [
[
coordinates[count_of_line][0] ** (x - (count_in_line + 1))
for count_in_line in range(x)
]
for count_of_line in range(x)
]
count_of_line = 0
# put the y values into a vector
vector: list[float] = []
while count_of_line < x:
vector.append(coordinates[count_of_line][1])
count_of_line += 1
vector: list[float] = [coordinates[count_of_line][1] for count_of_line in range(x)]
count = 0
while count < x:
zahlen = 0
while zahlen < x:
if count == zahlen:
zahlen += 1
if zahlen == x:
break
bruch = matrix[zahlen][count] / matrix[count][count]
for count in range(x):
for number in range(x):
if count == number:
continue
fraction = matrix[number][count] / matrix[count][count]
for counting_columns, item in enumerate(matrix[count]):
# manipulating all the values in the matrix
matrix[zahlen][counting_columns] -= item * bruch
matrix[number][counting_columns] -= item * fraction
# manipulating the values in the vector
vector[zahlen] -= vector[count] * bruch
zahlen += 1
count += 1
vector[number] -= vector[count] * fraction
count = 0
# make solutions
solution: list[str] = []
while count < x:
solution.append(str(vector[count] / matrix[count][count]))
count += 1
solution: list[str] = [
str(vector[count] / matrix[count][count]) for count in range(x)
]
count = 0
solved = "f(x)="
while count < x:
for count in range(x):
remove_e: list[str] = solution[count].split("E")
if len(remove_e) > 1:
solution[count] = f"{remove_e[0]}*10^{remove_e[1]}"
solved += f"x^{x - (count + 1)}*{solution[count]}"
if count + 1 != x:
solved += "+"
count += 1
return solved
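The refactor above replaces index-driven while loops with list comprehensions. A minimal sketch of the power-matrix comprehension it introduces, with sample coordinates assumed (not taken from the diff):

x = 3
coordinates = [[1, 1], [2, 4], [3, 9]]
# Row i holds coordinates[i][0] raised to the descending powers x-1 .. 0
matrix = [
    [
        coordinates[count_of_line][0] ** (x - (count_in_line + 1))
        for count_in_line in range(x)
    ]
    for count_of_line in range(x)
]
print(matrix)  # [[1, 1, 1], [4, 2, 1], [9, 3, 1]]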


@@ -0,0 +1,89 @@
"""
Calculate the rank of a matrix.
See: https://en.wikipedia.org/wiki/Rank_(linear_algebra)
"""
def rank_of_matrix(matrix: list[list[int | float]]) -> int:
"""
Finds the rank of a matrix.
Args:
matrix: The matrix as a list of lists.
Returns:
The rank of the matrix.
Example:
>>> matrix1 = [[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9]]
>>> rank_of_matrix(matrix1)
2
>>> matrix2 = [[1, 0, 0],
... [0, 1, 0],
... [0, 0, 0]]
>>> rank_of_matrix(matrix2)
2
>>> matrix3 = [[1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]]
>>> rank_of_matrix(matrix3)
2
>>> rank_of_matrix([[2,3,-1,-1],
... [1,-1,-2,4],
... [3,1,3,-2],
... [6,3,0,-7]])
4
>>> rank_of_matrix([[2,1,-3,-6],
... [3,-3,1,2],
... [1,1,1,2]])
3
>>> rank_of_matrix([[2,-1,0],
... [1,3,4],
... [4,1,-3]])
3
>>> rank_of_matrix([[3,2,1],
... [-6,-4,-2]])
1
>>> rank_of_matrix([[],[]])
0
>>> rank_of_matrix([[1]])
1
>>> rank_of_matrix([[]])
0
"""
rows = len(matrix)
columns = len(matrix[0])
rank = min(rows, columns)
for row in range(rank):
# Check if diagonal element is not zero
if matrix[row][row] != 0:
# Eliminate all the elements below the diagonal
for col in range(row + 1, rows):
multiplier = matrix[col][row] / matrix[row][row]
for i in range(row, columns):
matrix[col][i] -= multiplier * matrix[row][i]
else:
# Find a non-zero diagonal element to swap rows
reduce = True
for i in range(row + 1, rows):
if matrix[i][row] != 0:
matrix[row], matrix[i] = matrix[i], matrix[row]
reduce = False
break
if reduce:
rank -= 1
for i in range(rows):
matrix[i][row] = matrix[i][rank]
# Intended to stay on the same row; note that reassigning the loop
# variable does not rewind Python's for-loop iteration
row -= 1
return rank
if __name__ == "__main__":
import doctest
doctest.testmod()
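One caveat when calling rank_of_matrix: the Gaussian elimination above mutates the input in place. A hypothetical usage sketch that preserves the caller's matrix:

import copy

mat = [[1, 2], [2, 4]]  # second row is twice the first, so rank is 1
print(rank_of_matrix(copy.deepcopy(mat)))  # 1
print(mat)  # still [[1, 2], [2, 4]], because only the copy was reduced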


@@ -31,16 +31,18 @@ def schur_complement(
shape_c = np.shape(mat_c)
if shape_a[0] != shape_b[0]:
raise ValueError(
f"Expected the same number of rows for A and B. \
Instead found A of size {shape_a} and B of size {shape_b}"
msg = (
"Expected the same number of rows for A and B. "
f"Instead found A of size {shape_a} and B of size {shape_b}"
)
raise ValueError(msg)
if shape_b[1] != shape_c[1]:
raise ValueError(
f"Expected the same number of columns for B and C. \
Instead found B of size {shape_b} and C of size {shape_c}"
msg = (
"Expected the same number of columns for B and C. "
f"Instead found B of size {shape_b} and C of size {shape_c}"
)
raise ValueError(msg)
a_inv = pseudo_inv
if a_inv is None:
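The rewrite above applies the assign-then-raise pattern: the message is built in a msg variable and then passed to ValueError. Linters such as ruff recommend this (rule EM102, from flake8-errmsg) so tracebacks do not repeat a long f-string expression. A minimal sketch of the same pattern, with names assumed for illustration:

import numpy as np

def check_same_rows(mat_a: np.ndarray, mat_b: np.ndarray) -> None:
    # Build the message first, then raise, mirroring the diff above
    if mat_a.shape[0] != mat_b.shape[0]:
        msg = (
            "Expected the same number of rows for A and B. "
            f"Instead found A of size {mat_a.shape} and B of size {mat_b.shape}"
        )
        raise ValueError(msg)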


@@ -0,0 +1,311 @@
"""
Python implementation of the simplex algorithm for solving linear programs in
tabular form with
- `>=`, `<=`, and `=` constraints and
- each variable `x1, x2, ... >= 0`.
See https://gist.github.com/imengus/f9619a568f7da5bc74eaf20169a24d98 for how to
convert linear programs to simplex tableaus, and the steps taken in the simplex
algorithm.
Resources:
https://en.wikipedia.org/wiki/Simplex_algorithm
https://tinyurl.com/simplex4beginners
"""
from typing import Any
import numpy as np
class Tableau:
"""Operate on simplex tableaus
>>> t = Tableau(np.array([[-1,-1,0,0,-1],[1,3,1,0,4],[3,1,0,1,4.]]), 2)
Traceback (most recent call last):
...
ValueError: RHS must be > 0
"""
def __init__(self, tableau: np.ndarray, n_vars: int) -> None:
# Check if RHS is negative
if np.any(tableau[:, -1], where=tableau[:, -1] < 0):
raise ValueError("RHS must be > 0")
self.tableau = tableau
self.n_rows, _ = tableau.shape
# Number of decision variables x1, x2, x3...
self.n_vars = n_vars
# Number of artificial variables to be minimised
self.n_art_vars = len(np.where(tableau[self.n_vars : -1] == -1)[0])
# 2 if there are >= or == constraints (nonstandard), 1 otherwise (std)
self.n_stages = (self.n_art_vars > 0) + 1
# Number of slack variables added to make inequalities into equalities
self.n_slack = self.n_rows - self.n_stages
# Objectives for each stage
self.objectives = ["max"]
# In two stage simplex, first minimise then maximise
if self.n_art_vars:
self.objectives.append("min")
self.col_titles = [""]
# Index of current pivot row and column
self.row_idx = None
self.col_idx = None
# Does objective row only contain (non)-negative values?
self.stop_iter = False
@staticmethod
def generate_col_titles(*args: int) -> list[str]:
"""Generate column titles for tableau of specific dimensions
>>> Tableau.generate_col_titles(2, 3, 1)
['x1', 'x2', 's1', 's2', 's3', 'a1', 'RHS']
>>> Tableau.generate_col_titles()
Traceback (most recent call last):
...
ValueError: Must provide n_vars, n_slack, and n_art_vars
>>> Tableau.generate_col_titles(-2, 3, 1)
Traceback (most recent call last):
...
ValueError: All arguments must be non-negative integers
"""
if len(args) != 3:
raise ValueError("Must provide n_vars, n_slack, and n_art_vars")
if not all(x >= 0 and isinstance(x, int) for x in args):
raise ValueError("All arguments must be non-negative integers")
# decision | slack | artificial
string_starts = ["x", "s", "a"]
titles = []
for i in range(3):
for j in range(args[i]):
titles.append(string_starts[i] + str(j + 1))
titles.append("RHS")
return titles
def find_pivot(self, tableau: np.ndarray) -> tuple[Any, Any]:
"""Finds the pivot row and column.
>>> t = Tableau(np.array([[-2,1,0,0,0], [3,1,1,0,6], [1,2,0,1,7.]]), 2)
>>> t.find_pivot(t.tableau)
(1, 0)
"""
objective = self.objectives[-1]
# Find entries of highest magnitude in objective rows
sign = (objective == "min") - (objective == "max")
col_idx = np.argmax(sign * tableau[0, : self.n_vars])
# Choice is only valid if below 0 for maximise, and above for minimise
if sign * self.tableau[0, col_idx] <= 0:
self.stop_iter = True
return 0, 0
# Pivot row is chosen as having the lowest quotient when elements of
# the pivot column divide the right-hand side
# Slice excluding the objective rows
s = slice(self.n_stages, self.n_rows)
# RHS
dividend = tableau[s, -1]
# Elements of pivot column within slice
divisor = tableau[s, col_idx]
# Array filled with nans
nans = np.full(self.n_rows - self.n_stages, np.nan)
# If an element in the pivot column is greater than zero, return the
# quotient; otherwise nan
quotients = np.divide(dividend, divisor, out=nans, where=divisor > 0)
# Arg of minimum quotient excluding the nan values. n_stages is added
# to compensate for earlier exclusion of objective columns
row_idx = np.nanargmin(quotients) + self.n_stages
return row_idx, col_idx
def pivot(self, tableau: np.ndarray, row_idx: int, col_idx: int) -> np.ndarray:
"""Pivots on value on the intersection of pivot row and column.
>>> t = Tableau(np.array([[-2,-3,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]), 2)
>>> t.pivot(t.tableau, 1, 0).tolist()
... # doctest: +NORMALIZE_WHITESPACE
[[0.0, 3.0, 2.0, 0.0, 8.0],
[1.0, 3.0, 1.0, 0.0, 4.0],
[0.0, -8.0, -3.0, 1.0, -8.0]]
"""
# Avoid changes to original tableau
piv_row = tableau[row_idx].copy()
piv_val = piv_row[col_idx]
# Entry becomes 1
piv_row *= 1 / piv_val
# Variable in pivot column becomes basic, ie the only non-zero entry
for idx, coeff in enumerate(tableau[:, col_idx]):
tableau[idx] += -coeff * piv_row
tableau[row_idx] = piv_row
return tableau
def change_stage(self, tableau: np.ndarray) -> np.ndarray:
"""Exits first phase of the two-stage method by deleting artificial
rows and columns, or completes the algorithm if exiting the standard
case.
>>> t = Tableau(np.array([
... [3, 3, -1, -1, 0, 0, 4],
... [2, 1, 0, 0, 0, 0, 0.],
... [1, 2, -1, 0, 1, 0, 2],
... [2, 1, 0, -1, 0, 1, 2]
... ]), 2)
>>> t.change_stage(t.tableau).tolist()
... # doctest: +NORMALIZE_WHITESPACE
[[2.0, 1.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 2.0, -1.0, 0.0, 1.0, 2.0],
[2.0, 1.0, 0.0, -1.0, 0.0, 2.0]]
"""
# Objective of original objective row remains
self.objectives.pop()
if not self.objectives:
return tableau
# Slice containing ids for artificial columns
s = slice(-self.n_art_vars - 1, -1)
# Delete the artificial variable columns
tableau = np.delete(tableau, s, axis=1)
# Delete the objective row of the first stage
tableau = np.delete(tableau, 0, axis=0)
self.n_stages = 1
self.n_rows -= 1
self.n_art_vars = 0
self.stop_iter = False
return tableau
def run_simplex(self) -> dict[Any, Any]:
"""Operate on tableau until objective function cannot be
improved further.
# Standard linear program:
Max: x1 + x2
ST: x1 + 3x2 <= 4
3x1 + x2 <= 4
>>> Tableau(np.array([[-1,-1,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]),
... 2).run_simplex()
{'P': 2.0, 'x1': 1.0, 'x2': 1.0}
# Optimal tableau input:
>>> Tableau(np.array([
... [0, 0, 0.25, 0.25, 2],
... [0, 1, 0.375, -0.125, 1],
... [1, 0, -0.125, 0.375, 1]
... ]), 2).run_simplex()
{'P': 2.0, 'x1': 1.0, 'x2': 1.0}
# Non-standard: >= constraints
Max: 2x1 + 3x2 + x3
ST: x1 + x2 + x3 <= 40
2x1 + x2 - x3 >= 10
- x2 + x3 >= 10
>>> Tableau(np.array([
... [2, 0, 0, 0, -1, -1, 0, 0, 20],
... [-2, -3, -1, 0, 0, 0, 0, 0, 0],
... [1, 1, 1, 1, 0, 0, 0, 0, 40],
... [2, 1, -1, 0, -1, 0, 1, 0, 10],
... [0, -1, 1, 0, 0, -1, 0, 1, 10.]
... ]), 3).run_simplex()
{'P': 70.0, 'x1': 10.0, 'x2': 10.0, 'x3': 20.0}
# Non standard: minimisation and equalities
Min: x1 + x2
ST: 2x1 + x2 = 12
6x1 + 5x2 = 40
>>> Tableau(np.array([
... [8, 6, 0, -1, 0, -1, 0, 0, 52],
... [1, 1, 0, 0, 0, 0, 0, 0, 0],
... [2, 1, 1, 0, 0, 0, 0, 0, 12],
... [2, 1, 0, -1, 0, 0, 1, 0, 12],
... [6, 5, 0, 0, 1, 0, 0, 0, 40],
... [6, 5, 0, 0, 0, -1, 0, 1, 40.]
... ]), 2).run_simplex()
{'P': 7.0, 'x1': 5.0, 'x2': 2.0}
"""
# Stop simplex algorithm from cycling.
for _ in range(100):
# Completion of each stage removes an objective. If both stages
# are complete, then no objectives are left
if not self.objectives:
self.col_titles = self.generate_col_titles(
self.n_vars, self.n_slack, self.n_art_vars
)
# Find the values of each variable at optimal solution
return self.interpret_tableau(self.tableau, self.col_titles)
row_idx, col_idx = self.find_pivot(self.tableau)
# If there are no more negative values in objective row
if self.stop_iter:
# Delete artificial variable columns and rows. Update attributes
self.tableau = self.change_stage(self.tableau)
else:
self.tableau = self.pivot(self.tableau, row_idx, col_idx)
return {}
def interpret_tableau(
self, tableau: np.ndarray, col_titles: list[str]
) -> dict[str, float]:
"""Given the final tableau, add the corresponding values of the basic
decision variables to the `output_dict`
>>> tableau = np.array([
... [0,0,0.875,0.375,5],
... [0,1,0.375,-0.125,1],
... [1,0,-0.125,0.375,1]
... ])
>>> t = Tableau(tableau, 2)
>>> t.interpret_tableau(tableau, ["x1", "x2", "s1", "s2", "RHS"])
{'P': 5.0, 'x1': 1.0, 'x2': 1.0}
"""
# P = RHS of final tableau
output_dict = {"P": abs(tableau[0, -1])}
for i in range(self.n_vars):
# Gives ids of nonzero entries in the ith column
nonzero = np.nonzero(tableau[:, i])
n_nonzero = len(nonzero[0])
# First entry in the nonzero ids
nonzero_rowidx = nonzero[0][0]
nonzero_val = tableau[nonzero_rowidx, i]
# If there is only one nonzero value in column, which is one
if n_nonzero == nonzero_val == 1:
rhs_val = tableau[nonzero_rowidx, -1]
output_dict[col_titles[i]] = rhs_val
# Check for basic variables
for title in col_titles:
# Don't add RHS or slack variables to output dict
if title[0] not in "R-s-a":
output_dict.setdefault(title, 0)
return output_dict
if __name__ == "__main__":
import doctest
doctest.testmod()
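A brief usage sketch of the Tableau class, repeating the standard linear program from the first run_simplex doctest (maximise x1 + x2 subject to x1 + 3*x2 <= 4 and 3*x1 + x2 <= 4):

import numpy as np

tableau = np.array([[-1, -1, 0, 0, 0], [1, 3, 1, 0, 4], [3, 1, 0, 1, 4.0]])
print(Tableau(tableau, 2).run_simplex())  # {'P': 2.0, 'x1': 1.0, 'x2': 1.0}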


@@ -1,4 +1,4 @@
total_user,total_events,days
total_users,total_events,days
18231,0.0,1
22621,1.0,2
15675,0.0,3



@@ -1,6 +1,6 @@
"""
this is code for forecasting
but i modified it and used it for safety checker of data
but I modified it and used it for safety checker of data
for example: you have an online shop and for some reason some data are
missing (the amount of data is not what you expected),
then we can use it
@@ -11,6 +11,8 @@ missing (the amount of data is not what you expected),
you can just adjust it for your own purpose
"""
from warnings import simplefilter
import numpy as np
import pandas as pd
from sklearn.preprocessing import Normalizer
@@ -45,8 +47,10 @@ def sarimax_predictor(train_user: list, train_match: list, test_match: list) ->
>>> sarimax_predictor([4,2,6,8], [3,1,2,4], [2])
6.6666671111109626
"""
# Suppress the User Warning raised by SARIMAX due to insufficient observations
simplefilter("ignore", UserWarning)
order = (1, 2, 1)
seasonal_order = (1, 1, 0, 7)
seasonal_order = (1, 1, 1, 7)
model = SARIMAX(
train_user, exog=train_match, order=order, seasonal_order=seasonal_order
)
@@ -102,6 +106,10 @@ def data_safety_checker(list_vote: list, actual_result: float) -> bool:
"""
safe = 0
not_safe = 0
if not isinstance(actual_result, float):
raise TypeError("Actual result should be float. Value passed is a list")
for i in list_vote:
if i > actual_result:
safe = not_safe + 1
@@ -114,16 +122,11 @@ def data_safety_checker(list_vote: list, actual_result: float) -> bool:
if __name__ == "__main__":
# data_input_df = pd.read_csv("ex_data.csv", header=None)
data_input = [[18231, 0.0, 1], [22621, 1.0, 2], [15675, 0.0, 3], [23583, 1.0, 4]]
data_input_df = pd.DataFrame(
data_input, columns=["total_user", "total_even", "days"]
)
"""
data column = total user in a day, how much online event held in one day,
what day is that(sunday-saturday)
"""
data_input_df = pd.read_csv("ex_data.csv")
# start normalization
normalize_df = Normalizer().fit_transform(data_input_df.values)
@@ -138,23 +141,23 @@ if __name__ == "__main__":
x_test = x[len(x) - 1 :]
# for linear regression & sarimax
trn_date = total_date[: len(total_date) - 1]
trn_user = total_user[: len(total_user) - 1]
trn_match = total_match[: len(total_match) - 1]
train_date = total_date[: len(total_date) - 1]
train_user = total_user[: len(total_user) - 1]
train_match = total_match[: len(total_match) - 1]
tst_date = total_date[len(total_date) - 1 :]
tst_user = total_user[len(total_user) - 1 :]
tst_match = total_match[len(total_match) - 1 :]
test_date = total_date[len(total_date) - 1 :]
test_user = total_user[len(total_user) - 1 :]
test_match = total_match[len(total_match) - 1 :]
# voting system with forecasting
res_vote = [
linear_regression_prediction(
trn_date, trn_user, trn_match, tst_date, tst_match
train_date, train_user, train_match, test_date, test_match
),
sarimax_predictor(trn_user, trn_match, tst_match),
support_vector_regressor(x_train, x_test, trn_user),
sarimax_predictor(train_user, train_match, test_match),
support_vector_regressor(x_train, x_test, train_user),
]
# check the safety of today's data
not_str = "" if data_safety_checker(res_vote, tst_user) else "not "
print("Today's data is {not_str}safe.")
not_str = "" if data_safety_checker(res_vote, test_user[0]) else "not "
print(f"Today's data is {not_str}safe.")


@@ -399,7 +399,7 @@ def main():
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear")
system("cls" if name == "nt" else "clear") # noqa: S605
if __name__ == "__main__":


@@ -1,14 +1,55 @@
"""
Locally weighted linear regression, also called local regression, is a type of
non-parametric linear regression that prioritizes data closest to a given
prediction point. The algorithm estimates the vector of model coefficients β
using weighted least squares regression:
β = (XᵀWX)⁻¹(XᵀWy),
where X is the design matrix, y is the response vector, and W is the diagonal
weight matrix.
This implementation calculates wᵢ, the weight of the ith training sample, using
the Gaussian weight:
wᵢ = exp(-‖xᵢ - x‖²/(2τ²)),
where xᵢ is the ith training sample, x is the prediction point, τ is the
"bandwidth", and ‖x‖ is the Euclidean norm (also called the 2-norm or the
ℓ² norm). The bandwidth τ controls how quickly the weight of a training sample
decreases as its distance from the prediction point increases. One can think of
the Gaussian weight as a bell curve centered around the prediction point: a
training sample is weighted lower if it's farther from the center, and τ
controls the spread of the bell curve.
Other types of locally weighted regression such as locally estimated scatterplot
smoothing (LOESS) typically use different weight functions.
References:
- https://en.wikipedia.org/wiki/Local_regression
- https://en.wikipedia.org/wiki/Weighted_least_squares
- https://cs229.stanford.edu/notes2022fall/main_notes.pdf
"""
import matplotlib.pyplot as plt
import numpy as np
def weighted_matrix(
point: np.array, training_data_x: np.array, bandwidth: float
) -> np.array:
def weight_matrix(point: np.ndarray, x_train: np.ndarray, tau: float) -> np.ndarray:
"""
Calculate the weight for every point in the data set.
point --> the x value at which we want to make predictions
>>> weighted_matrix(
Calculate the weight of every point in the training data around a given
prediction point
Args:
point: x-value at which the prediction is being made
x_train: ndarray of x-values for training
tau: bandwidth value, controls how quickly the weight of training values
decreases as the distance from the prediction point increases
Returns:
m x m weight matrix around the prediction point, where m is the size of
the training set
>>> weight_matrix(
... np.array([1., 1.]),
... np.array([[16.99, 10.34], [21.01,23.68], [24.59,25.69]]),
... 0.6
@@ -17,25 +58,30 @@ def weighted_matrix(
[0.00000000e+000, 0.00000000e+000, 0.00000000e+000],
[0.00000000e+000, 0.00000000e+000, 0.00000000e+000]])
"""
m, _ = np.shape(training_data_x) # m is the number of training samples
weights = np.eye(m) # Initializing weights as identity matrix
# calculating weights for all training examples [x(i)'s]
m = len(x_train) # Number of training samples
weights = np.eye(m) # Initialize weights as identity matrix
for j in range(m):
diff = point - training_data_x[j]
weights[j, j] = np.exp(diff @ diff.T / (-2.0 * bandwidth**2))
diff = point - x_train[j]
weights[j, j] = np.exp(diff @ diff.T / (-2.0 * tau**2))
return weights
def local_weight(
point: np.array,
training_data_x: np.array,
training_data_y: np.array,
bandwidth: float,
) -> np.array:
point: np.ndarray, x_train: np.ndarray, y_train: np.ndarray, tau: float
) -> np.ndarray:
"""
Calculate the local weights using the weight_matrix function on training data.
Return the weighted matrix.
Calculate the local weights at a given prediction point using the weight
matrix for that point
Args:
point: x-value at which the prediction is being made
x_train: ndarray of x-values for training
y_train: ndarray of y-values for training
tau: bandwidth value, controls how quickly the weight of training values
decreases as the distance from the prediction point increases
Returns:
ndarray of local weights
>>> local_weight(
... np.array([1., 1.]),
... np.array([[16.99, 10.34], [21.01,23.68], [24.59,25.69]]),
@@ -45,19 +91,28 @@ def local_weight(
array([[0.00873174],
[0.08272556]])
"""
weight = weighted_matrix(point, training_data_x, bandwidth)
w = np.linalg.inv(training_data_x.T @ (weight @ training_data_x)) @ (
training_data_x.T @ weight @ training_data_y.T
weight_mat = weight_matrix(point, x_train, tau)
weight = np.linalg.inv(x_train.T @ weight_mat @ x_train) @ (
x_train.T @ weight_mat @ y_train.T
)
return w
return weight
def local_weight_regression(
training_data_x: np.array, training_data_y: np.array, bandwidth: float
) -> np.array:
x_train: np.ndarray, y_train: np.ndarray, tau: float
) -> np.ndarray:
"""
Calculate predictions for each data point on axis
Calculate predictions for each point in the training data
Args:
x_train: ndarray of x-values for training
y_train: ndarray of y-values for training
tau: bandwidth value, controls how quickly the weight of training values
decreases as the distance from the prediction point increases
Returns:
ndarray of predictions
>>> local_weight_regression(
... np.array([[16.99, 10.34], [21.01, 23.68], [24.59, 25.69]]),
... np.array([[1.01, 1.66, 3.5]]),
@@ -65,77 +120,57 @@ def local_weight_regression(
... )
array([1.07173261, 1.65970737, 3.50160179])
"""
m, _ = np.shape(training_data_x)
ypred = np.zeros(m)
y_pred = np.zeros(len(x_train)) # Initialize array of predictions
for i, item in enumerate(x_train):
y_pred[i] = item @ local_weight(item, x_train, y_train, tau)
for i, item in enumerate(training_data_x):
ypred[i] = item @ local_weight(
item, training_data_x, training_data_y, bandwidth
)
return ypred
return y_pred
def load_data(
dataset_name: str, cola_name: str, colb_name: str
) -> tuple[np.array, np.array, np.array, np.array]:
dataset_name: str, x_name: str, y_name: str
) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
"""
Load data from seaborn and split it into x and y points
>>> pass # No doctests, function is for demo purposes only
"""
import seaborn as sns
data = sns.load_dataset(dataset_name)
col_a = np.array(data[cola_name]) # total_bill
col_b = np.array(data[colb_name]) # tip
x_data = np.array(data[x_name])
y_data = np.array(data[y_name])
mcol_a = col_a.copy()
mcol_b = col_b.copy()
one = np.ones(len(y_data))
one = np.ones(np.shape(mcol_b)[0], dtype=int)
# pairing elements of one and x_data
x_train = np.column_stack((one, x_data))
# pairing elements of one and mcol_a
training_data_x = np.column_stack((one, mcol_a))
return training_data_x, mcol_b, col_a, col_b
def get_preds(training_data_x: np.array, mcol_b: np.array, tau: float) -> np.array:
"""
Get predictions with minimum error for each training data
>>> get_preds(
... np.array([[16.99, 10.34], [21.01, 23.68], [24.59, 25.69]]),
... np.array([[1.01, 1.66, 3.5]]),
... 0.6
... )
array([1.07173261, 1.65970737, 3.50160179])
"""
ypred = local_weight_regression(training_data_x, mcol_b, tau)
return ypred
return x_train, x_data, y_data
def plot_preds(
training_data_x: np.array,
predictions: np.array,
col_x: np.array,
col_y: np.array,
cola_name: str,
colb_name: str,
) -> plt.plot:
x_train: np.ndarray,
preds: np.ndarray,
x_data: np.ndarray,
y_data: np.ndarray,
x_name: str,
y_name: str,
) -> None:
"""
Plot predictions and display the graph
>>> pass # No doctests, function is for demo purposes only
"""
xsort = training_data_x.copy()
xsort.sort(axis=0)
plt.scatter(col_x, col_y, color="blue")
x_train_sorted = np.sort(x_train, axis=0)
plt.scatter(x_data, y_data, color="blue")
plt.plot(
xsort[:, 1],
predictions[training_data_x[:, 1].argsort(0)],
x_train_sorted[:, 1],
preds[x_train[:, 1].argsort(0)],
color="yellow",
linewidth=5,
)
plt.title("Local Weighted Regression")
plt.xlabel(cola_name)
plt.ylabel(colb_name)
plt.xlabel(x_name)
plt.ylabel(y_name)
plt.show()
@@ -144,6 +179,7 @@ if __name__ == "__main__":
doctest.testmod()
training_data_x, mcol_b, col_a, col_b = load_data("tips", "total_bill", "tip")
predictions = get_preds(training_data_x, mcol_b, 0.5)
plot_preds(training_data_x, predictions, col_a, col_b, "total_bill", "tip")
# Demo with a dataset from the seaborn module
training_data_x, total_bill, tip = load_data("tips", "total_bill", "tip")
predictions = local_weight_regression(training_data_x, tip, 5)
plot_preds(training_data_x, predictions, total_bill, tip, "total_bill", "tip")
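For reference, a standalone sketch of the β estimate from the module docstring, computed with np.linalg.solve instead of the explicit np.linalg.inv used in local_weight (a common numerical-stability substitution; the sample data below is assumed, adapted from the doctests):

import numpy as np

x_train = np.array([[1.0, 16.99], [1.0, 21.01], [1.0, 24.59]])  # bias column + x
y_train = np.array([1.01, 1.66, 3.5])
point = np.array([1.0, 21.0])  # prediction point, with matching bias entry
tau = 5.0  # bandwidth

# Diagonal Gaussian weight matrix centered on the prediction point
weights = np.diag(np.exp(-np.sum((x_train - point) ** 2, axis=1) / (2 * tau**2)))
# β = (XᵀWX)⁻¹(XᵀWy), solved without forming the inverse explicitly
beta = np.linalg.solve(x_train.T @ weights @ x_train, x_train.T @ weights @ y_train)
print(point @ beta)  # locally weighted prediction at x = 21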
