Commit Graph

18 Commits

Author · SHA1 · Message · Date
Kim-R2O
423dd2b020
added daily horoscope scraper script (#2167)
* added daily horoscope scraper

* Update daily horoscope scraper script

code refactoring, script editing

* Update web_programming/daily_horoscope.py

Co-authored-by: Christian Clauss <cclauss@me.com>
2020-07-10 17:26:27 +02:00
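A minimal sketch of what a scraper like this might look like, using requests and BeautifulSoup; the URL pattern and CSS class below are assumptions for illustration, not necessarily the exact ones used in web_programming/daily_horoscope.py.

```python
# Hypothetical daily-horoscope scraper; the horoscope.com URL pattern and the
# "main-horoscope" CSS class are illustrative assumptions about the page layout.
import requests
from bs4 import BeautifulSoup


def horoscope(zodiac_sign: int, day: str = "today") -> str:
    url = (
        "https://www.horoscope.com/us/horoscopes/general/"  # assumed endpoint
        f"horoscope-general-daily-{day}.aspx?sign={zodiac_sign}"
    )
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    # Assume the text lives inside <div class="main-horoscope"><p>...</p></div>.
    return soup.find("div", class_="main-horoscope").p.text


if __name__ == "__main__":
    print(horoscope(zodiac_sign=4, day="today"))
```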
Christian Clauss
5f4da5d616
isort --profile black . (#2181)
* updating DIRECTORY.md

* isort --profile black .

* Black after

* updating DIRECTORY.md

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2020-07-06 09:44:19 +02:00
ocivo
a7b4311378
fix fetch_github_info __main__ bug (#2080)
* fix fetch_github_info __main__ bug

* Algorithms should not print

* Update fetch_github_info.py

* Update fetch_github_info.py

Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: John Law <johnlaw.po@gmail.com>
2020-06-11 16:38:43 +02:00
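The "Algorithms should not print" bullet reflects a convention worth spelling out: the function returns data, and any printing happens only under the `__main__` guard. A hedged sketch against the public GitHub API; the token handling in the real fetch_github_info.py may differ.

```python
# Sketch of the "return data, print only under __main__" convention.
import os

import requests


def fetch_github_info(auth_token: str) -> dict:
    """Return the authenticated user's GitHub profile as a dictionary."""
    headers = {
        "Authorization": f"token {auth_token}",
        "Accept": "application/vnd.github.v3+json",
    }
    return requests.get(
        "https://api.github.com/user", headers=headers, timeout=10
    ).json()


if __name__ == "__main__":  # side effects (printing) live here, not in the function
    token = os.environ.get("GITHUB_TOKEN", "")
    if token:
        user = fetch_github_info(token)
        print(f"{user.get('login')} has {user.get('public_repos')} public repos")
    else:
        print("Set GITHUB_TOKEN to run this example.")
```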
Christian Clauss
fa358d614a
current_weather, weather_forecast, weather_onecall (#2048)
* current_weather, weather_forecast, weather_onecall

* updating DIRECTORY.md

* weather_forecast("Kolkata, India")

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2020-05-30 23:47:26 +05:30
Swapnanil Dutta
f8bfd0244d
Created weatherforecast.py (#2037)
* Created weatherforecast.py

Added weatherforecast.py to retrieve weather information for a location and return it as a dictionary.

* Update weatherforecast.py

* Update and rename weatherforecast.py to current_weather.py

Co-authored-by: Christian Clauss <cclauss@me.com>
2020-05-30 17:25:06 +02:00
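A hedged sketch of the kind of script these weather commits describe: query a weather API and return the JSON response as a dictionary. The OpenWeatherMap endpoint is an assumption, and an API key of your own is required.

```python
# Illustrative current-weather fetcher; the OpenWeatherMap endpoint is an
# assumption about the API these scripts target, and APPID must be a real key.
import requests

OPENWEATHERMAP_URL = "https://api.openweathermap.org/data/2.5/weather"


def current_weather(location: str, appid: str) -> dict:
    """Return the current weather for ``location`` as a dictionary."""
    params = {"q": location, "appid": appid}
    return requests.get(OPENWEATHERMAP_URL, params=params, timeout=10).json()


if __name__ == "__main__":
    print(current_weather("Kolkata, India", appid="YOUR_API_KEY"))
```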
Christian Clauss
08c8bb5ad5
Deal with maps (#1945)
* Deal with maps

Try with the search term "pizza" to see why this was done in #1932

* fixup! Format Python code with psf/black push

* Update armstrong_numbers.py

* updating DIRECTORY.md

* Update crawl_google_results.py

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2020-05-06 03:32:40 +02:00
Christian Clauss
6acd7fb5ce
Wrap lines that go beyond GitHub Editor (#1925)
* Wrap lines that go beyond GitHub Editor

* flake8 --count --select=E501 --max-line-length=127

* updating DIRECTORY.md

* Update strassen_matrix_multiplication.py

* fixup! Format Python code with psf/black push

* Update decision_tree.py

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
2020-05-01 23:36:35 +02:00
matkosoric
7f04e5cd34
contribution guidelines checks (#1787)
* spelling corrections

* review

* improved documentation, removed redundant variables, added testing

* added type hint

* camel case to snake case

* spelling fix

* review

* python --> Python # it is a brand name, not a snake

* explicit cast to int

* spaces in int list

* "!= None" to "is not None"

* Update comb_sort.py

* various spelling corrections in documentation & several variable naming convention fixes

* + char in file name

* import dependency - bug fix

Co-authored-by: John Law <johnlaw.po@gmail.com>
2020-03-04 13:40:28 +01:00
Muhammad Umer Farooq
2b19e84767
Create emails_from_url.py (#1756)
* Create emails_from_url.py

* Update emails_from_url.py

* Update emails_from_url.py

* 0 emails found:

* Update emails_from_url.py

* Use Python set() to remove duplicates

* Update emails_from_url.py

* Add type hints and doctests

Co-authored-by: vinayak <itssvinayak@gmail.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
2020-02-26 11:41:56 +01:00
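The bullets mention deduplicating results with a Python set() and adding type hints and doctests; a minimal sketch of that approach, using a simple regex rather than whatever parsing emails_from_url.py actually does.

```python
# Sketch of extracting e-mail addresses from a page; the regex-based approach
# is an assumption for illustration (the real script may walk the HTML instead).
import re

import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")


def emails_from_url(url: str = "https://github.com") -> list[str]:
    """Return a sorted list of unique e-mail addresses found at ``url``."""
    html = requests.get(url, timeout=10).text
    return sorted(set(EMAIL_RE.findall(html)))  # set() removes duplicates


if __name__ == "__main__":
    emails = emails_from_url("https://github.com")
    print(f"{len(emails)} emails found:")
    print("\n".join(emails))
```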
nishithshowri006
51b769095f Create get_imdb_top_250_movies_csv.py (#1659)
* Create get_imdb_top_250_movies_csv.py

* Update get_imdb_top_250_movies_csv.py

* Update get_imdb_top_250_movies_csv.py

* get_imdb_top_250_movies()

Co-authored-by: Christian Clauss <cclauss@me.com>
2020-01-05 22:28:36 +01:00
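A sketch of scraping the IMDb Top 250 chart into a CSV file, as this script's name suggests; the chart URL and the CSS selectors are assumptions about IMDb's markup and may well be out of date.

```python
# Illustrative IMDb Top 250 scraper writing a CSV; selectors are assumptions.
import csv

import requests
from bs4 import BeautifulSoup


def get_imdb_top_250_movies(url: str = "https://www.imdb.com/chart/top") -> dict[str, float]:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    titles = soup.find_all("td", class_="titleColumn")
    ratings = soup.find_all("td", class_="ratingColumn imdbRating")
    return {
        title.a.text: float(rating.strong.text)
        for title, rating in zip(titles, ratings)
    }


def write_movies(filename: str = "movies.csv") -> None:
    with open(filename, "w", newline="") as out_file:
        writer = csv.writer(out_file)
        writer.writerow(["Movie title", "IMDb rating"])
        for title, rating in get_imdb_top_250_movies().items():
            writer.writerow([title, rating])


if __name__ == "__main__":
    write_movies()
```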
vansh bhardwaj
4c75f863c8 added current stock price (#1590)
* added current stock price

* Ten lines or less
2019-11-23 13:54:06 +01:00
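A sketch in the same spirit ("ten lines or less"): scrape a quote page for the current price. The Yahoo Finance URL and the tag/attribute used to find the price are assumptions and will break whenever the page layout changes.

```python
# Illustrative stock-price scraper; the URL pattern and the fin-streamer tag
# are assumptions about Yahoo Finance's current markup.
import requests
from bs4 import BeautifulSoup


def stock_price(symbol: str = "AAPL") -> str:
    url = f"https://finance.yahoo.com/quote/{symbol}"
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    tag = soup.find("fin-streamer", {"data-testid": "qsp-price"})
    return tag.text if tag else "No price found (page layout may have changed)"


if __name__ == "__main__":
    print(f"Current AAPL price: {stock_price('AAPL')}")
```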
Mantas Zimnickas
12f69a86f5 Remove code with side effects from main (#1577)
* Remove code with side effects from main

When running tests with pytest, some modules execute code at module scope and open plot or browser windows.
and open plot or browser windows.

Moves such code under `if __name__ == "__main__"`.

* fixup! Format Python code with psf/black push
2019-11-17 19:38:48 +01:00
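The pattern this commit applies is worth illustrating: module-level code with side effects (plots, browser windows, network calls) runs on import, which disrupts pytest collection, so it belongs behind the `if __name__ == "__main__"` guard. A minimal sketch; the plotting code is illustrative.

```python
# Importing this module (e.g. during pytest collection) only defines the
# function; the plot window opens only when the file is executed directly.
import matplotlib.pyplot as plt


def squares(n: int) -> list[int]:
    """
    >>> squares(4)
    [0, 1, 4, 9]
    """
    return [i * i for i in range(n)]


if __name__ == "__main__":
    # Side effect: opening a plot window. Kept out of module scope so that
    # importing this module stays side-effect free.
    plt.plot(squares(10))
    plt.show()
```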
Christian Clauss
5df8aec66c
GitHub Action formats our code with psf/black (#1569)
* GitHub Action formats our code with psf/black

@poyea Your review please.

* fixup! Format Python code with psf/black push
2019-11-14 19:59:43 +01:00
Sarath Kaul
dd7d2fa270 Pangram Script Added (#1564)
* Python Program that fetches top trending news

* Python Program that fetches top trending news

* Revisions in Fetch BBC News

* psf/black Changes

* Python Program to send slack message to a channel

* Slack Message Revision Changes

* Python Program to check Palindrome String

* Doctest Added

* Python Program to check whether a String is Pangram or not

* Python Program to check whether a String is Pangram or not

* Check Pangram Script Added

* Pangram Script Added

* Anagram Script Changes

* Anagram Alphabet Check Added

* Python Program to fetch github info
2019-11-14 11:22:07 +01:00
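The pangram check itself fits in a few lines; a minimal doctest-style sketch of the idea behind the script:

```python
import string


def is_pangram(sentence: str) -> bool:
    """Return True if ``sentence`` uses every letter of the English alphabet.

    >>> is_pangram("The quick brown fox jumps over the lazy dog")
    True
    >>> is_pangram("Hello world")
    False
    """
    return set(string.ascii_lowercase) <= set(sentence.lower())


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```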
Sarath Kaul
b7e37a856f Added a new Python script and some changes in existing one (#1560)
* Python Program that fetches top trending news

* Python Program that fetches top trending news

* Revisions in Fetch BBC News

* psf/black Changes

* Python Program to send slack message to a channel

* Slack Message Revision Changes
2019-11-12 12:11:54 +01:00
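The Slack script these bullets describe typically amounts to a single POST to an incoming-webhook URL; a hedged sketch, where the webhook URL is a placeholder you would create in your own Slack workspace.

```python
# Minimal Slack notifier via an incoming webhook; the webhook URL below is a
# placeholder, and the repo's script may structure the call differently.
import requests


def send_slack_message(message: str, webhook_url: str) -> None:
    response = requests.post(webhook_url, json={"text": message}, timeout=10)
    if response.status_code != 200:
        raise ValueError(
            f"Request to Slack returned {response.status_code}: {response.text}"
        )


if __name__ == "__main__":
    send_slack_message(
        "Hello from the commit-log example!",
        webhook_url="https://hooks.slack.com/services/XXXX/YYYY/ZZZZ",  # placeholder
    )
```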
Sarath Kaul
5ac4391420 Python Program that fetches top trending news (#1559)
* Python Program that fetches top trending news

* Python Program that fetches top trending news

* Revisions in Fetch BBC News
2019-11-12 09:27:38 +01:00
Orkun İncili
8b52e44230 Python program that lists the top 'n' movies on IMDb (#1477)
* Python program that lists the top 'n' movies on IMDb

* Update get_imdbtop.py
2019-10-27 21:48:38 +01:00
kunal kumar barman
43f99e56c9 Python program that surfs 3 sites at a time (#1389)
* Python program that surfs 3 sites at a time

the search term is passed on the command line at run time, e.g. `python3 project1.py man`

* Update project1.py

* noqa: F401 and reformat with black

* Rename project1.py to web_programming/crawl_google_results.py

* Add beautifulsoup4 to requirements.txt

* Add fake_useragent to requirements.txt

* Update crawl_google_results.py

* headers={"UserAgent": UserAgent().random}

* html.parser, not lxml

* link, not links
2019-10-18 23:30:52 +02:00
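The last few bullets pin down the interesting details: a randomized User-Agent header via fake_useragent, BeautifulSoup's built-in html.parser instead of lxml, and opening result links. A sketch of how those pieces might fit together; the header key is taken verbatim from the commit bullet, and the CSS class used to pick result links is a guess that changes frequently.

```python
# Sketch of crawling Google results with a randomized User-Agent, as the
# commit bullets describe; the "eZt8xd" link class is an assumption about
# Google's (frequently changing) markup, so treat the selector as illustrative.
import webbrowser

import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent

if __name__ == "__main__":
    query = "pizza"
    url = f"https://www.google.com/search?q={query}&num=10"
    headers = {"UserAgent": UserAgent().random}  # header key as in the bullet above
    html = requests.get(url, headers=headers, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")  # html.parser, not lxml
    # Open the first few result links in the default browser.
    for link in soup.find_all("a", attrs={"class": "eZt8xd"})[:3]:
        webbrowser.open(f"https://www.google.com{link.get('href')}")
```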