Found 19 repositories (showing 19)
PedroAlmeidacode
No description available
farid141
Scrapes all job posts on JobStreet from a given link. The result is saved to a CSV file. You can add exclude_keyword and include_keyword to make scanning the jobs in the CSV easier.
Scrapes LinkedIn by looping over different city and job combinations, covering remote, in-person, and hybrid jobs.
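The include/exclude keyword filtering described above isn't shown in this listing; as a rough sketch of the idea (the "title" column, the keyword lists, and the file names are illustrative assumptions, not taken from the repository), filtering the scraped rows before writing the final CSV could look like this:

import csv

include_keywords = ["python", "data"]     # keep rows whose title mentions any of these
exclude_keywords = ["senior", "manager"]  # drop rows whose title mentions any of these

def keep(row):
    title = row["title"].lower()
    if any(k in title for k in exclude_keywords):
        return False
    return any(k in title for k in include_keywords)

with open("jobs_raw.csv", newline="", encoding="utf-8") as src, \
     open("jobs_filtered.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(row for row in reader if keep(row))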
mandivson
Job postings are scraped from Indeed.com (job title, company, salary, location, and job link) and the output is stored in a CSV file. A notifier and periodic scheduling are additional features.
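The notifier and scheduling internals aren't described; one common way to get periodic runs in Python (a sketch assuming the third-party schedule package and a placeholder scrape_indeed() function, neither confirmed from the repository) is:

import time
import schedule  # pip install schedule

def scrape_indeed():
    # placeholder for the actual scrape + CSV write + notification
    print("scraping Indeed and updating the CSV...")

schedule.every(6).hours.do(scrape_indeed)  # re-run the scraper periodically

while True:
    schedule.run_pending()
    time.sleep(60)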
Abdullah-Jalal
Scraping LinkedIn jobs.
LinconDash
The scraper collects specified real-time jobs posted on the LinkedIn platform, cleans the scraped data, and exports it to a CSV file so that users can open the data in Excel and apply to jobs that match their interests.
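The cleaning step isn't spelled out; a minimal pandas sketch (the column names are illustrative assumptions) that deduplicates and tidies scraped rows before the CSV export might look like:

import pandas as pd

def clean_jobs(rows):
    # rows: list of dicts scraped from LinkedIn, e.g. {"title": ..., "company": ..., "location": ..., "link": ...}
    df = pd.DataFrame(rows)
    df = df.drop_duplicates(subset=["link"])      # one row per posting
    df = df.dropna(subset=["title", "company"])   # drop rows missing key fields
    df["title"] = df["title"].str.strip()
    df["company"] = df["company"].str.strip()
    return df

# clean_jobs(scraped_rows).to_csv("linkedin_jobs.csv", index=False)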
JanHamas
No description available
ZhYouness
No description available
abdullahaqeel2011-ai
A fully-automated LinkedIn Jobs Scraper built using Make.com, Apify, and Google Sheets to extract job listings and store them in a structured sheet.
Rishit-Shah
No description available
akat11
Scraped LinkedIn jobs using Java.
MMallah
A scraper to identify and monitor job ads on LinkedIn.
arr773
Scrapes data from the LinkedIn job search portal using Selenium. Also includes a spider for automation.
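As a rough illustration of the Selenium approach (the search URL and CSS selectors are guesses for illustration only and are not taken from this repository; LinkedIn's markup changes frequently):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.linkedin.com/jobs/search/?keywords=data%20engineer&location=Berlin")

# collect the job cards currently rendered on the results page
for card in driver.find_elements(By.CSS_SELECTOR, "div.base-card"):
    title = card.find_element(By.CSS_SELECTOR, "h3").text
    company = card.find_element(By.CSS_SELECTOR, "h4").text
    print(title, "-", company)

driver.quit()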
joygoround
Simple job scraper in Python: scrapes jobs from Indeed & Stack Overflow and saves the data (title, company, location, link) to a CSV file.
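Saving the four listed fields to a CSV is the simplest part; a sketch of just that step (the field names come from the description, everything else is illustrative):

import csv

FIELDS = ["title", "company", "location", "link"]

def save_jobs(jobs, path="jobs.csv"):
    # jobs: iterable of dicts keyed by the names in FIELDS
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(jobs)

save_jobs([{"title": "Backend Developer", "company": "Acme", "location": "Remote", "link": "https://example.com/job/1"}])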
vignesh2914
Naukri web scraping: scrapes job data such as location, job role, job title, and the job link, saves it to a MySQL database, and also converts it to a downloadable CSV file.
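The MySQL-then-CSV flow could look roughly like this (a sketch assuming mysql-connector-python and a jobs table with the four columns named in the description; the credentials, table layout, and sample row are assumptions):

import csv
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="localhost", user="root", password="secret", database="naukri")
cur = conn.cursor()

# insert one scraped job
cur.execute(
    "INSERT INTO jobs (job_title, job_role, location, link) VALUES (%s, %s, %s, %s)",
    ("Data Analyst", "Analytics", "Bengaluru", "https://www.naukri.com/job-listings-example"),
)
conn.commit()

# export the table as a downloadable CSV
cur.execute("SELECT job_title, job_role, location, link FROM jobs")
with open("naukri_jobs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["job_title", "job_role", "location", "link"])
    writer.writerows(cur.fetchall())

cur.close()
conn.close()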
farid141
Scrapes all job posts on Indeed from a given link. The result is saved to a CSV file. You can add exclude_keyword and include_keyword to make scanning the jobs in the CSV easier.
dnarwat
Scrapes job data from Glassdoor using Selenium for another project. The code was taken from the Jupyter notebook linked in this Medium post: https://towardsdatascience.com/selenium-tutorial-scraping-glassdoor-com-in-10-minutes-3d0915c6d905
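Glassdoor renders its listings with JavaScript, which is why the tutorial uses Selenium; a minimal sketch of the explicit-wait pattern such scrapers rely on (the URL and selector are illustrative assumptions, and Glassdoor's markup changes often):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.glassdoor.com/Job/data-scientist-jobs.htm")

# wait until the job cards have actually rendered before touching them
wait = WebDriverWait(driver, 15)
cards = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "li[data-test='jobListing']")))
print(f"{len(cards)} job cards loaded")

driver.quit()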
dkhandal
Selenium test: a quick 10-15 minute exercise, assuming you have all the necessary tools ready on your computer. Once successful, please share screenshots / the Excel output file for verification. Use Selenium in headless mode and automate scraping the details below from www.bbc.com, scheduling the job to run every 5 minutes. Create an Excel file and write all of the details below into that document. 1) Fetch all links on the home page and store them in the Excel file. 2) Open each link and fetch the title of the topic, as well as the detailed description for each topic. When you have completed the test, please send screenshots. If you are successful up to this point, you are 80% selected; after that we will start assigning projects from our long-term projects.
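A sketch of the exercise as described, covering the link collection and title fetch (headless Chrome plus openpyxl for the Excel file; the plain time.sleep loop, the selectors, and skipping the per-topic detailed description are simplifying assumptions rather than the author's actual expectations):

import time
from openpyxl import Workbook
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

def scrape_bbc(path="bbc.xlsx"):
    options = Options()
    options.add_argument("--headless=new")  # run Chrome without a visible window
    driver = webdriver.Chrome(options=options)
    driver.get("https://www.bbc.com")

    # 1) fetch all links on the home page
    links = [a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")]
    links = [link for link in links if link]

    wb = Workbook()
    ws = wb.active
    ws.append(["link", "title"])

    # 2) open each link and record the topic title
    for link in links:
        driver.get(link)
        ws.append([link, driver.title])

    wb.save(path)
    driver.quit()

if __name__ == "__main__":
    while True:          # re-run the job every 5 minutes
        scrape_bbc()
        time.sleep(5 * 60)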
gcl180
Le Wagon "Let's Code" Batch #6 (Brussels) message wall (18 students), pasted as the repository description. Its main technical content is the day-4 livecode shared by Martin Van Aken, which scrapes an artist's song URLs and lyrics from genius.com with requests and BeautifulSoup:

# LIVECODE DAY 4

# IMPORTS
# Imports for Data Analysis
import requests
from bs4 import BeautifulSoup
# Imports for Web Scraping

# UTILS CODE
# Get parsed HTML from a URL (once it works, make a function of it)
url = "https://genius.com/artists/Eminem"

def get_soup(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    return soup

# FUNCTIONALITIES
# Extract song URLs from the artist's page (once it works, make a function of it)
def get_songs_url(artist_url):
    soup = get_soup(artist_url)
    klass = "mini_card_grid-song"
    element = "div"
    songs_html = soup.find_all(element, class_=klass)
    urls = []
    for song_html in songs_html:
        href = song_html.find("a").attrs["href"]
        urls.append(href)
    return urls

# Extract lyrics from a song's page (once it works, make a function of it)
def get_lyrics(song_url):
    song_soup = get_soup(song_url)
    lyrics_html = song_soup.find("div", class_="lyrics")
    return lyrics_html.text

urls = get_songs_url(url)
all_lyrics = []
for song_url in urls:
    all_lyrics.append(get_lyrics(song_url))
print(all_lyrics)

# Collect the lyrics of all the songs of an artist
def get_artists_lyrics(artist_url):
    return [get_lyrics(song_url) for song_url in get_songs_url(artist_url)]

# With all the lyrics collected, make a dict containing the occurrences of each word

The rest of the wall is batch announcements from Tanguy De Bels and Lucile Trussardi (Feb 13-20 '20): livecode and slide attachments for days 1-4, daily lunch-menu links on lewagon.typeform.com, and proxy/setup helpdesk contacts for the Engie team (gem-is-helpdesk@engie.com, BE: +32-(0)2-518.63.63).
All 19 repositories loaded