Found 245 repositories (showing 30)
Masudbro94
ITNEXT · Kush · Apr 15, 2021 · 7 min read

How you can Control your Android Device with Python

Photo by Caspar Camille Rubin on Unsplash

Introduction

A while back I was thinking of ways to annoy my friends by spamming them with messages for a few minutes, and while doing some research I came across the Android Debug Bridge. In this quick guide I will show you how to interface with it using Python and how to create 2 quick scripts.

The ADB (Android Debug Bridge) is a command-line tool (CLI) which can be used to control and communicate with an Android device. You can do many things such as install apps, debug apps, find hidden features and use a shell to interface with the device directly.

To enable the ADB, your device must first have Developer Options unlocked and USB debugging enabled. To unlock Developer Options, go to your device's settings, scroll down to the About section and find the build number of the software currently on the device. Tap the build number 7 times and Developer Options will be enabled. Then go to the Developer Options panel in the settings and enable USB debugging from there. Now the only other thing you need is a USB cable to connect your device to your computer.

Here is what today's journey will look like:

Installing the requirements
Getting started
The basics of writing scripts
Creating a selfie timer
Creating a definition searcher

Installing the requirements

The first of the 2 things we need to install is the ADB tool on our computer. This comes bundled with Android Studio, so if you already have that, do not worry. Otherwise, you can head over to the official docs; at the top of the page there should be instructions on how to install it.
Once you have installed the ADB tool, you need the Python library which we will use to interface with the ADB and our device. You can install the pure-python-adb library using pip install pure-python-adb.

Optional: to make things easier while developing our scripts, we can install an open-source program called scrcpy which lets us display and control our Android device from our computer with a mouse and keyboard. To install it, head over to the GitHub repo and download the correct version for your operating system (Windows, macOS or Linux). If you are on Windows, extract the zip file into a directory and add that directory to your path, so that you can start the program from anywhere on your system just by typing scrcpy into a terminal window.

Getting started

Now that all the dependencies are installed, we can start up the ADB and connect our device. First, connect your device to your PC with the USB cable; if USB debugging is enabled, a message should pop up asking whether it is okay for your PC to control the device. Simply answer yes. Then, on your PC, open a terminal window and start the ADB server by typing adb start-server. This should print the following messages:

* daemon not running; starting now at tcp:5037
* daemon started successfully

If you also installed scrcpy, you can start it by typing scrcpy into the terminal. This only works if you added it to your path; otherwise, change your terminal directory to where you installed scrcpy and run scrcpy.exe. If everything works out, you should see your device on your PC and be able to control it using your mouse and keyboard.

Now we can create a new Python file and check if we can find our connected device using the library. Here we import the AdbClient class and create a client object using it. Then we can get a list of connected devices.
Lastly, we get the first device out of our list (it is generally the only one there if only one device is connected).

The basics of writing scripts

The main way we are going to interface with our device is the shell; through it we can send commands to simulate a touch at a specific location or a swipe from A to B. To simulate screen touches (taps) we first need to work out how the screen coordinates work. To help with this we can activate the pointer location setting in the developer options. Once activated, wherever you touch the screen, the coordinates of that point appear at the top. The coordinate system works like this (a diagram in the original shows it): the top left corner of the display has the coordinates (0, 0), and the bottom right corner's coordinates are the largest possible values of x and y.

Now that we know how the coordinate system works, we need to check out the different commands we can run. I have made a quick-reference list of commands below:

input tap x y
input text "hello world!"
input keyevent eventID

Here is a list of some common eventIDs:

3: home button
4: back button
5: call
6: end call
24: volume up
25: volume down
26: turn device on or off
27: open camera
64: open browser
66: enter
67: backspace
207: contacts
220: brightness down
221: brightness up
277: cut
278: copy
279: paste

If you want to find more, there is a longer list of them here.

Creating a selfie timer

Now we know what we can do, let's start doing it. In this first example I will show you how to create a quick selfie timer. To get started we need to import our libraries and create a connect function to connect to our device. You can see that the connect function is identical to the previous example of how to connect to your device, except here we return the device and client objects for later use.
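As a minimal sketch of what the article describes (the article's own code listings did not survive scraping), the connect helper plus string builders for the shell commands above might look like this. Only `connect` needs pure-python-adb installed; the `tap`/`type_text`/`keyevent` helper names are my own:

```python
# Sketch of the connect() helper plus string builders for the shell
# commands listed above. The builders just return plain strings that
# you pass to device.shell().

def connect(host="127.0.0.1", port=5037):
    """Return (device, client) for the first attached device."""
    # Imported here so the command builders below work without ppadb installed.
    from ppadb.client import Client as AdbClient
    client = AdbClient(host=host, port=port)   # talks to the local ADB server
    devices = client.devices()
    if not devices:
        raise RuntimeError("No devices attached - is USB debugging enabled?")
    return devices[0], client

def tap(x, y):
    return f"input tap {x} {y}"

def type_text(text):
    # `input text` cannot take raw spaces; ADB uses %s as the space escape.
    return "input text " + text.replace(" ", "%s")

def keyevent(event_id):
    return f"input keyevent {event_id}"   # e.g. 66 = enter, 27 = camera
```

With these, a selfie timer reduces to a few lines: send `keyevent(27)` to open the camera, `time.sleep(5)`, then trigger the shutter (on many phones the camera keyevent again, or volume up, takes the photo; this varies by device).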
In our main code, we can call the connect function to retrieve the device and client objects. From there we can open the camera app, wait 5 seconds and take a photo. It's really that simple! As I said before, this simply replicates what you would usually do, so it is best to perform the steps manually first and write them down.

Creating a definition searcher

We can do something a bit more complex now: ask the browser to find the definition of a particular word and take a screenshot to save on our computer. The basic flow of this program is:

1. Open the browser
2. Click the search bar
3. Enter the search query
4. Wait a few seconds
5. Take a screenshot and save it

But before we get started, you need to find the coordinates of the search bar in your default browser; you can use the method I suggested earlier to find them easily. For me they were (440, 200).

To start, we import the same libraries as before, and we keep the same connect function. In our main function we call connect, and assign a variable to the x and y coordinates of our search bar. Notice how this is a string and not a list or tuple, so that we can easily incorporate the coordinates into our shell command. We can also take an input from the user for the word they want the definition of. We will embed that query in a full sentence which will then be searched, so that we always get the definition. After that we can open the browser and input our search query into the search bar. Here we use the eventID 66 to simulate pressing the enter key to execute our search. If you want, you can change the wait timings per your needs.
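Putting the numbered steps above together, the definition searcher might be sketched like this (a reconstruction, since the original listings were lost in scraping: the (440, 200) coordinates and wait times come from the walkthrough and are device-specific, and the function names are my own):

```python
# Sketch of the definition-searcher flow: open browser, tap search bar,
# type the query, press enter, wait, then screenshot via screencap().
import time

SEARCH_BAR = "440 200"   # "x y" of the search bar, found via pointer location

def build_steps(word):
    """Return the ordered adb shell commands for one definition search."""
    query = f"definition of {word}".replace(" ", "%s")  # %s escapes spaces
    return [
        "input keyevent 64",        # 64: open the default browser
        f"input tap {SEARCH_BAR}",  # focus the search bar
        f"input text {query}",      # type the search query
        "input keyevent 66",        # 66: press enter to run the search
    ]

def search_and_screenshot(device, word, out_path="definition.png"):
    for cmd in build_steps(word):
        device.shell(cmd)
        time.sleep(1)               # give the UI a moment between steps
    time.sleep(3)                   # wait for the results page to load
    image = device.screencap()      # screencap returns the image as bytes
    with open(out_path, "wb") as f: # write-bytes mode, since we have bytes
        f.write(image)
```

`device` here is the object returned by the connect function described earlier.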
Lastly, we will take a screenshot using the screencap method on our device object, and we can save that as a .png file. Here we must open the file in write-bytes mode because the screencap method returns bytes representing the image. If all went according to plan, you should have a quick script which searches for a specific word. Here it is working on my phone: (A GIF showing the definition searcher example working on my phone.)

Final thoughts

Hopefully you have learned something new today; personally, I never even knew this was a thing before I did some research into it. The cool thing is that you can do anything you would normally be able to do, and more, since it just simulates your own touches and actions! I hope you enjoyed the article and thank you for reading!
talsraviv
Vibe code functional prototypes with minimal babysitting using any AI coding agent.
nimin1
System Design explained from first principles to senior engineer-level thinking, with a focus on AI-assisted (vibecoding) development. Learn how to reason about systems, understand trade-offs, and validate AI-generated designs beyond just working code.
Tinkprocodes
This repo is a fork of the main repo and will usually ship new features faster than the main repo (and maybe bundle some bugs, too).

# Unofficial Facebook Chat API

<img alt="version" src="https://img.shields.io/github/package-json/v/ProCoderMew/fca-unofficial?label=github&style=flat-square">

Facebook now has an official API for chat bots [here](https://developers.facebook.com/docs/messenger-platform). This API is the only way to automate chat functionality on a user account. We do this by emulating the browser: making the exact same GET/POST requests and tricking Facebook into thinking we're accessing the website normally. Because of this, the API won't work with an auth token but requires the credentials of a Facebook account.

_Disclaimer_: We are not responsible if your account gets banned for spammy activities such as sending lots of messages to people you don't know, sending messages very quickly, sending spammy looking URLs, or logging in and out very quickly. Be responsible Facebook citizens.

See [below](#projects-using-this-api) for projects using this API.

## Install

If you just want to use fca-unofficial, use this command:

```bash
npm install procodermew/fca-unofficial
```

This installs `fca-unofficial` from the ProCoderMew GitHub repository.

## Testing your bots

If you want to test your bots without creating another account on Facebook, you can use [Facebook Whitehat Accounts](https://www.facebook.com/whitehat/accounts/).

## Example Usage

```javascript
const login = require("fca-unofficial");

// Create simple echo bot
login({email: "FB_EMAIL", password: "FB_PASSWORD"}, (err, api) => {
    if(err) return console.error(err);

    api.listen((err, message) => {
        api.sendMessage(message.body, message.threadID);
    });
});
```

Result:

<img width="517" alt="screen shot 2016-11-04 at 14 36 00" src="https://cloud.githubusercontent.com/assets/4534692/20023545/f8c24130-a29d-11e6-9ef7-47568bdbc1f2.png">

## Documentation

You can see it [here](DOCS.md).
## Main Functionality

### Sending a message

#### api.sendMessage(message, threadID[, callback][, messageID])

Various types of messages can be sent:

* *Regular:* set field `body` to the desired message as a string.
* *Sticker:* set field `sticker` to the desired sticker ID.
* *File or image:* set field `attachment` to a readable stream or an array of readable streams.
* *URL:* set field `url` to the desired URL.
* *Emoji:* set field `emoji` to the desired emoji as a string and set field `emojiSize` to the size of the emoji (`small`, `medium`, `large`).

Note that a message can only be a regular message (which can be empty) and optionally one of the following: a sticker, an attachment or a url.

__Tip__: to find your own ID, you can look inside the cookies. The `userID` is under the name `c_user`.

__Example (Basic Message)__

```js
const login = require("fca-unofficial");

login({email: "FB_EMAIL", password: "FB_PASSWORD"}, (err, api) => {
    if(err) return console.error(err);

    var yourID = "000000000000000";
    var msg = "Hey!";
    api.sendMessage(msg, yourID);
});
```

__Example (File upload)__

```js
const fs = require("fs"); // needed for fs.createReadStream below
const login = require("fca-unofficial");

login({email: "FB_EMAIL", password: "FB_PASSWORD"}, (err, api) => {
    if(err) return console.error(err);

    // Note this example uploads an image called image.jpg
    var yourID = "000000000000000";
    var msg = {
        body: "Hey!",
        attachment: fs.createReadStream(__dirname + '/image.jpg')
    };
    api.sendMessage(msg, yourID);
});
```

------------------------------------

### Saving session

To avoid logging in every time, you should save the AppState (cookies etc.) to a file; then you can use it without having your password in your scripts.
__Example__

```js
const fs = require("fs");
const login = require("fca-unofficial");

var credentials = {email: "FB_EMAIL", password: "FB_PASSWORD"};

login(credentials, (err, api) => {
    if(err) return console.error(err);

    fs.writeFileSync('appstate.json', JSON.stringify(api.getAppState()));
});
```

Alternative: use [c3c-fbstate](https://github.com/c3cbot/c3c-fbstate) to get fbstate.json (appstate.json).

------------------------------------

### Listening to a chat

#### api.listen(callback)

Listen watches for messages sent in a chat. By default this won't receive events (joining/leaving a chat, title change etc.) but that can be activated with `api.setOptions({listenEvents: true})`. It will also, by default, ignore messages sent by the current account; you can enable listening to your own messages with `api.setOptions({selfListen: true})`.

__Example__

```js
const fs = require("fs");
const login = require("fca-unofficial");

// Simple echo bot. It will repeat everything that you say.
// Will stop when you say '/stop'.
login({appState: JSON.parse(fs.readFileSync('appstate.json', 'utf8'))}, (err, api) => {
    if(err) return console.error(err);

    api.setOptions({listenEvents: true});

    var stopListening = api.listenMqtt((err, event) => {
        if(err) return console.error(err);

        api.markAsRead(event.threadID, (err) => {
            if(err) console.error(err);
        });

        switch(event.type) {
            case "message":
                if(event.body === '/stop') {
                    api.sendMessage("Goodbye…", event.threadID);
                    return stopListening();
                }
                api.sendMessage("TEST BOT: " + event.body, event.threadID);
                break;
            case "event":
                console.log(event);
                break;
        }
    });
});
```

## FAQS

1. How do I run tests?

> For tests, create a `test-config.json` file that resembles `example-config.json` and put it in the `test` directory. From the root directory, run `npm test`.

2. Why doesn't `sendMessage` always work when I'm logged in as a page?

> Pages can't start conversations with users directly; this is to prevent pages from spamming users.

3. What do I do when `login` doesn't work?

> First check that you can log in to Facebook using the website. If login approvals are enabled, you might be logging in incorrectly. For how to handle login approvals, read our docs on [`login`](DOCS.md#login).

4. How can I avoid logging in every time? Can I log into a previous session?

> We support caching everything relevant for you to bypass login. `api.getAppState()` returns an object that you can save and pass into login as `{appState: mySavedAppState}` instead of the credentials object. If this fails, your session has expired.

5. Do you support sending messages as a page?

> Yes, set the pageID option on login (this doesn't work if you set it using api.setOptions, as it affects the login process).
> ```js
> login(credentials, {pageID: "000000000000000"}, (err, api) => { … }
> ```

6. I'm getting some crazy weird syntax error like `SyntaxError: Unexpected token [`!!!

> Please try to update your version of node.js before submitting an issue of this nature. We like to use new language features.

7. I don't want all of these logging messages!

> You can use `api.setOptions` to silence the logging. You get the `api` object from `login` (see example above). Do
> ```js
> api.setOptions({
>     logLevel: "silent"
> });
> ```

<a name="projects-using-this-api"></a>

## Projects using this API:

- [c3c](https://github.com/lequanglam/c3c) - A bot that can be customized using plugins. Supports Facebook & Discord.
- [Miraiv2](https://github.com/miraiPr0ject/miraiv2) - A simple Facebook Messenger bot made by CatalizCS and SpermLord.
## Projects using this API (original repository, facebook-chat-api):

- [Messer](https://github.com/mjkaufer/Messer) - Command-line messaging for Facebook Messenger
- [messen](https://github.com/tomquirk/messen) - Rapidly build Facebook Messenger apps in Node.js
- [Concierge](https://github.com/concierge/Concierge) - Concierge is a highly modular, easily extensible general purpose chat bot with a built in package manager
- [Marc Zuckerbot](https://github.com/bsansouci/marc-zuckerbot) - Facebook chat bot
- [Marc Thuckerbot](https://github.com/bsansouci/lisp-bot) - Programmable lisp bot
- [MarkovsInequality](https://github.com/logicx24/MarkovsInequality) - Extensible chat bot adding useful functions to Facebook Messenger
- [AllanBot](https://github.com/AllanWang/AllanBot-Public) - Extensive module that combines the facebook api with firebase to create numerous functions; no coding experience is required to implement this.
- [Larry Pudding Dog Bot](https://github.com/Larry850806/facebook-chat-bot) - A facebook bot you can easily customize the response
- [fbash](https://github.com/avikj/fbash) - Run commands on your computer's terminal over Facebook Messenger
- [Klink](https://github.com/KeNt178/klink) - This Chrome extension will 1-click share the link of your active tab over Facebook Messenger
- [Botyo](https://github.com/ivkos/botyo) - Modular bot designed for group chat rooms on Facebook
- [matrix-puppet-facebook](https://github.com/matrix-hacks/matrix-puppet-facebook) - A facebook bridge for [matrix](https://matrix.org)
- [facebot](https://github.com/Weetbix/facebot) - A facebook bridge for Slack.
- [Botium](https://github.com/codeforequity-at/botium-core) - The Selenium for Chatbots
- [Messenger-CLI](https://github.com/AstroCB/Messenger-CLI) - A command-line interface for sending and receiving messages through Facebook Messenger.
- [AssumeZero-Bot](https://github.com/AstroCB/AssumeZero-Bot) – A highly customizable Facebook Messenger bot for group chats.
- [Miscord](https://github.com/Bjornskjald/miscord) - An easy-to-use Facebook bridge for Discord.
- [chat-bridge](https://github.com/rexx0520/chat-bridge) - A Messenger, Telegram and IRC chat bridge.
- [messenger-auto-reply](https://gitlab.com/theSander/messenger-auto-reply) - An auto-reply service for Messenger.
- [BotCore](https://github.com/AstroCB/BotCore) – A collection of tools for writing and managing Facebook Messenger bots.
- [mnotify](https://github.com/AstroCB/mnotify) – A command-line utility for sending alerts and notifications through Facebook Messenger.
SansaTechnologies
F1 pit strategy optimization challenge - Reverse-engineer the race simulation algorithm from 30,000 historical races. A language-agnostic coding assessment designed to evaluate algorithmic thinking and data analysis skills.
OmarHammemi
The Myers-Briggs Type Indicator (or MBTI for short) is a personality type system that divides everyone into 16 distinct personality types across 4 axes:

Introversion (I) – Extroversion (E)
Intuition (N) – Sensing (S)
Thinking (T) – Feeling (F)
Judging (J) – Perceiving (P)

(More can be learned about what these mean here.)

So, for example, someone who prefers introversion, intuition, thinking and perceiving would be labelled an INTP in the MBTI system, and there are lots of personality-based components that would model or describe this person's preferences or behaviour based on the label.

It is one of, if not the, most popular personality tests in the world. It is used in businesses, online, for fun, for research and lots more; a simple Google search reveals all the different ways the test has been used over time. It's safe to say that this test is still very relevant in terms of its use.

From a scientific or psychological perspective, it is based on Carl Jung's work on cognitive functions, i.e. Jungian typology: a model of 8 distinct functions, thought processes or ways of thinking suggested to be present in the mind. This work was later transformed into several different personality systems to make it more accessible, the most popular of which is of course the MBTI.

Recently, its use and validity have come into question because of unreliability in experiments surrounding it, among other reasons. But it is still regarded as a very useful tool in a lot of areas, and the purpose of this dataset is to help see whether any patterns can be detected in specific types and their style of writing, which overall explores the validity of the test in analysing, predicting or categorising behaviour.
Content

This dataset contains over 8600 rows of data. Each row holds one person's:

Type (this person's 4-letter MBTI code/type)
A section of each of the last 50 things they have posted (entries separated by "|||", 3 pipe characters)

Acknowledgements

This data was collected through the PersonalityCafe forum, as it provides a large selection of people along with their MBTI personality type and what they have written.

Inspiration

Some basic uses could include:

Using machine learning to evaluate the MBTI's validity and its ability to predict language styles and behaviour online.
Producing a machine learning algorithm that can attempt to determine a person's personality type based on some text they have written.
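As a minimal sketch of working with the "|||"-separated layout described above (assuming the usual CSV columns `type` and `posts`; the exact column names and filename are assumptions, not stated in the description):

```python
# Sketch: load the MBTI dataset and split each person's 50 posts.
import csv

def split_posts(cell):
    """The posts are joined with '|||' (3 pipe characters)."""
    return cell.split("|||")

def load_mbti(path):
    """Yield (mbti_type, list_of_posts) for each person in the CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield row["type"], split_posts(row["posts"])
```

The resulting (type, posts) pairs can then be fed into any text-classification pipeline.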
keeratsingh
This C# UWP solution demonstrates retrieving the image URL by parsing the JSON string returned by the Bing Image of the Day API. Each line of code is commented in detail to explain the thinking behind the method used and to aid understanding.
webf-zone
When programming user interfaces, one primary expectation of the code is that it describes the connections between interaction points. UiBase aims to be a complete front-end framework with this thinking at its core. Along with the component-driven approach, its other highlights are fairly typical of today's frameworks.
RecruiterRon
David Aplin Group, one of Canada's Best Managed Companies, has partnered with our client to recruit Junior Software Developers. New graduates or soon-to-graduate students are encouraged to apply! Our client is looking for Junior Software Developers to join their growing team. This position is responsible for the development, evaluation, implementation, and maintenance of new software solutions, as well as the maintenance and development of existing applications. Applications involve data collection, data storage, machine learning, and data visualization.

The Role: Designing, coding, and debugging software applications using front-end frameworks and enterprise applications - front-end, back-end, and full-stack development. Performing software analysis, code analysis, requirements analysis, software reviews, identification of code metrics, system risk analysis, and software reliability analysis. Providing assistance with installations, system configuration, and third-party system integrations. Providing team members and clients with support and guidance.

The Ideal Candidate: A Bachelor's degree or Diploma in Computer Science, Computer Engineering, Information Technology, or a similar field. Experience working with C#, JavaScript, Angular, React, Python, PHP, jQuery, JSON, and Ajax. Solid understanding of web design and development principles. Good planning, analytical, and decision-making skills. A portfolio of web design, applications, and projects you have worked on, including projects published on GitHub. Critical-thinking skills. In-depth knowledge of software prototyping and UX design tools. High personal code/development standards (peer testing, unit testing, documentation, etc.). Team spirit and a sense of humour are always great. Goal-oriented and deadline-driven.

COVID-19 considerations: All employees are currently working from home. Any equipment or materials required for work will be provided by the company via shipment to the employee's home.
Company policy will continue to evolve through the COVID-19 pandemic and implement alternative working arrangements to ensure that all our people stay safe. If you are interested in this position and meet the above criteria, please send your resume in confidence directly to Jim Juacalla or Ron Cantiveros at Aplin Information Technology, A Division of David Aplin Group. We thank all applicants; however, only those selected for an interview will be contacted. Apply: https://jobs.aplin.com/job/409253/Junior-Software-Developers-New-Graduates
ToposInstitute
This repository contains all the code presented in the online book "Relational Thinking - from Abstractions to Applications".
armankarimpour
Welcome to Hyper Bot!

Create your own permanent Hyper Bot ( runs on Heroku, no Lc0 ). If you want to create your own permanent bot, do the following:

1. Sign up to GitHub https://github.com/join , if you have not already.
2. With your GitHub account visit https://github.com/hyperchessbot/hyperbot , then click on Fork.
3. Create a BOT account if you do not already have one. To create one, use an account that has not played any games yet: log into this account, then visit https://hypereasy.herokuapp.com/auth/lichess/bot , approve oauth, and on the page you are taken to click on 'Request upgrade to bot'.
4. Create an API access token with your BOT account at https://lichess.org/account/oauth/token ( it should have scopes Read incoming challenges / Create, accept, decline challenges / Play games with the bot API ).
5. Sign up to Heroku https://signup.heroku.com/ , if you have not already.
6. At Heroku create a new app using New / Create new app. Choose Europe for region.
7. In the app's dashboard go to the Deploy tab. Use the GitHub button to connect the app to your forked repo. Press Search to find your repositories, then select hyperbot. You need to deploy the master branch. Enable Automatic Deploys and press Deploy Branch for the initial deploy. Wait for the build to finish.
8. In Heroku Settings / Reveal Config Vars create a new variable TOKEN and set its value to your newly created access token, then create a new variable BOT_NAME and set its value to your bot's lichess username.

For more detailed instructions and screenshots on setting up your Heroku app refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Creating-and-configuring-your-app-on-Heroku#creating-and-configuring-your-app-on-heroku .

Congratulations, you have an up and running lichess bot. If you want to use 3-4-5 piece tablebases on Heroku, refer to this guide https://github.com/hyperchessbot/hyperbot/wiki/Update-Heroku-app-to-latest-version-using-Gitpod#enabling-syzygy-tablebases .
Upgrade to bot and play games in your browser

To upgrade an account that has played no games yet to bot, and to make this bot accept challenges and play games in your browser, visit https://hypereasy.herokuapp.com . For detailed instructions see https://lichess.org/forum/off-topic-discussion/hyper-easy-all-variants-lichess-bot-running-in-your-browser#1 .

Update Heroku app to latest version using Gitpod

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Update-Heroku-app-to-latest-version-using-Gitpod#update-heroku-app-to-latest-version-using-gitpod .

Creating a MongoDb account

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Creating-a-MongoDb-account#creating-a-mongodb-account .

Build external multi game PGN file with MongoDb book builder ( version 2 )

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Build-book-from-external-multi-game-PGN-file#build-book-from-external-multi-game-pgn-file .

Install bot on Windows ( runs Lc0 )

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Install-bot-on-Windows-(-runs-Lc0-)#install-bot-on-windows--runs-lc0- .

Install bot on goorm.io ( runs Lc0 )

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Install-bot-on-goorm.io-(-runs-Lc0-)#install-bot-on-goormio--runs-lc0- .

Download a net for Lc0

Download a net from https://lczero.org/dev/wiki/best-nets-for-lc0 . Rename the weights file 'weights.pb.gz', then copy it to the 'lc0goorm' folder. Overwrite the old file.

Update to latest version on Windows / goorm

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Update-to-latest-version-on-Windows-or-goorm#update-to-latest-version-on-windows--goorm .

Explanation of files

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Explanation-of-files#git .

Contribute to code

Refer to this Wiki https://github.com/hyperchessbot/hyperbot/wiki/Contribute-to-code#contribute-to-code .
Discussion / Feedback

Discuss Hyper Bot on Discord https://discord.gg/8m3Muay . Post issues on GitHub https://github.com/hyperchessbot/hyperbot/issues .

Getting assistance in lichess PM

You can seek assistance in lichess PM using your BOT account. Open an issue at https://github.com/hyperchessbot/hyperbot/issues with the GitHub account with which you forked Hyper Bot, with the title 'Identifying lichess account'. Give a link to your lichess account in the issue. After identification you can PM https://lichess.org/@/hyperchessbotauthor . Seeking assistance in lichess PM without verifying your lichess account with your GitHub account may get you blocked. The block may be lifted once you identify your lichess account with your GitHub account.

Config vars

KEEP_ALIVE_URL : set this to the full link of your bot home page ( https://[yourappname].herokuapp.com , where you change [yourappname] to your Heroku app name ) if you want your bot to be kept alive from early morning till late night Heroku server time. Keeping a free Heroku bot alive 24/7 is not possible, because a free Heroku account has a monthly quota of 550 hours.

ALWAYS_ON : requires a paid Heroku account. Set it to 'true' to keep the bot alive 24/7. You have to set KEEP_ALIVE_URL to your bot's full home page link for ALWAYS_ON to work ( see also the explanation of the KEEP_ALIVE_URL config var ).

ALLOW_CORRESPONDENCE : set it to 'true' to allow playing correspondence and infinite time control games.

CORRESPONDENCE_THINKING_TIME : think in correspondence as if the bot had that many seconds left on its clock ( default : 120 ). The actual thinking time will be decided by the engine.

MONGODB_URI : connect URI of your MongoDb admin user ( only the host, no slash after the host, no database specified, no query string ). If defined, your latest games or games downloaded from an url ( version 2 only ) will be added to the database on every startup. By default this config var is not defined.

USE_MONGO_BOOK : set it to 'true' to
use the MongoDb book specified by MONGODB_URI DISABLE_ENGINE_FOR_MONGO : set it to 'true' to disable using engine completely when a MongoDb book move is available ( by default the bot may ignore a MongoDb book move at its discretion and use the engine instead for better performance and to allow for more varied play ) MONGO_VERSION : MongoDb book builder version, possible values are 1 ( default, builds a book from bot games as downloaded from lichess as JSON ), 2 ( builds a book from bot games as downloaded from lichess as PGN, or from an arbitrary url specified in PGN_URL ) PGN_URL : url for downloading a multi game PGN file for MongoDb book builder ( version 2 only ) MAX_GAMES : maximum number of games to be built by MongoDb book builder GENERAL_TIMEOUT : timeout for event streams in seconds ( default : 15 ) ENGINE_THREADS : engine Threads uci option ( default : 1 ) ENGINE_HASH : engine Hash uci option in megabytes ( default : 16 ) ENGINE_CONTEMPT : engine Contempt uci option in centipawns ( default : 24 ) ENGINE_MOVE_OVERHEAD : engine Move Overhead uci option in milliseconds ( default : 500 ) ALLOW_PONDER : set it to 'true' to make the bot think on opponent time BOOK_DEPTH : up to how many plies into the game should the bot use the book, choosing too high book depth is running the risk of playing unsound moves ( default : 20 ) BOOK_SPREAD : select the move from that many of the top book moves, choosing to high book spread is running the risk of playing unsound moves ( default : 4 ) BOOK_RATINGS : comma separated list of allowed book rating brackets, possible ratings are 1600, 1800, 2000, 2200, 2500 ( default : '2200,2500') BOOK_SPEEDS : comma separated list of allowed book speeds, possible speeds are bullet, blitz, rapid, classical ( default : 'blitz,rapid' ) LOG_API : set it to 'true' to allow more verbose logging, logs are available in the Inspection / Console of the browser USE_SCALACHESS : set it to 'true' to use scalachess library and multi variant engine 
ACCEPT_VARIANTS : space separated list of variant keys to accept ( default : 'standard' ), for non standard variants USE_SCALACHESS has to be set to 'true' , example : 'standard crazyhouse chess960 kingOfTheHill threeCheck antichess atomic horde racingKings fromPosition' ACCEPT_SPEEDS : space separated list of speeds to accept ( default : 'bullet blitz rapid classical' ), to allow correspondence set ALLOW_CORRESPONDENCE to 'true' DISABLE_RATED : set it to 'true' to reject rated challenges DISABLE_CASUAL : set it to 'true' to reject casual challenges DISABLE_BOT : set it to 'true' to reject bot challenges DISABLE_HUMAN : set it to 'true' to reject human challenges GAME_START_DELAY : delay between accepting challenge and starting to play game in seconds ( default : 2 ) CHALLENGE_INTERVAL : delay between auto challenge attempts in minutes ( default : 30 ) CHALLENGE_TIMEOUT : start attempting auto challenges after being idle for that many minutes ( default : 60 ) USE_NNUE : space separated list of variant keys for which to use NNUE ( default: 'standard chess960 fromPosition' ) USE_LC0 : set it to 'true' to use Lc0 engine, only works with Windows and goorm installation, on Heroku and Gitpod you should not use it or set it to false USE_POLYGLOT : set it to 'true' to use polyglot opening book WELCOME_MESSAGE : game chat welcome message ( delay from game start : 2 seconds , default : 'coded by @hyperchessbotauthor' ) GOOD_LUCK_MESSAGE : game chat good luck message ( delay from game start : 4 seconds , default : 'Good luck !' ) GOOD_GAME_MESSAGE : game chat good game message ( delay from game end : 2 seconds , default : 'Good game !' 
) DISABLE_SYZYGY : set it to 'true' to disable using syzygy tablebases, note that syzygy tablebases are always disabled when USE_LC0 is set to 'true', syzygy tablebases are only installed for deployment on Heroku APP_NAME : Heroku app name ( necessary for interactive viewing of MongoDb book ) ABORT_AFTER : abort game after that many seconds if the opponent fails to make their opening move ( default : 120 ) DECLINE_HARD : set it to 'true' to explicitly decline unwanted challenges ( by default they are only ignored and can be accepted manually )
65ping
A Claude Code skill that guides any team through Design Thinking, from user research to shipped solution. Includes phase-by-phase methods, templates, facilitation scripts, and role-specific guidance for designers, founders, and business leaders.
melek90
Hello everybody, my name is Malek and I'm a junior computer engineering student. I'm trying to improve my coding skills, so I decided to start a GitHub account and upload every project I work on. That way I can learn from you all and improve my skills and my way of thinking when solving problems.
Kwamb0
Part I - WeatherPy
In this example, you'll be creating a Python script to visualize the weather of 500+ cities across the world at varying distances from the equator. To accomplish this, you'll be utilizing a simple Python library, the OpenWeatherMap API, and a little common sense to create a representative model of weather across world cities.
Your first objective is to build a series of scatter plots to showcase the following relationships:
- Temperature (F) vs. Latitude
- Humidity (%) vs. Latitude
- Cloudiness (%) vs. Latitude
- Wind Speed (mph) vs. Latitude
After each plot, add a sentence or two explaining what the code is doing and analyzing the result.
Your next objective is to run linear regression on each relationship, only this time separating them into Northern Hemisphere (greater than or equal to 0 degrees latitude) and Southern Hemisphere (less than 0 degrees latitude):
- Northern Hemisphere - Temperature (F) vs. Latitude
- Southern Hemisphere - Temperature (F) vs. Latitude
- Northern Hemisphere - Humidity (%) vs. Latitude
- Southern Hemisphere - Humidity (%) vs. Latitude
- Northern Hemisphere - Cloudiness (%) vs. Latitude
- Southern Hemisphere - Cloudiness (%) vs. Latitude
- Northern Hemisphere - Wind Speed (mph) vs. Latitude
- Southern Hemisphere - Wind Speed (mph) vs. Latitude
After each pair of plots, explain what the linear regression is modelling, note any relationships you see, and add any other analysis you may have.
Your final notebook must:
- Randomly select at least 500 unique (non-repeat) cities based on latitude and longitude.
- Perform a weather check on each of the cities using a series of successive API calls.
- Include a print log of each city as it's being processed, with the city number and city name.
- Save a CSV of all retrieved data and a PNG image for each scatter plot.
Part II - VacationPy
Now let's use your skills in working with weather data to plan future vacations. Use jupyter-gmaps and the Google Places API for this part of the assignment.
Note: if you're having trouble displaying the maps, try running jupyter nbextension enable --py gmaps in your environment and retry.
Create a heat map that displays the humidity for every city from Part I of the homework. (heatmap image)
Narrow down the DataFrame to find your ideal weather conditions. For example:
- A max temperature lower than 80 degrees but higher than 70.
- Wind speed less than 10 mph.
- Zero cloudiness.
Drop any rows that don't meet all three conditions. You want to be sure the weather is ideal.
Note: Feel free to adjust to your own specifications, but be sure to limit the number of rows returned by your API requests to a reasonable number.
Use the Google Places API to find the first hotel for each city located within 5000 meters of your coordinates.
Plot the hotels on top of the humidity heatmap, with each pin containing the Hotel Name, City, and Country. (hotel map image)
As final considerations:
- Create a new GitHub repository for this project called API-Challenge (note the kebab-case). Do not add to an existing repo.
- You must complete your analysis using a Jupyter notebook.
- You must use the Matplotlib or Pandas plotting libraries.
- For Part I, you must include a written description of three observable trends based on the data.
- You must use proper labeling of your plots, including aspects like plot titles (with date of analysis) and axes labels.
- For max intensity in the heat map, try setting it to the highest humidity found in the data set.
Hints and Considerations
The city data you generate is based on random coordinates as well as different query times; as such, your outputs will not be an exact match to the provided starter notebook.
You may want to start this assignment by refreshing yourself on the geographic coordinate system.
Next, spend the requisite time necessary to study the OpenWeatherMap API. Based on your initial study, you should be able to answer basic questions about the API: Where do you request the API key?
Which Weather API in particular will you need? What URL endpoints does it expect? What JSON structure does it respond with? Before you write a line of code, you should be aiming to have a crystal clear understanding of your intended outcome.
Starter code for citipy has been provided. However, if you're craving an extra challenge, push yourself to learn how it works: the citipy Python library. Before you try to incorporate the library into your analysis, start by creating simple test cases outside your main script to confirm that you are using it correctly. Too often, when introduced to a new library, students get bogged down by the most minor of errors – spending hours investigating their entire code – when, in fact, a simple and focused test would have shown that their basic use of the library was wrong from the start. Don't let this be you!
Part of our expectation in this challenge is that you will use critical thinking skills to understand how and why we're recommending the tools we are. What is citipy for? Why would you use it in conjunction with the OpenWeatherMap API? How would you do so?
In building your script, pay attention to the cities you are using in your query pool. Are you getting coverage of the full gamut of latitudes and longitudes? Or are you simply choosing 500 cities concentrated in one region of the world? Even if you were a geographic genius, simply rattling off 500 cities based on your own selection would create a biased dataset. Think about how you should counter this. (Hint: Consider the full range of latitudes.)
Once you have computed the linear regression for one chart, the process will be similar for all the others. As a bonus, try to create a function that will create these charts based on different parameters.
Remember that each coordinate will trigger a separate call to the Google API. If you're creating your own criteria to plan your vacation, try to reduce the results in your DataFrame to 10 or fewer cities.
Lastly, remember – this is a challenging activity. Push yourself! If you complete this task, you can safely say that you've gained strong mastery of the core foundations of data analytics, and it will only get better from here. Good luck!
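The bonus suggestion above (one function that produces each regression chart from different parameters) can be sketched roughly as follows. This is a minimal sketch, not part of the starter notebook: the function name and signature are my own, and scipy.stats.linregress would work equally well in place of numpy.polyfit.

```python
import numpy as np

def hemisphere_regression(x, y, ax=None, label_x="Latitude", label_y="value"):
    """Fit y = slope * x + intercept by least squares; optionally draw the
    scatter plot and fit line on a matplotlib Axes passed in by the caller."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # A degree-1 polynomial fit is exactly a linear regression; polyfit
    # returns coefficients from highest degree down, so slope comes first.
    slope, intercept = np.polyfit(x, y, 1)
    if ax is not None:
        ax.scatter(x, y)
        ax.plot(x, slope * x + intercept, color="red",
                label=f"y = {slope:.2f}x + {intercept:.2f}")
        ax.set_xlabel(label_x)
        ax.set_ylabel(label_y)
        ax.legend()
    return slope, intercept
```

You would split the DataFrame with something like df[df["Lat"] >= 0] (column name assumed) and call the helper once per hemisphere and weather variable, passing a fresh Axes each time.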
armankarimpour
Welcome to Hyper Bot!
Create your own permanent Hyper Bot (runs on Heroku, no Lc0)
If you want to create your own permanent bot, do the following:
1. Sign up to GitHub https://github.com/join , if you have not already.
2. With your GitHub account visit https://github.com/hyperchessbot/hyperbot , then click on Fork.
3. Create a BOT account if you do not already have one. To create one, use an account that has not played any games yet, log into this account, then visit https://hypereasy.herokuapp.com/auth/lichess/bot , approve oauth, and then on the page you are taken to click on 'Request upgrade to bot'.
4. Create an API access token with your BOT account at https://lichess.org/account/oauth/token ( it should have the scopes Read incoming challenges / Create, accept, decline challenges / Play games with the bot API ).
5. Sign up to Heroku https://signup.heroku.com/ , if you have not already.
6. At Heroku create a new app using New / Create new app. Choose Europe for the region.
7. In the app's dashboard go to the Deploy tab. Use the GitHub button to connect the app to your forked repo. Press Search to find your repositories, then select hyperbot. You need to deploy the master branch. Enable Automatic Deploys and press Deploy Branch for the initial deploy. Wait for the build to finish.
8. In Heroku Settings / Reveal Config Vars create a new variable TOKEN and set its value to your newly created access token, then create a new variable BOT_NAME and set its value to your bot's lichess username.
For more detailed instructions and screenshots on setting up your Heroku app, refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Creating-and-configuring-your-app-on-Heroku#creating-and-configuring-your-app-on-heroku .
Congratulations, you have an up and running lichess bot. If you want to use 3-4-5 piece tablebases on Heroku, refer to this guide: https://github.com/hyperchessbot/hyperbot/wiki/Update-Heroku-app-to-latest-version-using-Gitpod#enabling-syzygy-tablebases .
Upgrade to bot and play games in your browser
To upgrade an account that has played no games yet to bot, and to make this bot accept challenges and play games in your browser, visit https://hypereasy.herokuapp.com . For detailed instructions see https://lichess.org/forum/off-topic-discussion/hyper-easy-all-variants-lichess-bot-running-in-your-browser#1 .
Update Heroku app to latest version using Gitpod
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Update-Heroku-app-to-latest-version-using-Gitpod#update-heroku-app-to-latest-version-using-gitpod .
Creating a MongoDb account
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Creating-a-MongoDb-account#creating-a-mongodb-account .
Build book from external multi game PGN file with MongoDb book builder (version 2)
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Build-book-from-external-multi-game-PGN-file#build-book-from-external-multi-game-pgn-file .
Install bot on Windows (runs Lc0)
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Install-bot-on-Windows-(-runs-Lc0-)#install-bot-on-windows--runs-lc0- .
Install bot on goorm.io (runs Lc0)
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Install-bot-on-goorm.io-(-runs-Lc0-)#install-bot-on-goormio--runs-lc0- .
Download a net for Lc0
Download a net from https://lczero.org/dev/wiki/best-nets-for-lc0 . Rename the weights file 'weights.pb.gz', then copy it to the 'lc0goorm' folder. Overwrite the old file.
Update to latest version on Windows / goorm
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Update-to-latest-version-on-Windows-or-goorm#update-to-latest-version-on-windows--goorm .
Explanation of files
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Explanation-of-files#git .
Contribute to code
Refer to this Wiki: https://github.com/hyperchessbot/hyperbot/wiki/Contribute-to-code#contribute-to-code .
Discussion / Feedback
Discuss Hyper Bot on Discord https://discord.gg/8m3Muay . Post issues on GitHub https://github.com/hyperchessbot/hyperbot/issues .
Getting assistance in lichess PM
You can seek assistance in lichess PM using your BOT account. Open an issue at https://github.com/hyperchessbot/hyperbot/issues with the GitHub account with which you forked Hyper Bot, with the title 'Identifying lichess account'. Give a link to your lichess account in the issue. After identification you can PM https://lichess.org/@/hyperchessbotauthor . Seeking assistance in lichess PM without verifying your lichess account with your GitHub account may get you blocked. The block may be lifted once you identify your lichess account with your GitHub account.
Config vars
KEEP_ALIVE_URL : set this to the full link of your bot home page ( https://[yourappname].herokuapp.com , where [yourappname] is your Heroku app name ) if you want your bot to be kept alive from early morning till late night Heroku server time; keeping a free Heroku bot alive 24/7 is not possible, because a free Heroku account has a monthly quota of 550 hours
ALWAYS_ON : requires a paid Heroku account; set it to 'true' to keep the bot alive 24/7; you have to set KEEP_ALIVE_URL to your bot's full home page link for ALWAYS_ON to work ( see also the explanation of the KEEP_ALIVE_URL config var )
ALLOW_CORRESPONDENCE : set it to 'true' to allow playing correspondence and infinite time control games
CORRESPONDENCE_THINKING_TIME : think in correspondence as if the bot had that many seconds left on its clock ( default : 120 ); the actual thinking time will be decided by the engine
MONGODB_URI : connect URI of your MongoDb admin user ( only the host, no slash after the host, no database specified, no query string ); if defined, your latest games or games downloaded from an url ( version 2 only ) will be added to the database on every startup; by default this config var is not defined
USE_MONGO_BOOK : set it to 'true' to use the MongoDb book specified by MONGODB_URI
DISABLE_ENGINE_FOR_MONGO : set it to 'true' to disable using the engine completely when a MongoDb book move is available ( by default the bot may ignore a MongoDb book move at its discretion and use the engine instead, for better performance and to allow for more varied play )
MONGO_VERSION : MongoDb book builder version; possible values are 1 ( default, builds a book from bot games as downloaded from lichess as JSON ) and 2 ( builds a book from bot games as downloaded from lichess as PGN, or from an arbitrary url specified in PGN_URL )
PGN_URL : url for downloading a multi game PGN file for the MongoDb book builder ( version 2 only )
MAX_GAMES : maximum number of games to be built by the MongoDb book builder
GENERAL_TIMEOUT : timeout for event streams in seconds ( default : 15 )
ENGINE_THREADS : engine Threads uci option ( default : 1 )
ENGINE_HASH : engine Hash uci option in megabytes ( default : 16 )
ENGINE_CONTEMPT : engine Contempt uci option in centipawns ( default : 24 )
ENGINE_MOVE_OVERHEAD : engine Move Overhead uci option in milliseconds ( default : 500 )
ALLOW_PONDER : set it to 'true' to make the bot think on opponent time
BOOK_DEPTH : up to how many plies into the game the bot should use the book; choosing too high a book depth runs the risk of playing unsound moves ( default : 20 )
BOOK_SPREAD : select the move from that many of the top book moves; choosing too high a book spread runs the risk of playing unsound moves ( default : 4 )
BOOK_RATINGS : comma separated list of allowed book rating brackets; possible ratings are 1600, 1800, 2000, 2200, 2500 ( default : '2200,2500' )
BOOK_SPEEDS : comma separated list of allowed book speeds; possible speeds are bullet, blitz, rapid, classical ( default : 'blitz,rapid' )
LOG_API : set it to 'true' to allow more verbose logging; logs are available in the Inspection / Console of the browser
USE_SCALACHESS : set it to 'true' to use the scalachess library and multi variant engine
ACCEPT_VARIANTS : space separated list of variant keys to accept ( default : 'standard' ); for non standard variants USE_SCALACHESS has to be set to 'true' ; example : 'standard crazyhouse chess960 kingOfTheHill threeCheck antichess atomic horde racingKings fromPosition'
ACCEPT_SPEEDS : space separated list of speeds to accept ( default : 'bullet blitz rapid classical' ); to allow correspondence set ALLOW_CORRESPONDENCE to 'true'
DISABLE_RATED : set it to 'true' to reject rated challenges
DISABLE_CASUAL : set it to 'true' to reject casual challenges
DISABLE_BOT : set it to 'true' to reject bot challenges
DISABLE_HUMAN : set it to 'true' to reject human challenges
GAME_START_DELAY : delay between accepting a challenge and starting to play the game, in seconds ( default : 2 )
CHALLENGE_INTERVAL : delay between auto challenge attempts, in minutes ( default : 30 )
CHALLENGE_TIMEOUT : start attempting auto challenges after being idle for that many minutes ( default : 60 )
USE_NNUE : space separated list of variant keys for which to use NNUE ( default : 'standard chess960 fromPosition' )
USE_LC0 : set it to 'true' to use the Lc0 engine; only works with the Windows and goorm installations; on Heroku and Gitpod you should not use it, or set it to 'false'
USE_POLYGLOT : set it to 'true' to use a polyglot opening book
WELCOME_MESSAGE : game chat welcome message ( delay from game start : 2 seconds , default : 'coded by @hyperchessbotauthor' )
GOOD_LUCK_MESSAGE : game chat good luck message ( delay from game start : 4 seconds , default : 'Good luck !' )
GOOD_GAME_MESSAGE : game chat good game message ( delay from game end : 2 seconds , default : 'Good game !' )
DISABLE_SYZYGY : set it to 'true' to disable using syzygy tablebases; note that syzygy tablebases are always disabled when USE_LC0 is set to 'true'; syzygy tablebases are only installed for deployment on Heroku
APP_NAME : Heroku app name ( necessary for interactive viewing of the MongoDb book )
ABORT_AFTER : abort the game after that many seconds if the opponent fails to make their opening move ( default : 120 )
DECLINE_HARD : set it to 'true' to explicitly decline unwanted challenges ( by default they are only ignored and can be accepted manually )
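To make the documented defaults concrete, here is a small, purely illustrative sketch of how a process might read a few of these config vars. The variable names and defaults are taken from the list above, but the helper itself is my own and is not Hyper Bot's actual code.

```python
import os

def read_config(env=os.environ):
    """Illustrative reader for a handful of the documented config vars."""
    return {
        # boolean flags are the literal string 'true', per the docs above
        "allow_ponder": env.get("ALLOW_PONDER") == "true",
        "decline_hard": env.get("DECLINE_HARD") == "true",
        # numeric vars with their documented defaults
        "book_depth": int(env.get("BOOK_DEPTH", "20")),
        "book_spread": int(env.get("BOOK_SPREAD", "4")),
        "engine_hash_mb": int(env.get("ENGINE_HASH", "16")),
        # comma separated and space separated list vars
        "book_speeds": env.get("BOOK_SPEEDS", "blitz,rapid").split(","),
        "accept_variants": env.get("ACCEPT_VARIANTS", "standard").split(),
    }
```

On Heroku these values come from Settings / Reveal Config Vars; passing a plain dict instead of os.environ makes the reader easy to exercise locally.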
Aryia-Behroziuan
Knowledge representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world that can be used to solve complex problems. The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems. For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical. Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.[10] A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is First Order Logic (FOL). There is no more powerful formalism than that used by mathematicians to define general propositions about the world. However, FOL has two drawbacks as a knowledge representation formalism: ease of use and practicality of implementation. First order logic can be intimidating even for many software developers. Languages that do not have the complete formal power of FOL can still provide close to the same expressive power with a user interface that is more practical for the average developer to understand. The issue of practicality of implementation is that FOL in some ways is too expressive. With FOL it is possible to create statements (e.g.
quantification over infinite sets) that would cause a system to never terminate if it attempted to verify them. Thus, a subset of FOL can be both easier to use and more practical to implement. This was a driving motivation behind rule-based expert systems. IF-THEN rules provide a subset of FOL, but a very useful one that is also very intuitive. The history of most of the early AI knowledge representation formalisms, from databases to semantic nets to theorem provers and production systems, can be viewed as a series of design decisions on whether to emphasize expressive power or computability and efficiency.[11]
In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework:[12]
- A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
- It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
- It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
- It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
- It is a medium of human expression, i.e., a language in which we say things about the world.
Knowledge representation and reasoning are a key enabling technology for the Semantic Web.
Languages based on the Frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries.[13] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet.[14] The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with basic features such as Is-A relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners.[15]
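As a toy illustration of the classifier idea described above, subsumption over Is-A relations can be computed as a transitive closure. This is purely illustrative Python, not OWL or a real reasoner; the knowledge base and class names are invented for the example.

```python
def subsumers(is_a, cls):
    """Return every class that subsumes `cls`, following Is-A links transitively."""
    seen = set()
    stack = [cls]
    while stack:
        current = stack.pop()
        for parent in is_a.get(current, ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# A tiny knowledge base: each class maps to its set of direct Is-A parents.
kb = {
    "Penguin": {"Bird"},
    "Bird": {"Animal"},
    "Animal": {"Thing"},
}
```

Calling subsumers(kb, "Penguin") infers that a Penguin is also a Bird, an Animal, and a Thing. A real classifier for a language like OWL goes much further, deriving new subsumption links from property restrictions rather than just following explicit Is-A edges.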
BholaHrishikesh
If you like this build I've also written other posts on building a simple voice controlled Magic Mirror with the Raspberry Pi and the AIY Projects Voice Kit, and a face-tracking cyborg dinosaur called "Do-you-think-he-saurs" with the Raspberry Pi and the AIY Projects Vision Kit.
At the tail end of last month, just ahead of the announcement of the pre-order availability of the new Google AIY Projects Voice Kit, I finally decided to take the kit I'd managed to pick up with issue 57 of the MagPi out of its box and put it together. However, inspired by the 1986 Google Pi Intercom build put together by Martin Mander, ever since then I've been thinking about venturing beyond the cardboard box and building my own retro-computing enclosure around the Voice Kit. I was initially thinking about using an old radio until I came across the GPO 746 Rotary Telephone. This is a modern-day replica of what must be the most iconic rotary dial phone in the United Kingdom. This is the phone that sat on everybody's desk, and in their front halls, throughout the 1970s. It was the standard rental phone, right up until British Telecom was privatised in the middle of the 1980s.
The GPO 746 Rotary Telephone.
While the GPO 746 is available in the United States, it's half the price, and there are a lot more colours to choose from, if you're buying the phone in the United Kingdom. There's a definite business opportunity for someone there, because it turns out that, on the inside, it's a rather interesting bit of hardware.
Gathering your Tools
For this project you'll need a small Phillips "00" watchmaker's screwdriver, a craft knife, scissors, a set of small wire snips, a drill and a 2 to 4mm bit, a soldering iron, solder, some jumper wires, female header blocks, a couple of LEDs, some electrical tape, a cable tie, and possibly some Sugru and heat shrink tubing, depending on how neat you want to be about things.
While I did end up soldering a few things during the build, it was mostly restricted to joining wires together and should definitely be approachable for beginners.
Opening the Box
Ahead of the new Voice Kit hitting the shelves next month, I managed to get my hands on a few pre-production kits, which fortunately meant that I didn't have to take my cardboard box apart to put together a new build.
The new AIY Projects Voice Kit.
The new AIY Voice Kit comes in a box very similar to the original kit distributed with the MagPi magazine. The box might be a bit thinner, but otherwise things look much the same. Missing from my pre-production kits were the two little plastic spacers that keep the Voice HAT from bending down and hitting the top of the Raspberry Pi. I'm presuming they'll include them in the production kits; without them the underside of the HAT tends to push downwards and the solder tails of the speaker screw terminal short out against the Raspberry Pi's HDMI connector. I fixed this by adding some electrical tape to separate the two boards, but the spacers would have worked a lot better and added more stability. The only component swap was the arcade button: gone were the separate lamp, holder, microswitch and button. All four components have been replaced by a single button with everything integrated. Since it was somewhat fiddly to get that assembled last time, this is a definite improvement. While my pre-production kits didn't include it, I'm told the retail version will have a copy of the MagPi Essentials AIY Projects book written by Lucy Hattersley on how to "Create a Voice Kit with your Raspberry Pi." Other than that, things went together much as before, and I quickly put together the Voice Kit. This time, however, I didn't bother with the cardboard box.
Opening the Phone
Pulling the replica GPO 746 out of its box, you'll find it comes in two parts: the main phone with the dial, and a separate handset which plugs in underneath the base.
The first thing I needed to do was take the base unit of the phone apart and figure out how it worked. Until I knew what I had to work with, it was going to be impossible to figure out a sensible plan to integrate the Voice Kit.
Opening up the GPO 746.
The main PCB is mounted on the base along with a steel weight to give the impression of "heft" to the replica phone. There's also a large bell, which makes that distinctive ringing noise familiar to anyone that owned or used a GPO 746 back in the 1970s.
The circuitry attached to the base of the GPO 746.
To the left of the PCB is the jack socket where the telephone line is connected (two wires, red and green). To the top are two switches: one for the handset, the other for ringer volume. At the bottom is another jack socket (four wires: red, black, yellow, and green) where the handset is attached. The only thing of real interest on the PCB is the Hualon Microelectronics HM9102D, which is a switchable tone/pulse dialer chip, and which we're actually not going to use. In fact, since the line voltage in the UK is +50V, pretty much none of it was going to be any use to me. So after measuring the voltage on the cable connecting the dialer to the PCB, I snipped the wires to the switches and the jacks, leaving them in place with as much trailing wire as possible in case they were going to come in useful, and then removed both the PCB and the bell. After that, I filed down the plastic moulding that held everything in place, leaving me with a large flat area which was perfectly sized for the Raspberry Pi and the Voice HAT. The moulded top of the phone has two assemblies: a simple microswitch toggled using a hinged and sprung plastic plate when the phone handset is taken on and off the hook, and the dialer assembly, which is connected to the base and the PCB using a ribbon cable.
How the Dialer Works
It was time to break out the logic analyser.
While I've got a Saleae Logic Pro 16 on my desk, if you're thinking about picking one up for the first time I'd really recommend the much cheaper Logic 8, or even the lower specification Logic 4, rather than splashing out on the higher end model. Either will take you a long way before you get the itch to upgrade.
Logic analyser attached to the dial of the GPO 746, powered up using a bench power supply.
Stripping the connector from the cable that connected the dialer to the PCB, I powered it up with a bench power supply at +5V, which is more or less what I'd measured on the cable and was something I could reasonably expect to get from the Raspberry Pi. I connected the rest of the cables to my logic analyser and started turning the dial, confidently expecting to see something interesting going on. I found nothing. I had flat lines; there was no signal going down the wires at all. After playing around with the voltage for a few minutes, with no results, I stripped the dialer assembly out of the case for a closer look.
Dialer assembly removed from the GPO 746.
The back of the dialer assembly has two LEDs, which I thought was rather odd, since the dial isn't illuminated in any way, at least not from the outside. Interestingly, these two LEDs flash briefly when the dial is turned all the way around to hit the stop. Cracking the case brings us to something else interesting: it's a light box. Designed to keep the light from the LEDs inside, it has a hole which rotates around as you dial a number.
Taking apart the dial assembly.
The hole exposes one of twelve photoresistors to the light from the LEDs, and the number (or symbol) you're dialing determines which of the resistors will be under the hole when the dial stop is reached.
The photoresistors inside the dial assembly.
It was all passive circuitry. No wonder I hadn't seen anything on the logic analyser; there wasn't any logic to analyse. It was all analogue.
Unfortunately for me, the Raspberry Pi has no built-in analogue inputs. That meant I'd have to pull a Microchip MCP3008, or something similar, from the shelf and build some circuitry. I'd also have to figure out how the resistance from twelve photoresistors ended up travelling down just eight wires, which had me somewhat puzzled at this point. That all sounded like a lot of effort. Since I really only wanted to dial a single digit to activate the Voice Kit, and I didn't care what that digit was, I decided to ignore the photoresistors and concentrate on the dial stop. The dialer mechanism showing the back of the dial stop (left) with microswitch. Unlike the original GPO 746, the dial stop on this replica moves. It drops when you hit it with the side of your finger while dialling a number. It turned out that it was connected to a microswitch, and when the microswitch was activated, this was the thing that briefly flashed the LEDs and exposed the appropriate photoresistor. It was actually all rather clever: a really neat way to minimise the bill of materials cost for the phone. Startups thinking about building hardware could learn a lesson or two in economy from this phone. Using the logic analyser on the microswitch. Just to be sure I had this right, I dialled down the bench power supply to a Raspberry Pi friendly +3.3V and wired up the microswitch to the logic analyser. Applying +3.3V (middle trace) and "dialling" shows the microswitch toggling (lower trace). Dialling a number on the dialer assembly worked as expected. We could ignore the dial itself, and those photoresistors that would be a pain to use with the Raspberry Pi, and just make use of the microswitch. In fact, we could more-or-less just replace the arcade button with this switch.

Integrating the AIY Project Kit

Moving on, I really wanted to reuse both the speaker and the microphone already in the handset instead of the ones that came with the Voice Kit.
Handset stripped of its speaker and microphone. Taking apart the handset—the end caps holding the speaker and microphone just screw off—showed that there were four wires inside the curled cable: two for the speaker, and two for the electret condenser microphone. The Voice Kit makes use of two InvenSense ICS-43434 MEMS microphones, which use I2S to communicate. They're a solid replacement for traditional 2-wire analogue microphones like the one we found in the handset of the GPO 746. The Voice HAT microphone daughter board. Looking at the Voice HAT microphone daughter board, it has been designed so that you can break the two microphones away from the board at the perforations and then solder the wiring harness directly to the pads. So long as you keep the signals consistent, you should be able to place the mics pretty much anywhere, and with a clock rate of ~3MHz, a longer cable should be fine. Unfortunately, I2S uses more wires than I had available. Unless I wanted to replace the curled cable, and I didn't really want to have to do that, I was in trouble. Putting that aside for a moment, I decided to start with the dialer assembly. Refitting it to the case, I snipped the wires leading to the microswitch and, grabbing the wiring harness for the arcade button, I soldered the microswitch to the relevant wires in the harness. Soldering the Voice HAT button wiring harness to the phone's microswitch. I then grabbed an ultra-bright LED and a 220Ω resistor from the shelves and soldered the resistor in-line with the LED. I then attached my new LED assembly to the other two wires in the arcade button wiring harness. At this point I had a replacement for the arcade button that came with the Voice Kit. Attaching a current limiting resistor to my LED. Giving up on putting microphones into the handset, I pulled out a drill and, measuring the spacing between the two microphones, drilled a couple of holes in the external shell of the phone. Drilling two holes in the shell of the phone.
These weren't going to be visible from the outside, as there is a void beneath the top of the phone where the handset rests; it forms a carrying handle you can tuck your hand into to pick up the phone. In the old days this let you pick up the phone and wander around the room—well, so long as the cable tying you to the wall was long enough. Attaching the Voice HAT microphone board to the phone shell. I then went ahead and tucked the microphone board behind the spring which operated the hook mechanism. There was just enough room to secure it there with a cable tie and some Sugru. After that, I plugged the handset into the jack on the base and connected the two wires from the handset jack that were attached to the speaker to the screw terminals on the Voice HAT. The re-wired internals of the modified GPO 746. Microphone board and Voice Kit both fixed in place with Sugru. Stripping out the jack where the phone line originally ran left two upright pillars that used to sit on either side of it. I threaded the end of a 2.5A micro-USB charger cable through the hole and tied it around the pillars for strain relief, which completed the re-wiring. The arcade button had been replaced with the dial stop microswitch and an LED, which I was going to tuck just ahead of the microphone board in a convenient clip-like part of the body moulding. The speaker had been swapped out directly with the one in the handset—fortunately the impedance match wasn't too far off—and the microphone had been mounted somewhere convenient inside the main body of the phone.

A Working Phone

Screwing everything back together, we once again have something that looks like a phone. The assembled phone. I booted the Raspberry Pi, logged in via SSH, and ran the src/assistant_library_with_button_demo.py script from the dev console. A working build, but it's not quite there yet. Success. Picking up the handset and dialling a number, any number, let you talk to the Voice Assistant.
But it wasn't quite there yet. While it worked, it didn't feel like a phone.

Adding a Dial Tone

What the phone needed was a dial tone. It needed to play when the handset was lifted, and shut off when the phone was dialled or the handset replaced. The phone hook works the opposite way to what you might expect: when the handset is in the cradle, the microswitch that simulates the hook is open, as the bar below it is pushed down by the hook. When the handset is off the hook, the microswitch is closed as the bar moves upwards. Conveniently, the Voice HAT breaks out most of the unused GPIO pins from the Raspberry Pi, so at least in theory wiring the microswitch attached to the hook mechanism to one of them should be fairly simple. Available unpopulated connectors on the Voice HAT. (Image credit: Google) Thinking about how to approach this in software, however, left us with a bit of a quandary. While the underlying Python GPIO library allows us to detect both the rising and falling edge events when a switch is toggled, the AIY wrapper code in the Voice Kit doesn't. While I could have gone in and modified the wrapper code to add that functionality, I decided I didn't want to mess around with that—perhaps I'll get around to it later and send them a pull request—instead I decided to fix it in hardware and wire the hook switch into both GPIO4 and GPIO17. That way I could use one pin to monitor for GPIO.RISING, and the other for GPIO.FALLING. Wiring up the phone hook. It's easy enough to do that using the aiy._drivers._button.Button class and two callback methods: one called when the handset is taken off the hook, and the other when it is replaced. All the additional wiring in place and working. We can then use the pygame library to play a WAV file in the background when the handset is lifted, and stop it when the handset is replaced.
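The control flow described above can be sketched in plain Python. This is a minimal sketch of the logic only, with the GPIO edge callbacks and the pygame playback replaced by injected callables; names like `on_off_hook` are illustrative, not the AIY library's API.

```python
class HookLogic:
    """Dial-tone state machine; the hardware callbacks would call these methods."""

    def __init__(self, start_tone, stop_tone):
        # start_tone/stop_tone stand in for pygame.mixer.music.play()/stop()
        self.start_tone = start_tone
        self.stop_tone = stop_tone
        self.tone_playing = False

    def on_off_hook(self):
        """GPIO.RISING callback (e.g. on GPIO4): handset lifted, tone starts."""
        self.start_tone()
        self.tone_playing = True

    def on_dial(self):
        """Dial-stop microswitch pressed: kill the tone before the assistant talks."""
        if self.tone_playing:
            self.stop_tone()
            self.tone_playing = False

    def on_hook(self):
        """GPIO.FALLING callback (e.g. on GPIO17): handset replaced."""
        if self.tone_playing:
            self.stop_tone()
            self.tone_playing = False


# Exercise the logic without any hardware attached:
events = []
phone = HookLogic(lambda: events.append("tone on"),
                  lambda: events.append("tone off"))
phone.on_off_hook()   # lift the handset: dial tone starts
phone.on_dial()       # dial a digit: tone stops
phone.on_hook()       # hang up: nothing left to stop
print(events)  # ['tone on', 'tone off']
```

On the real phone, the two hook methods would be registered as the rising- and falling-edge callbacks on the two GPIO pins, and the injected callables would wrap pygame's mixer.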
We also have to add a stop command inside the _on_button_pressed() method so that the dial tone stops when the phone is dialled, and a call to stop_conversation() to stop the Voice Assistant talking if the handset is returned to the hook while Google is answering our question.

Adding a Greeting and a Hang Up Noise

We're not quite there yet. We can also use aiy.audio.play_wave() to add that distinctive disconnect noise when Google finishes talking and "hangs up" before returning to our dial tone, and we can use the aiy.audio.say('…') call to add a greeting when Google "picks up" the phone to talk to us. The final build. It's surprising how much atmosphere just adding these simple sounds brought to the build, and how much the user experience was improved. It now doesn't just look like a rotary phone; it sort of feels, and perhaps more importantly, sounds like one too.

The Script

The final version of the script has an amazingly small number of modifications from the original version distributed by Google, which sort of shows how simple it is to build something that looks and feels very different from the original cardboard box without a lot of effort, at least on the software side. If you want to replicate the build you can grab the two mono WAV files I used from Dropbox. Although, if you're outside the United Kingdom, you might want to replace the standard British dial tone of 350Hz and 450Hz—which you hear any time you lift a phone off the hook—with something more appropriate.

Available to Preorder

The new kits are being produced by Google, and are available to pre-order at Micro Center and through their resellers like Adafruit and SeeedStudio. The AIY Voice Kit is priced at $25 on its own, but you can pick one up for free if you order a Raspberry Pi 3 at $35 for in-store pickup from Micro Center. My new retro rotary phone build next to my original Voice Kit.
The kit will be available in the United Kingdom through Pimoroni at £25, and you can expect shipping dates for kits ordered through them to be similar to those ordered from Micro Center.
frankienhayesa
Amazon announced a new device called the Glow during its fall product launch event: a $250 video chatting gadget that allows children to virtually interact with loved ones by playing games and reading books together. Although the company has been selling the Echo Dot Kids Edition for years, it's rare for Amazon to develop an entirely new device designed specifically for children. While it's new for Amazon, the general concept behind Glow might sound familiar -- especially if you've ever used the storytelling app Caribu or the Osmo brand of educational tablet accessories. That's because although they're different products, they share a lot of underlying qualities with Amazon's Glow. Caribu is designed to help kids play games and read stories with relatives remotely through an interactive video chatting platform, and Osmo is all about incorporating real-world game pieces into educational games you can play on a tablet. To understand the similarities, it's important to know how the Glow works. Amazon Glow is an Alexa-free video chatting device that consists of an 8-inch upright display, a camera with a built-in shutter and a projector. The device isn't available to the public yet and can only be obtained via invitation since it's part of the company's Day 1 Editions program. The basic premise behind the Glow is simple. Children can video chat with relatives and loved ones on the device's screen, while a projector conjures up a virtual play area for games and activities that's displayed on a silicone mat in front of them. The person on the other end of the call can participate in that game or puzzle on their tablet through the Glow app. The activities are also designed to combine real-world elements with digital ones.
For example, in a demo video on Amazon's website, kids can be seen arranging physical game tiles, drawing pictures with their finger on the play mat and moving digital puzzle pieces on the mat -- all while a grandparent or aunt on the other end cheers them on. The device will come with a one-year subscription to Amazon Kids Plus and will feature content from Disney, Sesame Street, Barbie, Pixar and Hot Wheels. The Caribu app is built on a similar concept, but with a different execution: it's an app with the same goal, not a purpose-built device. Caribu is meant to make the video calling experience more interactive by enabling children and loved ones to share experiences like bedtime stories, coloring sheets and games virtually. It's essentially like a Zoom for kids that's available on iOS, Android and the web, but with built-in activities. The app has been around since 2016, but grew in popularity throughout the pandemic as relatives looked for ways to connect with little ones they couldn't see in person. Maxeme Tuchman, Caribu's CEO and co-founder, doesn't seem bothered by Amazon's entry into the space. "What I can say is that Caribu obviously identified a problem in the market, started a trend, and now everyone wants in," Tuchman said in a statement. Osmo, on the other hand, is more about turning your tablet into an interactive device for educational games and activities than about social interaction. Osmo's system involves slotting a compatible tablet into a base that enables it to stand upright in portrait mode. You then place a red reflector piece over the device's camera. This reflector enables the tablet to detect physical game pieces so that these real-world elements can be incorporated into the on-screen game. Games designed for the Osmo cover a range of skills, including coding, literacy, critical thinking, drawing, math and science.
Certain Osmo bundles are priced similarly to the Amazon Glow, but the starter kit -- which includes the base, reflector, and four games aimed at children ages 3 to 5 -- costs just $79. Like Caribu, Osmo isn't an apples-to-apples competitor to Amazon Glow. Unlike Osmo, Amazon seems focused on the technology while relying on big-name partners for most of the content. And Osmo is centered on solo playtime and learning, rather than shared experiences. There are also some fundamental differences in how the products work. Osmo doesn't project images onto a nearby surface like the Glow. Instead it uses the reflector to send an image of game pieces or a child's drawing to the tablet's camera so it can be incorporated into the game. Amazon also specifically says that the Glow itself isn't a toy, despite its similarity to products that are considered part of the toy market. Still, the core appeal of both products comes down to combining an on-screen experience with real-world play elements. As is the case with Caribu, the concepts are just carried out in different ways. Similar to Tuchman, Osmo co-creator Pramod Sharma didn't express concern about increased competition from Amazon. "We're excited to see Amazon join the play movement we started with Osmo over seven years ago," Sharma said via email. Amazon's device also isn't the first experimental computing device to rely on a projector as a central part of the interaction. You might remember HP's Sprout Windows 8 all-in-one PC from 2014, which projected a second screen onto a 20-inch touch-sensitive mat situated in front of the computer for drawing and other creative work. It's easy to understand why Amazon would develop a product like the Glow at a time like this. The pandemic has normalized remote learning and fueled interest in connecting with family members virtually.
At the same time, technology is playing a bigger role in the global market for educational toys, which is expected to grow from $19.2 billion in 2020 to $31.62 billion by 2026, according to Arizton Advisory and Intelligence. Toys that use augmented reality to overlay digital graphics on real-world objects will likely boost the demand for learning toys year-over-year, says the report. Tech companies are also increasingly tailoring their products to appeal to younger audiences. Facebook offers a version of its Messenger chat app for children, and the company has been building a version of Instagram for kids, too. (Those plans were recently put on hold following backlash over the concerns that come with exposing younger age groups to social media.) Apple launched parental controls for the iPhone in 2019 and released Swift Playgrounds in 2016, a game aimed at teaching children how to code in Apple's Swift programming language. It's too soon to know whether the Amazon Glow will be a success. Amazon's Day 1 Editions program is meant to provide access to new products before they're ready for prime time, meaning they may not be ready for widespread release. Not all products in the program make it past the Day 1 Editions phase, either. The Echo Loop, an Alexa-powered smart ring, never graduated from Day 1 Editions to become a full product, for example. We'll have to wait until we've tried Amazon's new child-friendly gadget to know how it stacks up against existing products.
My DSA solutions from websites like LeetCode, InterviewBit and CodeChef, in programming languages like Java, Python and C++. This is an effort to document my thinking behind the code. Let's discuss more if you can refine my thinking.
windyguo2046
Task Your task in this assignment is to aggregate the data found in the Citi Bike Trip History Logs to build a data dashboard, story, or report. You may work with a timespan of your choosing. If you're really ambitious, you can merge multiple datasets from different periods. Try to provide answers to the following questions: How many trips have been recorded total during the chosen period? By what percentage has total ridership grown? How has the proportion of short-term customers and annual subscribers changed? What are the peak hours in which bikes are used during summer months (for whatever year of data you selected)? What are the peak hours in which bikes are used during winter months (for whatever year of data you selected)? What are the top 10 stations in the city for starting a journey? (Based on data, why do you hypothesize these are the top locations?) What are the top 10 stations in the city for ending a journey? (Based on data, why?) What are the bottom 10 stations in the city for starting a journey? (Based on data, why?) What are the bottom 10 stations in the city for ending a journey (Based on data, why?) What is the gender breakdown of active participants (Male v. Female)? How does the average trip duration change by age? What is the average distance in miles that a bike is ridden? Which Bikes (by ID) are most likely due for repair or inspection this year? How variable is the utilization by bike ID? Additionally, city officials would like to see the following visualizations: A static map that plots all bike stations with a visual indication of the most popular locations to start and end a journey with zip code data overlaid on top. A dynamic map that shows how each station's popularity changes over time (by month and year) -- with commentary pointing to any interesting events that may be behind these phenomena. 
Lastly, as a chronic over-achiever, you must also: Find at least two unexpected phenomena in the data and provide a visualization and analysis to document their presence. Considerations Remember, the people reading your analysis will NOT be data analysts. Your audience will be city officials, public administrators, and heads of New York City departments. Your data and analysis needs to be presented in a way that is focused, concise, easy-to-understand, and visually compelling. Your visualizations should be colorful enough to be included in press releases, and your analysis should be thoughtful enough for dictating programmatic changes. Assessment Your final product will be assessed on the following metrics: Completeness of Analysis Analytic Rigor Readability Visual Attraction Professionalism Hints You may need to get creative in how you combine each of the CSVs. Don't just assume Tableau is the right tool for the job. At this point, you have a wealth of technical skills and research abilities. Dig for an approach that works and just go with it. Don't just assume the CSV format hasn't changed since 2013. Subtle changes to the formats in any of your columns can blockade your analysis. Ensure your data is consistent and clean throughout your analysis. (Hint: Start and End Time change at some point in the history logs). Consider building your dashboards with small extracts of the data (i.e. single files) before attempting to import the whole thing. What you will find is that importing all 20+ million records of data will create performance issues quickly. Welcome to "Big Data". While utilizing all of the data may seem like a nice power play, consider the time-course in making your analysis. Is data from 2013 the most relevant for making bike replacement decisions today? Probably not. Don't let overwhelming data fool you. Ground your analysis in common sense. Remember, data alone doesn't "answer" anything. 
You will need to accompany your data visualizations with clear and directed answers and analysis. As is often the case, your clients are asking for a LOT of answers. Be considerate about their need-to-know and the importance of not "cramming in everything". Of course, answer each question, but do so in a way that is organized and presentable. Since this is a project for the city, spend the appropriate time thinking through decisions on color schemes, fonts, and visual story-telling. The Citi Bike program has a clear visual footprint. As a suggestion, look for ways to have your data visualizations match their aesthetic tones. Pay attention to labels. What exactly is "time duration"? What's the value of "age of birth"? You will almost certainly need calculated fields to get what you need. Keep a close eye out for obvious outliers or false data. Not everyone who signs up for the program is answering honestly. In answering the question of "why" a phenomenon is happening, consider adding other pieces of information on socioeconomics or other geographic data. Tableau has a map "layer" feature that you may find handy. Don't be afraid to manipulate your data and play with settings in Tableau. Tableau is meant to be explored. We haven't covered all that you need -- so you will need to keep an eye out for new tricks. The final "format" of your deliverable is up to you. It can be an embedded Tableau dashboard, a Tableau Story, a Tableau visualization + PDF -- you name it. The bottom line is: this is your story to tell. Use the medium you deem most effective. (But you should definitely be using Tableau in some way!) Treat this as a serious endeavor! This is an opportunity to show future employers that you have what it takes to be a top-notch analyst.
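One of the hints above, that the CSV format changed over the years, can be handled with a small normalization pass before any analysis. The sketch below is one illustrative approach using only Python's standard library; the particular column-name variants in the alias table are assumptions for illustration, not a complete inventory of Citi Bike's actual schema changes.

```python
import csv
import io

# Illustrative alias table: Citi Bike column names drifted over the years
# (e.g. "Start Time" vs "starttime"); these variants are examples only.
COLUMN_ALIASES = {
    "starttime": "start_time",
    "start time": "start_time",
    "started_at": "start_time",
    "stoptime": "stop_time",
    "stop time": "stop_time",
    "ended_at": "stop_time",
    "tripduration": "trip_duration",
}


def normalized_rows(fileobj):
    """Yield dict rows from a trip-log CSV with harmonized column names."""
    for row in csv.DictReader(fileobj):
        yield {
            COLUMN_ALIASES.get(key.strip().lower(), key.strip().lower()): value
            for key, value in row.items()
        }


# Two "vintages" of the same data, with different headers:
old = io.StringIO("Start Time,Stop Time,tripduration\n"
                  "2014-07-01 08:02,2014-07-01 08:13,660\n")
new = io.StringIO("started_at,ended_at,tripduration\n"
                  "2021-07-01 08:02,2021-07-01 08:13,660\n")
rows = list(normalized_rows(old)) + list(normalized_rows(new))
print(sorted(rows[0]))  # ['start_time', 'stop_time', 'trip_duration']
```

With every vintage mapped onto one schema, the files can be concatenated (or fed to Tableau as one extract) without downstream calculated fields breaking on renamed columns.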
CoppingEthan
Adapted code from MCode-Team to add a real-time preview of the thinking process for DeepSeek R1 and DeepSeek Chat.
HEETMEHTA18
Coding-Bingo-Platform — Multiplayer coding games for developers & teams A full-stack platform offering a suite of real-time, multiplayer coding games designed to test and hone programming skills, logical thinking, and teamwork — from classic Bingo to speed-coding races, puzzle hunts, and creative code-art challenges. Built with modern technologies
surajshanbhag63-netizen
An end-to-end exploration of building an Autonomous Business Intelligence Agent — from product requirements (PRD, problem statement, executive summary) to runnable agent code on Cursor. Includes insights on AI product thinking, evaluation, safety, and business impact.
Anurag1224
Welcome to my DSA Practice Series repository! This is where I document my daily journey of mastering DSA, starting from the basics and progressing toward advanced topics. Each folder is organized by day, featuring solutions to various problems designed to improve my problem-solving skills, algorithmic thinking, and coding efficiency.
Talamantez
Send an email by thinking hard, with Mindflex and Processing. So you have a hacked Mind Flex headset, and you want to trigger events online with it? Say "Hi" to Hacky! It uses a Processing sketch to get your EEG data and stream it to the Internet of Things platform Xively.com. From there, you can automate internet stuff with Zapier.com. This code is based on https://github.com/kitschpatrol/Processing-Brain-Grapher and https://github.com/jmsaavedra/Cosm-Processing-Library.
beatriz-emiliano
In this project, a game was developed in Python, based on the existing game 'Akinator'. The game aims to have the program figure out what the user is thinking: the program asks a series of questions until it arrives at the character the user thought of. The idea arose from the desire to combine what we learned in RASbóticas IEEE UFCG with something that carried our identity. We brought the Python language together with The Simpsons, and that was how Wiki-Simpsons came about.
My thesis was on the development of an IoT-based smart security and monitoring system for agricultural farms. It was a group project. We focused on the latest technologies, from sensors to IoT, to bring meaningful change to the field of agriculture by collecting data on soil moisture, temperature, humidity, and unwanted trespassing, and then finding the optimal solution for each situation. We researched how we were going to achieve a transparent relationship between software and hardware. In this project, my part covered the literature review, design of functionalities, hardware selection, design thinking, system visualization methodology, UI design, the complete work schedule, and coding. Among the tools and technologies, we used app development (Flutter, Android Studio, Dart), a database (MySQL), and programming languages (Java, PHP, Arduino).
gathuaalex
With the emergence of the MATLAB/Simulink graphical programming environment, modeling and simulation of various plants and controllers can be accomplished quite easily by students who might not have extensive training in digital control and numerical methods. However, practical implementation of such controllers remains elusive for most undergraduate students. Therefore, the objective of this project is to develop a simple physical plant that can be used seamlessly with the MATLAB/Simulink simulation environment, allowing students to implement and test real-time controllers in code. The project described here provides practical experience for students, using an inexpensive and portable setup that can be taken home. The experiment is designed following the principles of the variational theory of learning developed by Marton and coworkers [1], [2] and the approach of guided discovery/interactive-engagement labs characteristic of several well-known labs, such as the Modeling Workshop Project [3], Socratic Dialogue Inducing Labs [4], Real Time Physics [5], and Tools for Scientific Thinking [6]. The portability and low cost of the setup allow the students to conduct experiments over two semesters and use the device to complete a semester project. In addition to significantly reducing the cost of offering an experimental component, the experimental setup built by the students provides an opportunity to demonstrate concepts from system identification, digital control and nonlinear feedback control.
rutushah
ParkinGRid is a forward-thinking parking company that changes the way you park. We provide seamless parking information. It's easy: you come in, park your vehicle, and easily drive out. At ParkinGRid, we care about you, your car, and your time. Our parking technology means hassle-free parking. You can pay with your computer, smartphone, and other devices. At ParkinGRid you can choose the area in which you wish to park your vehicle, and from there select our parking facility in that area. You can select your vehicle type: 4-wheelers such as sedans, hatchbacks and trucks, or 2-wheeler bikes. ParkinGRid also allows you to select the floor of the parking space on which you want to park your vehicle. For their convenience, there is also reserved space for specially-abled persons. After selecting the vehicle type and giving some information about yourself and your vehicle, a ticket with an alphanumeric code is generated. It can be used only once, when you enter the parking premises for the selected hours. A staff member will verify the ticket and assist you when you enter the parking premises, for a safe and secure parking experience. Parking charges are on an hourly basis or according to the plan chosen. Your vehicle is safe and secure with us: there are CCTV cameras everywhere on the premises to provide a safe experience.
DDiaz07
## WeatherPy In this example, you'll be creating a Python script to visualize the weather of 500+ cities across the world of varying distance from the equator. To accomplish this, you'll be utilizing a [simple Python library](https://pypi.python.org/pypi/citipy), the [OpenWeatherMap API](https://openweathermap.org/api), and a little common sense to create a representative model of weather across world cities. Your objective is to build a series of scatter plots to showcase the following relationships: * Temperature (F) vs. Latitude * Humidity (%) vs. Latitude * Cloudiness (%) vs. Latitude * Wind Speed (mph) vs. Latitude Your final notebook must: * Randomly select **at least** 500 unique (non-repeat) cities based on latitude and longitude. * Perform a weather check on each of the cities using a series of successive API calls. * Include a print log of each city as it's being processed with the city number and city name. * Save both a CSV of all data retrieved and png images for each scatter plot. As final considerations: * You must complete your analysis using a Jupyter notebook. * You must use the Matplotlib or Pandas plotting libraries. * You must include a written description of three observable trends based on the data. * You must use proper labeling of your plots, including aspects like: Plot Titles (with date of analysis) and Axes Labels. * See [Example Solution](WeatherPy_Example.pdf) for a reference on expected format. ## Hints and Considerations * You may want to start this assignment by refreshing yourself on the [geographic coordinate system](http://desktop.arcgis.com/en/arcmap/10.3/guide-books/map-projections/about-geographic-coordinate-systems.htm). * Next, spend the requisite time necessary to study the OpenWeatherMap API. Based on your initial study, you should be able to answer basic questions about the API: Where do you request the API key? Which Weather API in particular will you need? What URL endpoints does it expect? 
What JSON structure does it respond with? Before you write a line of code, you should be aiming to have a crystal clear understanding of your intended outcome. * A starter code for Citipy has been provided. However, if you're craving an extra challenge, push yourself to learn how it works: [citipy Python library](https://pypi.python.org/pypi/citipy). Before you try to incorporate the library into your analysis, start by creating simple test cases outside your main script to confirm that you are using it correctly. Too often, when introduced to a new library, students get bogged down by the most minor of errors -- spending hours investigating their entire code -- when, in fact, a simple and focused test would have shown their basic utilization of the library was wrong from the start. Don't let this be you! * Part of our expectation in this challenge is that you will use critical thinking skills to understand how and why we're recommending the tools we are. What is Citipy for? Why would you use it in conjunction with the OpenWeatherMap API? How would you do so? * In building your script, pay att
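The "simple test cases" advice above applies to the city-selection step too. Below is a minimal sketch of the random-sampling loop for gathering unique cities; the `nearest_city` function here is a placeholder standing in for citipy's nearest-city lookup (it is not the library's real implementation), so the deduplication logic can be tested without any network or third-party dependency.

```python
import random


def nearest_city(lat, lon):
    # Placeholder for citipy's nearest-city lookup: a coarse 10-degree
    # grid cell stands in for a real city name, purely for illustration.
    return f"city_{int(lat // 10)}_{int(lon // 10)}"


def sample_unique_cities(target, seed=0):
    """Draw random coordinates until `target` unique cities are collected."""
    rng = random.Random(seed)
    cities = set()
    while len(cities) < target:
        lat = rng.uniform(-90.0, 90.0)
        lon = rng.uniform(-180.0, 180.0)
        cities.add(nearest_city(lat, lon))
    return sorted(cities)


cities = sample_unique_cities(50)
print(len(cities))  # 50
```

Once this loop behaves as expected with the stub, swapping in the real citipy call and feeding each resulting city name to the OpenWeatherMap API is a small change, and duplicates from coordinates that map to the same city are already handled by the set.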