Found 133 repositories (showing 30)
vis4
Chrome extension that highlights anonymous sources in news articles
Yogapriya2512
A chatbot (also known as a talkbot, chatterbot, Bot, IM bot, interactive agent, or Artificial Conversational Entity) is a computer program or an artificial intelligence which conducts a conversation via auditory or textual methods. Such programs are often designed to convincingly simulate how a human would behave as a conversational partner, thereby passing the Turing test. Chatbots are typically used in dialog systems for various practical purposes, including customer service or information acquisition. Some chatterbots use sophisticated natural language processing systems, but many simpler systems scan for keywords within the input, then pull a reply with the most matching keywords, or the most similar wording pattern, from a database. The term "ChatterBot" was originally coined by Michael Mauldin (creator of the first Verbot, Julia) in 1994 to describe these conversational programs.
The classic early chatbots are ELIZA (1966) and PARRY (1972). More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E. (Agence Nationale de la Recherche and CNRS, 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include functional features such as games and web-searching abilities. In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so). One pertinent field of AI research is natural language processing. Usually, weak-AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent and has since been adopted by various other developers of so-called Alicebots. Nevertheless, A.L.I.C.E. is still based purely on pattern-matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities. Jabberwacky learns new responses and context from real-time user interactions rather than being driven from a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimise their ability to communicate based on each conversation held. Still, there is currently no general-purpose conversational artificial intelligence, and some software developers focus on the practical aspect, information retrieval. Chatbot competitions focus on the Turing test or on more specific goals. Two such annual contests are the Loebner Prize and The Chatterbox Challenge (offline since 2015; materials can still be found in web archives).
Today, most chatbots are accessed via virtual assistants such as Google Assistant and Amazon Alexa, via messaging apps such as Facebook Messenger or WeChat, or via individual organizations' apps and websites. Chatbots can be classified into usage categories such as conversational commerce (e-commerce via chat), analytics, communication, customer support, design, developer tools, education, entertainment, finance, food, games, health, HR, marketing, news, personal, productivity, shopping, social, sports, travel and utilities. According to Forrester (2015), AI will replace 16 percent of American jobs by the end of the decade. Chatbots have been used in applications such as customer service, sales and product education. However, a study conducted by Narrative Science in 2015 found that 80 percent of their respondents believe AI improves worker performance and creates jobs.[citation needed]
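The keyword-scanning technique the description mentions (pull the reply with the most matching keywords from a database) can be sketched in a few lines of Python. The reply table and the overlap-count scoring rule here are illustrative assumptions, not any particular bot's actual database or logic:

```python
# Minimal keyword-matching chatbot sketch, ELIZA-style: score each canned
# reply by how many of its trigger keywords appear in the user's input.
REPLIES = [  # hypothetical reply "database"
    ({"hello", "hi"}, "Hello! How can I help you?"),
    ({"order", "shipping", "package"}, "Let me check your order status."),
    ({"bye", "goodbye"}, "Goodbye!"),
]

def respond(message: str, fallback: str = "Tell me more.") -> str:
    words = set(message.lower().split())
    best_score, best_reply = 0, fallback
    # Pick the reply whose keyword set overlaps the input the most.
    for keywords, reply in REPLIES:
        score = len(keywords & words)
        if score > best_score:
            best_score, best_reply = score, reply
    return best_reply

print(respond("hi there"))                    # → Hello! How can I help you?
print(respond("where is my shipping order"))  # → Let me check your order status.
```

As the entry notes, this is pure pattern matching with no reasoning: any input sharing no keywords with the table falls through to the fallback reply.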
mirrys
Repository of data and code to use the models described in the paper "Citation Needed: A Taxonomy and Algorithmic Assessment of Wikipedia's Verifiability"
ykdojo
Claude Code plugin for searching 250M+ academic papers via OpenAlex. Search by keyword, look up by DOI, sort by citations or date. No API key needed.
Jana-Marie
Source-files for [citation needed] stickers.
bitflight-devops
Zero-dependency Claude Code plugin that catches speculation, invented causality, and fake citations before they pollute your context. Install in one command, works offline, no API keys needed.
Scripts for CitationNeeded.news
ACGaming
The most important mobs in the history of Minecraft! [citation needed]
tecywiz121
An improved version of the AI from 2012's CS Games
Madjita
Value-stream mapping, also known as "material- and information-flow mapping",[1] is a lean-management method for analyzing the current state and designing a future state for the series of events that take a product or service from the beginning of the specific process until it reaches the customer. A value stream map is a visual tool that displays all critical steps in a specific process and easily quantifies the time and volume taken at each stage.[citation needed][2] Value stream maps show the flow of both materials and information as they progress through the process.[3]
dsainvi
wgu-c950 Data Structures and Algorithms II - WGU
Scenario: The Western Governors University Parcel Service (WGUPS) needs to determine the best route and delivery distribution for their Daily Local Deliveries (DLD) because packages are not currently being consistently delivered by their promised deadline. The Salt Lake City DLD route has three trucks, two drivers, and an average of 40 packages to deliver each day; each package has specific criteria and delivery requirements. Your task is to determine the best algorithm, write code, and present a solution where all 40 packages, listed in the attached "WGUPS Package File," will be delivered on time with the least number of miles added to the combined mileage total of all trucks. The specific delivery locations are shown on the attached "Salt Lake City Downtown Map" and distances to each location are given in the attached "WGUPS Distance Table." While you work on this assessment, take into consideration the specific delivery time expected for each package and the possibility that the delivery requirements, including the expected delivery time, can be changed by management at any time and at any point along the chosen route. In addition, you should keep in mind that the supervisor should be able to see, at assigned points, the progress of each truck and its packages by any of the variables listed in the "WGUPS Package File," including what has been delivered and what time the delivery occurred. The intent is to use this solution (program) for this specific location and to use the same program in many cities in each state where WGU has a presence. As such, you will need to include detailed comments, following the industry-standard Python style guide, to make your code easy to read and to justify the decisions you made while writing your program.
Assumptions:
- Each truck can carry a maximum of 16 packages.
- Trucks travel at an average speed of 18 miles per hour.
- Trucks have an "infinite amount of gas" with no need to stop.
- Each driver stays with the same truck as long as that truck is in service.
- Drivers leave the hub at 8:00 a.m., with the truck loaded, and can return to the hub for packages if needed.
- The day ends when all 40 packages have been delivered.
- Delivery time is instantaneous, i.e., no time passes while at a delivery (that time is factored into the average speed of the trucks).
- There is up to one special note for each package.
- The wrong delivery address for package #9, Third District Juvenile Court, will be corrected at 10:20 a.m. The correct address is 410 S State St., Salt Lake City, UT 84111.
- The package ID is unique; there are no collisions.
- No further assumptions exist or are allowed.
Requirements: Your submission must be your original work. No more than a combined total of 30% of the submission and no more than a 10% match to any one individual source can be directly quoted or closely paraphrased from sources, even if cited correctly. An originality report is provided when you submit your task that can be used as a guide. You must use the rubric to direct the creation of your submission because it provides detailed criteria that will be used to evaluate your work. Each requirement below may be evaluated by more than one rubric aspect. The rubric aspect titles may contain hyperlinks to relevant portions of the course.
Section 1: Programming/Coding
A. Identify the algorithm that will be used to create a program that delivers the packages and meets all requirements specified in the scenario.
B. Write a core algorithm overview, using the sample given, in which you do the following:
   1. Comment using pseudocode to show the logic of the algorithm applied to this software solution.
   2. Apply programming models to the scenario.
   3. Evaluate space-time complexity using Big O notation throughout the coding and for the entire program.
   4. Discuss the ability of your solution to adapt to a changing market and to scalability.
   5. Discuss the efficiency and maintainability of the software.
   6. Discuss the self-adjusting data structures chosen and their strengths and weaknesses based on the scenario.
C. Write original code that solves the scenario and meets the requirements of lowest mileage usage and having all packages delivered on time. Create a comment within the first line of your code that includes your first name, last name, and student ID. Include comments at each block of code to explain the process and flow of the coding.
D. Identify a data structure that can be used with your chosen algorithm to store the package data. Explain how your data structure includes the relationship between the data points you are storing. Note: Do NOT use any existing data structures. You must design, write, implement, and debug all code that you turn in for this assessment. Code downloaded from the internet or acquired from another student or any other source may not be submitted and will result in automatic failure of this assessment.
E. Develop a hash table, without using any additional libraries or classes, with an insertion function that takes the following components as input and inserts the components into the hash table: package ID number, delivery address, delivery deadline, delivery city, delivery zip code, package weight, and delivery status (e.g., delivered, en route).
F. Develop a look-up function that takes the following components as input and returns the corresponding data elements: package ID number, delivery address, delivery deadline, delivery city, delivery zip code, package weight, and delivery status (e.g., delivered, en route).
G. Provide an interface for the insert and look-up functions to view the status of any package at any time. This function should return all information about each package, including delivery status.
   1. Provide screenshots to show package status of all packages at a time between 8:35 a.m. and 9:25 a.m.
   2. Provide screenshots to show package status of all packages at a time between 9:35 a.m. and 10:25 a.m.
   3. Provide screenshots to show package status of all packages at a time between 12:03 p.m. and 1:12 p.m.
H. Run your code and provide screenshots to capture the complete execution of your code.
Section 2: Annotations
I. Justify your choice of algorithm by doing the following:
   1. Describe at least two strengths of the algorithm you chose.
   2. Verify that the algorithm you chose meets all the criteria and requirements given in the scenario.
   3. Identify two other algorithms that could be used and would have met the criteria and requirements given in the scenario.
      a. Describe how each algorithm identified in part I3 is different from the algorithm you chose to use in the solution.
J. Describe what you would do differently if you did this project again.
K. Justify your choice of data structure by doing the following:
   1. Verify that the data structure you chose meets all the criteria and requirements given in the scenario.
      a. Describe the efficiency of the data structure chosen.
      b. Explain the expected overhead when linking to the next data item.
      c. Describe the implications of when more package data is added to the system or other changes in scale occur.
   2. Identify two other data structures that can meet the same criteria and requirements given in the scenario.
      a. Describe how each data structure identified in part K2 is different from the data structure you chose to use in the solution.
L. Acknowledge sources, using in-text citations and references, for content that is quoted, paraphrased, or summarized.
M. Demonstrate professional communication in the content and presentation of your submission.
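The hash table the assignment calls for (requirement E/F: insert and look up by package ID without library hash-map classes) can be sketched as a chaining table over plain lists. The class name, bucket count, and sample data below are illustrative assumptions, not a reference solution for the assessment:

```python
# Sketch of a chaining hash table keyed by package ID, built without any
# library hash-map classes: each bucket is a plain Python list.
class PackageHashTable:
    def __init__(self, capacity=40):
        self.buckets = [[] for _ in range(capacity)]  # one chain per bucket

    def insert(self, package_id, address, deadline, city, zip_code,
               weight, status):
        bucket = self.buckets[package_id % len(self.buckets)]
        for item in bucket:          # update in place if the ID already exists
            if item[0] == package_id:
                item[1:] = [address, deadline, city, zip_code, weight, status]
                return
        bucket.append([package_id, address, deadline, city, zip_code,
                       weight, status])

    def lookup(self, package_id):
        # Returns all stored components for the ID, or None if absent.
        bucket = self.buckets[package_id % len(self.buckets)]
        for item in bucket:
            if item[0] == package_id:
                return item
        return None

table = PackageHashTable()
table.insert(9, "300 State St", "EOD", "Salt Lake City", "84103", 2, "at hub")
# The scenario's 10:20 a.m. correction for package #9 becomes an update:
table.insert(9, "410 S State St", "EOD", "Salt Lake City", "84111", 2, "en route")
print(table.lookup(9))
```

With the IDs 1..40 given in the scenario and 40 buckets, chains stay short, so insert and lookup are O(1) on average and O(n) only if many IDs collide in one bucket.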
Slickytail
A reddit bot that reminds people to cite their sources. Run on /u/citation-is-needed
Manajit89
This repository provides the data needed to produce the ERGM estimation on patent citation networks from European patents
JThistle9
I needed a CSV file of all the New York penal codes, so I made a web scraper in Python to go through all the title, article, and citation links from https://law.justia.com/codes/new-york/2018/pen/part-3/ and take the paragraphs and format them.
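The pipeline that description outlines (follow section links, grab the paragraphs, format them as CSV rows) can be sketched with only the standard library. The HTML snippet below is an invented stand-in for the real Justia markup, which this sketch makes no claim to match:

```python
# Sketch: extract (citation, paragraph) pairs from a statute page and emit
# them as CSV rows. SAMPLE_PAGE is invented example markup, not Justia's.
import csv
import io
from html.parser import HTMLParser

SAMPLE_PAGE = """
<h1>PEN Part 3 - Title H - Article 120</h1>
<a href="/codes/new-york/2018/pen/part-3/title-h/article-120/120.00/">120.00</a>
<p>Assault in the third degree.</p>
"""

class SectionParser(HTMLParser):
    """Collect link texts (section citations) and paragraph texts."""
    def __init__(self):
        super().__init__()
        self.citations, self.paragraphs = [], []
        self._in = None                     # tag we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "p"):
            self._in = tag

    def handle_endtag(self, tag):
        if tag == self._in:
            self._in = None

    def handle_data(self, data):
        if self._in == "a":
            self.citations.append(data.strip())
        elif self._in == "p":
            self.paragraphs.append(data.strip())

parser = SectionParser()
parser.feed(SAMPLE_PAGE)

buf = io.StringIO()
writer = csv.writer(buf)
for cite, text in zip(parser.citations, parser.paragraphs):
    writer.writerow([cite, text])
print(buf.getvalue().strip())   # → 120.00,Assault in the third degree.
```

A real run would fetch each linked page over HTTP before feeding it to the parser; that part is omitted here to keep the sketch self-contained.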
kr-viku
The player guessing the word may, at any time, attempt to guess the whole word.[citation needed] If the word is correct, the game is over and the guesser wins.
Aryia-Behroziuan
The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of the recognition problem are described in the literature:[citation needed]
- Object recognition (also called object classification) – one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Blippar, Google Goggles and LikeThat provide stand-alone programs that illustrate this functionality.
- Identification – an individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or identification of a specific vehicle.
- Detection – the image data are scanned for a specific condition. Examples include detection of possible abnormal cells or tissues in medical images or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation.
Currently, the best algorithms for such tasks are based on convolutional neural networks. An illustration of their capabilities is given by the ImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition.[29] Performance of convolutional neural networks on the ImageNet tests is now close to that of humans.[29] The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease.[citation needed]
Several specialized tasks based on recognition exist, such as:
- Content-based image retrieval – finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X), or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter, and have no cars in them).
- Computer vision for people-counter purposes in public places, malls, and shopping centres.
- Pose estimation – estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly-line situation or picking parts from a bin.
- Optical character recognition (OCR) – identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII).
- 2D code reading – reading of 2D codes such as Data Matrix and QR codes.
- Facial recognition.
- Shape Recognition Technology (SRT) in people-counter systems, differentiating human beings (head-and-shoulder patterns) from objects.
javajawa
DoCitten: The Cutest IRC bot ever [citation needed]
pratheeknagaraj
citation needed project
Small webhook server to augment a self-hosted Ghost site at CitationNeeded.news
molly
Patches to the Ghost core software, used for the Citation Needed newsletter
Aryia-Behroziuan
In the late 1960s, computer vision began at universities which were pioneering artificial intelligence. It was meant to mimic the human visual system, as a stepping stone to endowing robots with intelligent behavior.[11] In 1966, it was believed that this could be achieved through a summer project, by attaching a camera to a computer and having it "describe what it saw".[12][13] What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding.
Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation.[11] The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields.[14]
By the 1990s, some of the previous research topics became more active than the others. Research in projective 3-D reconstructions led to better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering.[11]
Recent work has seen the resurgence of feature-based methods, used in conjunction with machine learning techniques and complex optimization frameworks.[15][16] The advancement of deep learning techniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets, for tasks ranging from classification and segmentation to optical flow, has surpassed prior methods.[citation needed]
DarrenAbramson
The purpose of this code is to provide an example of a behavioral analysis of Wikipedia. The intended application is for providing empirical justification for a controversial epistemological category.
timdream
Insert [citation needed] into systemically biased Wikipedia articles
ChildishGiant
Randomiser for handpicked interesting wiki articles
DanGodfrey
No description available
jonnybrooks
A wikipedia web game
aaditkapoor
Need Citation: A way to fact-check medical/health-related claims using GPT
Altreus
Hold people accountable for their shit
norseboar
Browser extension to chain claims made online to primary sources
jRimbault
Wikipedia trivia game