Found 743 repositories (showing 30)
Aryia-Behroziuan
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.

The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68]

Decision trees

Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making.

Support vector machines

Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[69] The resulting SVM model is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVMs in a probabilistic classification setting.
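As a concrete illustration of the SVM setup just described, here is a minimal scikit-learn sketch; the toy points and labels are invented for illustration, and scikit-learn is an assumed dependency:

```python
# Minimal SVM sketch: a non-probabilistic, binary, linear classifier.
from sklearn.svm import SVC

# Toy training examples, each marked as one of two categories.
X_train = [[0.0, 0.0], [0.2, 0.4], [0.5, 0.1], [0.3, 0.3], [0.1, 0.5],
           [1.8, 1.9], [2.0, 2.2], [2.3, 1.7], [1.6, 2.0], [2.1, 1.8]]
y_train = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# Predict which category a new example falls into.
print(clf.predict([[1.7, 2.1]]))  # -> [1]

# Platt scaling (probability=True) wraps the same SVM in a
# probabilistic classifier, at extra training cost.
prob_clf = SVC(kernel="linear", probability=True).fit(X_train, y_train)
print(prob_clf.predict_proba([[1.7, 2.1]]))
```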
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.

Regression analysis

[Figure: Illustration of linear regression on a data set.]

Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification), and kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space.

Bayesian networks

[Figure: A simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet.]

A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.

Genetic algorithms

A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[73]

Training models

Machine learning models usually require a lot of data in order to perform well. When training a machine learning model, one needs to collect a large, representative sample of data from a training set. Data from the training set can be as varied as a corpus of text, a collection of images, or data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model.

Federated learning

Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices.
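A minimal sketch of the federated-averaging idea behind this, in plain NumPy; the per-client datasets and the linear model are invented stand-ins, not anything from a real deployment:

```python
# Federated averaging sketch: each client trains on its own data locally,
# and only model weights (never the raw data) are sent to the server.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])  # ground truth for the toy problem

def make_client_data(n=20):
    # Invented per-client dataset (a stand-in for, e.g., one phone's data).
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.05, steps=20):
    # A client's local training: a few gradient steps on squared error.
    w = w.copy()
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

clients = [make_client_data() for _ in range(5)]
w_global = np.zeros(3)
for _ in range(10):  # communication rounds
    # Each client trains locally; the server only averages the weights.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

print(w_global)  # approaches true_w without pooling any client's data
```

Production systems, like the Gboard deployment described next, add client sampling, update compression, and secure aggregation on top of this basic train-locally-then-average loop.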
For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[74]

Applications

There are many applications for machine learning, including: agriculture, anatomy, adaptive websites, affective computing, banking, bioinformatics, brain–machine interfaces, cheminformatics, citizen science, computer networks, computer vision, credit-card fraud detection, data quality, DNA sequence classification, economics, financial market analysis,[75] general game playing, handwriting recognition, information retrieval, insurance, Internet fraud detection, linguistics, machine learning control, machine perception, machine translation, marketing, medical diagnosis, natural language processing, natural language understanding, online advertising, optimization, recommender systems, robot locomotion, search engines, sentiment analysis, sequence mining, software engineering, speech recognition, structural health monitoring, syntactic pattern recognition, telecommunication, theorem proving, time series forecasting, and user behavior analytics.

In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation"), and it changed its recommendation engine accordingly.[77] In 2010, The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis.[78] In 2012, Vinod Khosla, co-founder of Sun Microsystems, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists.[80] In 2019, Springer Nature published the first research book created using machine learning.[81]

Limitations

Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.[87][88]

Bias

Machine learning approaches in particular can suffer from different data biases. A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data.
When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society.[89] Language models learned from data have been shown to contain human-like biases.[90][91] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[92][93] In 2015, Google Photos would often tag black people as gorillas,[94] and as of 2018 this had still not been well resolved: Google was reportedly still using the workaround of removing all gorillas from the training data and was thus unable to recognize real gorillas at all.[95] Similar issues with recognizing non-white people have been found in many other systems.[96] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[97] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[98] Concern for fairness in machine learning (that is, reducing bias in machine learning and propelling its use for human good) is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There’s nothing artificial about AI...It’s inspired by people, it’s created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[99]

Model assessments

Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training and 1/3 test set designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each using 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[100]

In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates, so TOC provides more information than the commonly used receiver operating characteristic (ROC) and ROC's associated area under the curve (AUC).[101]
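A minimal sketch of the holdout and K-fold procedures just described, assuming scikit-learn; the logistic-regression model and synthetic dataset are arbitrary stand-ins:

```python
# Holdout and K-fold cross-validation on a synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=300, random_state=0)
model = LogisticRegression(max_iter=1000)

# Holdout: the conventional 2/3 training, 1/3 test designation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
holdout_acc = model.fit(X_tr, y_tr).score(X_te, y_te)

# K-fold: K experiments, each evaluating on 1 subset and training on
# the remaining K-1 subsets.
kfold_accs = cross_val_score(model, X, y,
                             cv=KFold(n_splits=5, shuffle=True, random_state=0))

print(f"holdout accuracy: {holdout_acc:.2f}")
print("5-fold accuracies:", kfold_accs.round(2))
```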
Ethics

Machine learning poses a host of ethical questions. Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[102] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[103][104] Responsible collection of data and documentation of the algorithmic rules used by a system is thus a critical part of machine learning.

Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[105][106] Other forms of ethical challenges, not related to personal biases, are seen more in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma of improving health care but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals with a great tool to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until both the personal biases mentioned previously and these "greed" biases are addressed.[107]

Hardware

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units.[108] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[109] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[110][111]

Software

Software suites containing a variety of machine learning algorithms include the following: Free and open-source so
PeterVanPercson
A 30-day build challenge for people who can execute under pressure. You will build one real project every day across software, AI, systems, and applied engineering. Miss one day, and you are out. This is not a course. This is not passive learning.
SadeepaNHerath
Full-stack developer and AI specialist showcasing projects that combine software engineering expertise with AI innovations. Explore my code, contributions, and technical journey in this professional portfolio.
ishan190425
ADA lets you set up a team of AI agents that autonomously manage your software project — from strategy and research to product specs, engineering, ops, and design. Each agent has a specialized role, a playbook, and a shared memory bank for continuity.
Project Overview

Welcome to the Convolutional Neural Networks (CNN) project in the AI Nanodegree! In this project, you will learn how to build a pipeline that can be used within a web or mobile app to process real-world, user-supplied images. Given an image of a dog, your algorithm will identify an estimate of the canine's breed. If supplied an image of a human, the code will identify the resembling dog breed.

Along with exploring state-of-the-art CNN models for classification, you will make important design decisions about the user experience for your app. Our goal is that by completing this lab, you understand the challenges involved in piecing together a series of models designed to perform various tasks in a data processing pipeline. Each model has its strengths and weaknesses, and engineering a real-world application often involves solving many problems without a perfect answer. Your imperfect solution will nonetheless create a fun user experience!

Project Instructions

1. Clone the repository and navigate to the downloaded folder.
   git clone https://github.com/udacity/dog-project.git
   cd dog-project
2. Download the dog dataset. Unzip the folder and place it in the repo, at location path/to/dog-project/dogImages.
3. Download the human dataset. Unzip the folder and place it in the repo, at location path/to/dog-project/lfw. If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.
4. Download the VGG-16 bottleneck features for the dog dataset. Place it in the repo, at location path/to/dog-project/bottleneck_features.
5. (Optional) If you plan to install TensorFlow with GPU support on your local machine, follow the guide to install the necessary NVIDIA software on your system. If you are using an EC2 GPU instance, you can skip this step.
6. (Optional) If you are running the project on your local machine (and not using AWS), create (and activate) a new environment.
   Linux (to install with GPU support, change requirements/dog-linux.yml to requirements/dog-linux-gpu.yml):
   conda env create -f requirements/dog-linux.yml
   source activate dog-project
   Mac (to install with GPU support, change requirements/dog-mac.yml to requirements/dog-mac-gpu.yml):
   conda env create -f requirements/dog-mac.yml
   source activate dog-project
   NOTE: Some Mac users may need to install a different version of OpenCV:
   conda install --channel https://conda.anaconda.org/menpo opencv3
   Windows (to install with GPU support, change requirements/dog-windows.yml to requirements/dog-windows-gpu.yml):
   conda env create -f requirements/dog-windows.yml
   activate dog-project
7. (Optional) If you are running the project on your local machine (and not using AWS) and Step 6 throws errors, try this alternative step to create your environment.
   Linux or Mac (to install with GPU support, change requirements/requirements.txt to requirements/requirements-gpu.txt):
   conda create --name dog-project python=3.5
   source activate dog-project
   pip install -r requirements/requirements.txt
   NOTE: Some Mac users may need to install a different version of OpenCV:
   conda install --channel https://conda.anaconda.org/menpo opencv3
   Windows (to install with GPU support, change requirements/requirements.txt to requirements/requirements-gpu.txt):
   conda create --name dog-project python=3.5
   activate dog-project
   pip install -r requirements/requirements.txt
8. (Optional) If you are using AWS, install TensorFlow.
   sudo python3 -m pip install -r requirements/requirements-gpu.txt
9. Switch the Keras backend to TensorFlow.
   Linux or Mac:
   KERAS_BACKEND=tensorflow python -c "from keras import backend"
   Windows:
   set KERAS_BACKEND=tensorflow
   python -c "from keras import backend"
10. (Optional) If you are running the project on your local machine (and not using AWS), create an IPython kernel for the dog-project environment.
   python -m ipykernel install --user --name dog-project --display-name "dog-project"
11. Open the notebook.
   jupyter notebook dog_app.ipynb
12. (Optional) If you are running the project on your local machine (and not using AWS), before running code, change the kernel to match the dog-project environment by using the drop-down menu (Kernel > Change kernel > dog-project). Then, follow the instructions in the notebook.

NOTE: While some code has already been implemented to get you started, you will need to implement additional functionality to successfully answer all of the questions included in the notebook. Unless requested, do not modify code that has already been included.

Evaluation

Your project will be reviewed by a Udacity reviewer against the CNN project rubric. Review this rubric thoroughly and self-evaluate your project before submission. All criteria found in the rubric must meet specifications for you to pass.

Project Submission

When you are ready to submit your project, collect the following files and compress them into a single archive for upload:
- The dog_app.ipynb file with fully functional code, all code cells executed and displaying output, and all questions answered.
- An HTML or PDF export of the project notebook with the name report.html or report.pdf.
- Any additional images used for the project that were not supplied to you for the project.

Please do not include the project datasets in the dogImages/ or lfw/ folders. Likewise, please do not include the bottleneck_features/ folder.
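For a rough sense of the transfer-learning step at the heart of the notebook, here is a sketch of training a small classifier head on precomputed VGG-16 bottleneck features. The DogVGG16Data.npz file name, the array keys, the label files, and the 133-breed output size are assumptions about the downloaded archive's layout, not details taken from the instructions above:

```python
# Rough sketch of the transfer-learning step: train a small classifier
# head on precomputed VGG-16 bottleneck features. The file names, array
# keys, and 133-class output size below are assumptions, not givens.
import numpy as np
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Sequential

data = np.load("bottleneck_features/DogVGG16Data.npz")  # assumed file name
train_x, valid_x = data["train"], data["valid"]          # assumed keys
train_y = np.load("train_labels.npy")  # assumed one-hot breed labels
valid_y = np.load("valid_labels.npy")  # assumed one-hot breed labels

model = Sequential([
    # VGG-16 bottleneck features arrive as small activation maps.
    GlobalAveragePooling2D(input_shape=train_x.shape[1:]),
    Dense(133, activation="softmax"),  # one unit per breed (assumed 133)
])
model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_x, train_y, validation_data=(valid_x, valid_y),
          epochs=5, batch_size=32)
```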
fmelihh
Generative AI & Recommendation Engine --- Firat University / Faculty of Technology / Software Engineering / Final Project
wolffiex
Presentation and demos for https://qconsf.com/presentation/nov2025/engineering-ai-speed-lessons-first-agentically-accelerated-software-project
mohammadsaleem-dev
Personal portfolio website showcasing projects in Blockchain, AI, Cybersecurity, and Software Engineering.
Aryia-Behroziuan
The earliest work in computerized knowledge representation was focused on general problem solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959. These systems featured data structures for planning and decomposition. The system would begin with a goal. It would then decompose that goal into sub-goals and then set out to construct strategies that could accomplish each sub-goal. In these early days of AI, general search algorithms such as A* were also developed. However, the amorphous problem definitions for systems such as GPS meant that they worked only for very constrained toy domains (e.g. the "blocks world"). In order to tackle non-toy problems, AI researchers such as Ed Feigenbaum and Frederick Hayes-Roth realized that it was necessary to focus systems on more constrained problems.

These efforts led to the cognitive revolution in psychology and to the phase of AI focused on knowledge representation that resulted in expert systems in the 1970s and 80s, production systems, frame languages, etc. Rather than general problem solvers, AI changed its focus to expert systems that could match human competence on a specific task, such as medical diagnosis. Expert systems gave us the terminology still in use today, in which AI systems are divided into a knowledge base, with facts about the world and rules, and an inference engine that applies the rules to the knowledge base in order to answer questions and solve problems. In these early systems the knowledge base tended to be a fairly flat structure, essentially assertions about the values of the variables used by the rules.[2]

In addition to expert systems, other researchers developed the concept of frame-based languages in the mid-1980s. A frame is similar to an object class: it is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used in systems geared toward human interaction, e.g. understanding natural language and the social settings in which various default expectations, such as ordering food in a restaurant, narrow the search space and allow the system to choose appropriate responses to dynamic situations.

It was not long before the frame communities and the rule-based researchers realized that there was synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, and slots (data values) with various constraints on possible values. Rules were good for representing and utilizing complex logic such as the process of making a medical diagnosis. Integrated systems that combined frames and rules were developed. One of the most powerful and well-known was the 1983 Knowledge Engineering Environment (KEE) from IntelliCorp. KEE had a complete rule engine with forward and backward chaining. It also had a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than AI, it was quickly embraced by AI researchers as well, in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments.[3] The integration of frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics, spun off from various research projects.
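The knowledge base/inference engine split described above can be made concrete with a toy forward-chaining rule engine; the facts and rules below are invented for illustration:

```python
# Toy forward-chaining inference engine: a knowledge base of facts plus
# IF-THEN rules, applied repeatedly until no new facts can be derived.
facts = {"has_fever", "has_rash"}  # invented starting knowledge base

# Each rule pairs a set of required facts with a fact to conclude.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "order_confirmatory_test"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule, extending the KB
            changed = True

print(facts)  # now includes the two derived conclusions
```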
At the same time as this was occurring, there was another strain of research that was less commercially focused and was driven by mathematical logic and automated theorem proving. One of the most influential languages in this research was the KL-ONE language of the mid-'80s. KL-ONE was a frame language with a rigorous semantics and formal definitions for concepts such as the Is-A relation.[4] KL-ONE and languages influenced by it, such as Loom, had an automated reasoning engine based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, redefining a class to be a subclass or superclass of some other class that wasn't formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an ontology).[5]

Another area of knowledge representation research was the problem of common-sense reasoning. One of the first realizations from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent: basic principles of common-sense physics, causality, intentions, etc. An example is the frame problem: in an event-driven logic, there need to be axioms stating that things maintain position from one moment to the next unless they are moved by some external force. In order to make a true artificial intelligence agent that can converse with humans using natural language and can process basic statements and questions about the world, it is essential to represent this kind of knowledge. One of the most ambitious programs to tackle this problem was Doug Lenat's Cyc project. Cyc established its own frame language and had large numbers of analysts document various areas of common-sense reasoning in that language. The knowledge recorded in Cyc included common-sense models of time, causality, physics, intentions, and many others.[6]

The starting point for knowledge representation is the knowledge representation hypothesis, first formalized by Brian C. Smith in 1985:[7]

"Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge."

Currently, one of the most active areas of knowledge representation research is the set of projects associated with the Semantic Web. The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large ontologies of concepts. Searching for a concept will be more effective than traditional text-only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. Automatic classification gives developers technology to provide order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems.
The classifier technology provides the ability to deal with the dynamic environment of the Internet. Recent projects funded primarily by the Defense Advanced Research Projects Agency (DARPA) have integrated frame languages and classifiers with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capability to define classes, subclasses, and properties of objects. The Web Ontology Language (OWL) provides additional levels of semantics and enables integration with classification engines.[8][9]
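As a small illustration of the RDF capabilities just described, here is a sketch using the Python rdflib library; the vocabulary and URIs are invented:

```python
# Declaring classes, a subclass relation, and a property in RDF, then
# reading the subclass assertions back out. The vocabulary is invented.
from rdflib import RDF, RDFS, Graph, Literal, Namespace

EX = Namespace("http://example.org/schema#")
g = Graph()

# Classes and a subclass relationship (the kind of assertion a
# classifier-based reasoner can also infer on its own).
g.add((EX.Animal, RDF.type, RDFS.Class))
g.add((EX.Dog, RDF.type, RDFS.Class))
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))

# A property of objects, with a human-readable label.
g.add((EX.hasName, RDF.type, RDF.Property))
g.add((EX.hasName, RDFS.label, Literal("has name")))

for sub, _, sup in g.triples((None, RDFS.subClassOf, None)):
    print(f"{sub} is a subclass of {sup}")
```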
Sri Venkateshwara University (SVU) strives to create professionals who are adept not only in academics but also in application for the benefit of humanity. We foster a culture of learning by doing. We believe in nurturing students who are at the forefront of innovation by offering an environment of research & development, to make us the Best University in Uttar Pradesh (UP).

SVU believes in experiential learning. To facilitate this, we have an ultra-modern infrastructure that motivates students to experiment & excel in their area of interest. The Best University of Moradabad has laboratories & workshops that signify our commitment to core research, thus enabling innovation. SVU is the only institution to have set up labs in collaboration with industry. This way we can train our students on the latest skills & make them employable. Students sharpen their practical skills under the watchful eyes of trainers & become competent professionals.

For the overall development of the students, we organize cultural programs. Students take part in these programs & exhibit their talent to become confident professionals. The annual fest attracts students from all over the country, who showcase their talent, making us the Top University in India. We have equipped the computing labs with the latest software & hardware to augment the technical skills of the students. SVU's library is an epitome of knowledge. It has over 3000 books & journals that ensure the students are never short of intellectual input. The team of industry trainers educates them on the key skills so crucial for employment & makes us the Best University in Gajraula. The specially created engineering labs help engineers refine the technical acumen so much needed by the country.

The Chairman, Dr. Sudhir Giri, believes in removing all the economic & social barriers that can hinder education. Hence, SVU provides many scholarships & grants to meritorious students. To date, the university has enabled over 500,000 students to attain their academic desires, making us the Best Private University in Uttar Pradesh (UP). The group runs a dozen educational institutions, including medical colleges in India & abroad. Our commitment to education & healthcare has enabled Dr. Sudhir Giri to win the International Glory Man of the Year Award 2021.

The Best Private University in Moradabad is on the Delhi-Moradabad highway, well connected by rail & road. The green surroundings provide peace of mind that enables research-based learning. The carefully recruited faculty is the pride of the university. They have years of industrial & academic experience, so vital for the students. They transfer key skills & make us the Best Private University in Gajraula. The faculty encourages students to undertake research & sharpen the skills that will enable them to get jobs. The majority of the faculty members are doctorates who educate the students to become competent professionals. The faculty takes part in faculty development programs (FDPs) in order to develop a culture of research.

The specialty of SVU is the internship. We have partnered with leading industries to provide internships to the students. We believe that education without applicability is incomplete. Students gain hands-on exposure through internships & become job-ready. We place most of the students during their internship, making us the Top University in India.

SVU, the Best University in Uttar Pradesh (UP), adopts a futuristic teaching pedagogy. We strive for experiential learning for our students through role plays, projects & presentations. The students take part in the learning activity & imbibe concepts that enable their placements. The air-conditioned seminar & conference halls allow knowledge dispersion for the development of the students.

The University runs over 150 undergraduate (UG), postgraduate (PG), Ph.D., diploma, and certificate courses in various fields of Applied Sciences, Medical Science, and Humanities & Social Sciences. We also run courses in Languages, Design, Agriculture, Engineering & Technology, Nursing, Pharmacy, Paramedical, Commerce & Management, Law, Library & Information Sciences, and Mass Comm. & Journalism to enhance the employability of the youth.

SVU has a culture of project-based learning. Students do projects in each semester under the guidance of faculty. They complete these projects in earmarked industries to garner hands-on skills. Through these projects, we train students in the hot skills so crucial for employment, making us the Best University in Moradabad.

SVU's Research & Development (R&D) wing encourages students to work on research areas important for the country. We have partnered with leading research institutions to undertake research. The breathtaking infrastructure of the best university in Gajraula motivates researchers to achieve their research goals. Owing to our dedication, SVU has received grants from the Government of India for research in areas of national importance. The faculty members provide guidance to the scholars until they achieve their aim.

We have set up an incubation center to give a fillip to new ideas that foster entrepreneurship. We want to be an institution that supports the 'Make in India' vision of the government. The center supports new ideas that enable young entrepreneurs to create startups & become successful. Under the strong leadership of Dr. Sudhir Giri, to date we have successfully incubated 150 start-ups. This speaks of our exemplary education & makes us the Best Private University in Uttar Pradesh (UP). These startups are not only creating wealth but also providing employment to the needy. Industrialists have observed that educational institutions will be the epicenter of entrepreneurship, and that they must be provided with the support & infrastructure for this. The annual hackathon attracts individuals who showcase their business acumen, making us the Best Private University in Moradabad.

SVU has a dedicated International Research & Collaboration Cell (IRCC) that collaborates with universities abroad. For faculty & students who want to pursue studies abroad, the IRCC handles the admission formalities. We have partnered with reputed institutions to provide excellent research collaborations. For those who wish to pursue a Ph.D. abroad, the IRCC helps them gain admission, making us the Top University in India. Many of our faculty members are pursuing their research internationally & contributing to the welfare of humanity.

SVU strives to make our students feel comfortable on campus. Separate hostels for boys & girls with 24-hour security are available at SVU. The cafeteria serves nutritious food to the students. The gym, recreation hall & sports ground help our students relax & make us the Best University in Uttar Pradesh (UP). The campus has an in-house ATM & convenience store for the benefit of the students.

SVU enables placement through exemplary training. We train students in communication & interpersonal skills in order to refine their personality. We have them practice mock interviews & group discussions that help them clear placement tests. Ninety percent of the students are placed before their last semester, making us the best university in Moradabad. We have hired industrial trainers to provide training in blockchain, machine learning, artificial intelligence (AI), Python & data science. These trainers have years of experience that enables them to train the students well. The students gain key insights into these technologies & sharpen their acumen, making us the Best University in Gajraula.
sminerport
Welcome! This portfolio highlights my projects in software development, data engineering, analytics, and AI/ML. Feel free to explore, connect, or collaborate!
motykatomasz
Project reproducing the paper "Pythia: AI-assisted Code Completion System". The project was done for the course "Machine Learning for Software Engineering" at TU Delft.
hersh-kat
Stocky is an AI chatbot that can answer all your financial market questions. CS261 Software Engineering group project, in collaboration with Deutsche Bank.
hpi-sam
Projects on Generative AI in the context of Advanced Topics in Software Engineering
FelipeLVieira
An AI checkers game for a Software Engineering 2 project at Fluminense Federal University (UFF).
TejasNaik24
This is my professional resume built using LaTeX. It highlights my background in software engineering, AI/ML, and web development, with sections on education, experience, projects, and technical skills. The resume is designed for clarity, easy customization, and effective presentation of key qualifications.
raghu24k
GitHub is a web-based platform designed for developers to collaborate on, manage, and share their software projects. As a computer engineering student at RK University in Rajkot, Gujarat, and an aspiring AI engineer and developer, you will find GitHub an invaluable tool for your academic and professional journey.
wattanasiri
Travel application with AI-generated trip planning (KMUTT, CSS321-322 Software Engineering Project).
omerblank
This is my final project using AI for the 13th software engineering diploma.
AnerGcorp
Kevin AI is an open-source AI software engineer project focused on developing innovative artificial intelligence technologies. Our goal is to create scalable and adaptable AI solutions that address real-world challenges in software engineering by leveraging the collective expertise of our open-source community.
tiwariar7
SoleMate AI is an intelligent, content-based recommendation system designed to personalize footwear discovery, maintenance, and replacement. Developed as my third-year Computer Science project, it integrates machine learning, database management, and software engineering best practices to deliver a practical, user-centric solution.
UmaraNoor
A collection of AI projects developed by students of BSSE F21 in the Artificial Intelligence course taught by Dr. Umara Noor at Department of Software Engineering, International Islamic University, Islamabad, Pakistan.
crankysmh47
Software Engineering Project under VIctreat AI
songminjae
Term project for CS454: AI-Based Software Engineering (Autumn 2020)
Aayush-Mishraa
Personal portfolio showcasing automation projects, AI testing tools, and software engineering work.
Rocky-Dewan
A personal portfolio website to showcase my AI, full-stack, and software engineering projects.
gabrielegenovese
Project for the Service-Oriented Software Engineering course (Ingegneria del Software Orientata ai Servizi, ISOS), academic year 2023/2024.
Justgo13
Software engineering project based on the popular board game Risk. Implemented the MVC design pattern and built a Java GUI that allows players to play the Risk game with other players or with AI.
madhu-0912
A personal innovation lab for Software, Machine Learning, AI, and experimental engineering projects — building tomorrow’s intelligent systems today.
kevinkunkel98
Project for the University of Leipzig course Software Engineering for AI: StudyBrAIn is a personal AI RAG tutor integrated with a ChatGPT proxy.