Found 1,443 repositories (showing 30)
Tencent
Maybe the world's fastest logging library. Lightweight & industrial-grade, battle-tested in Honor of Kings. C++/Java/C#/Kotlin/TS, Unity/Unreal/HarmonyOS. (Possibly the world's fastest logging library: lightweight and industrial-grade, born in Honor of Kings, with multi-language and multi-platform support.)
jakartaee
The specification in Jakarta EE to help Jakarta EE developers create enterprise-grade applications using Java® and NoSQL technologies.
devonfw
devonfw Java stack - create enterprise-grade business apps in Java, safely and fast
Book Description:
Key Features: Learn how to use the MVVM software architectural pattern and see the benefits of using it with Windows Presentation Foundation (WPF); explore various ways to enhance efficiency through performance tuning and UI automation; obtain a deep understanding of data validation and the various methods that suit different situations.
Windows Presentation Foundation is rich in possibilities when it comes to delivering an excellent user experience. This book will show you how to build professional-grade applications that look great and work smoothly. We start by providing a foundation of knowledge to improve your workflow, including how to build the base layer of the application, which will support all that comes after it. We also cover the useful details of data binding. Next, we cover the user interface and show you how to get the most out of the built-in and custom WPF controls. The final section of the book demonstrates ways to polish your applications, from adding practical animations and data validation to improving application performance. The book ends with a tutorial on how to deploy your applications and outlines potential ways to apply your new-found knowledge so you can put it to use right away. The book also covers 2D and 3D graphics, UI automation, and performance tuning.
What you will learn: Use MVVM to improve workflow; create visually stunning user interfaces; perform data binding proficiently; implement advanced data validation; locate and resolve errors quickly; master practical animations; improve your applications' performance.
About the Author: Sheridan Yuen is a Microsoft .NET MCTS and Oracle Java SCJP certified software developer living in London, England. His passion for coding made him stand out from the crowd right from the start. From his second year onward at university, he was employed as a teaching assistant for the first year student coding
We are team Technophiles and participated in a 48-hour hackathon organized by Nirma University in collaboration with Binghamton University.
Our problem definition: to develop a solution, the first step is to understand the problem. The problem here is to develop an Application Programming Interface that can be easily integrated with Android and iOS to detect skin disease without any physical interaction with a dermatologist. The detected skin disease should be sent through WhatsApp to the particular patient and doctor.
Our college: Pandit Deendayal Energy University. Team members: Rushabh Thakkar, Divy Patel, Denish Kalariya, Yug Thakkar, and Shubham Vyas.
Project details: We made an application that classifies skin diseases into these given types: healthy, lupus, ringworm, and scalp_infections.
How did we make it? We analysed the given data first and concluded that it was not enough, so we searched for new datasets. We found these:
https://ieee-dataport.org/documents/image-dataset-various-skin-conditions-and-rashes
https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T
We segregated the Harvard dataset, combined all the datasets, and trained the TensorFlow image classification model multiple times. The accuracy was not satisfying, so we augmented the data to unbias the model and balance the dataset. Data augmentation was performed on the given data; we generated 800 images per disease and trained the model again. This time the accuracy was good. We exported the .tflite and label.txt files and imported them into Android Studio.
We used three Python scripts:
data_removal.py - removes images at random from a folder when it holds more images than required. Just change the total_files_req variable in the code to the number of files required after deletion.
data_augmentation.py - augments the data at random when a folder holds fewer images than required. Just change the total_files_req variable to the number of files required after augmentation. We vary image parameters such as clarity, rotation, brightness, etc.
image_classification_code.py - the main script, in which we trained the model and exported it to run in the app.
Models we tried: efficientnet-lite0 (USED in our project), efficientnet-lite1, efficientnet-lite2, efficientnet-lite3, efficientnet-lite4.
API: TensorFlow Lite. We used Android Studio for app development, with Java as the language. We synced all the Gradle files and updated the model files with the new model. The working model file is model.tflite. The TFLite classifier Java files are CameraActivity.java, CameraConnectionFragment.java, ClassifierActivity.java, and LegacyCameraConnectionFragment.java.
Dataset: uploaded on GitHub.
WORKING MODEL LINK: https://drive.google.com/file/d/1BnqfFInFkJJDkYDlmdj9VB601f7PjTdj/view?usp=sharing
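The repository's script text is not shown here, but the described behavior of data_removal.py (randomly delete images until a folder holds total_files_req files) can be sketched in a few lines. This is a reconstruction from the description, not the project's actual code; the function name trim_folder is an assumption.

```python
import os
import random

# Hypothetical sketch of data_removal.py's behavior: delete images at
# random until the folder holds no more than total_files_req files.
def trim_folder(folder, total_files_req):
    files = [f for f in os.listdir(folder)
             if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    excess = len(files) - total_files_req
    if excess <= 0:
        return 0  # already at or below the target, nothing to delete
    for name in random.sample(files, excess):
        os.remove(os.path.join(folder, name))
    return excess  # number of files deleted
```

The companion data_augmentation.py would do the inverse, generating perturbed copies (rotation, brightness, etc.) until the target count is reached.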
ikuzweelisa
A grades calculator app developed in Java
fazeelkhalid
Points: 100
Topics: Graphs, topological sort, freedom to decide how to represent data and organize code (while still reading in a graph and performing topological sort).
PLAGIARISM/COLLUSION: You should not read any code (solution) that directly solves this problem (e.g. implements DFS, topological sorting, or another component needed for the homework). The graph representation provided on the Code page (which you are allowed to use in your solution) and the pseudocode and algorithms discussed in class provide all the information needed. If anything is unclear in the provided materials, check with us. You can read materials on how to read from a file, read a Unix file, or tokenize a line of text, BUT not in sample code that deals with graphs or this specific problem. E.g. you can read tutorials about these topics, but not a solution to this problem (or a problem very similar to it). You should not share your code with any classmate or read another classmate's code.
Part 1: Main program requirements (100 pts)
Given a list of courses and their prerequisites, compute the order in which courses must be taken so that when taking a course, all its prerequisites have already been taken. All the files that the program reads from are in Unix format (they have the Unix EOL).
Provided files:
● Grading Criteria
● cycle0.txt
● data0.txt
● data0_rev.txt
● data1.txt - like data0.txt but the order of the prerequisite courses is modified on line 2.
● slides.txt (graph image) - courses given in such a way that they produce the same graph as in the image. (The last digit in the course number is the same as the vertex corresponding to it in the drawn graph. You can also see this in the vertex-to-course-name correspondence in the sample run for this file.)
● run.html
● data0_easy.txt - if you cannot handle the above file format, this is an easier file format that you can use, but 15 points will be lost in this case. More details are given in Part 3.
● Unix.zip - zipped folder with all data files.
● For your reference: EOL_Mac_Unix_Windows.png - EOL symbols for Unix/Mac/Windows.
Specifications:
1. You can use structs, macros, typedef.
2. All the code must be in C (not C++ or any other language).
3. Global or static variables are NOT allowed. The exception is using macros to define constants for the size limits (e.g. instead of using 30 for the max course name size). E.g. #define MAX_ARRAY_LENGTH 20
4. You can use static memory (on the frame stack) or dynamic memory. (Do not confuse static memory with static variables.)
5. The program must read a filename from the user. The filename (as given by the user) will include the extension, but NOT the path. E.g.: data0.txt
6. You can open and close the file however many times you want.
7. File format:
   1. Unix file. It will have the Unix EOL (end-of-line).
   2. Size limits:
      1. The file name will be at most 30 characters.
      2. A course name will be at most 30 characters.
      3. A line in the file will be at most 1000 characters.
   3. The file ends with an empty new line.
   4. Each line (except for the last empty line) has one or more course names.
   5. Each course name is a single word (without any spaces). E.g. CSE1310 (with no space between CSE and 1310).
   6. There is no empty space at the end of the line.
   7. There is exactly one empty space between any two consecutive courses on the same line. (You do not need to worry about tabs or more than one empty space between 2 courses.)
   8. The first course name on each line is the course being described, and the following courses are its prerequisites. E.g. given the lines
      CSE2315 CSE1310 MATH1426
      ENGL1301
      the first line describes course CSE2315 and indicates that CSE2315 has 2 prerequisite courses, namely CSE1310 and MATH1426. The second line describes course ENGL1301 and indicates that ENGL1301 has no prerequisites.
   9. You can assume that there is exactly one line for every course, even for those that do not have prerequisites (see ENGL1301 above). Therefore you can count the number of lines in the file to get the total number of courses.
   10. The courses are not given in any specific order in the file.
8. You must create a directed graph corresponding to the data in the file.
   1. The graph will have as many vertices as different courses listed in the file.
   2. You can represent the vertices and edges however you want.
   3. You do NOT have to use a graph struct. If you can do all the work with just the 2D table (the adjacency matrix), that is fine. You HAVE TO implement the topological sorting covered in class (as this assignment is on Graphs), but you can organize, represent, and store the data however you want.
   4. For the edges, you can use either the adjacency matrix representation or the adjacency list. If you use the adjacency list, keep the nodes in each list sorted in increasing order.
   5. For each course that has prerequisites, there is an edge from each prerequisite to that course. Thus the direction of the edge indicates the dependency. The actual edge will be between the vertices in the graph corresponding to these courses. E.g. file data0.txt has:
      c100
      c300 c200 c100
      c200 c100
      Meaning:
      c100 ------> c200
        \            |
         \           |
          \          |
           V         V
              c300
      (The above drawing is provided here to give a picture of how the data in the file should be interpreted and the graph that represents this data. Your program should *NOT* print this drawing. See the sample run for expected program output.)
      From this data you should create the correspondence:
      vertex 0 - c100
      vertex 1 - c300
      vertex 2 - c200
      and you can represent the graph using an adjacency matrix (the row and column indexes are provided for convenience):
        |  0  1  2
      -------------
      0 |  0  1  1
      1 |  0  0  0
      2 |  0  1  0
      E.g. E[0][1] is 1 because vertex 0 corresponds to c100, vertex 1 corresponds to c300, and c300 has c100 as a prerequisite. Notice that E[1][0] is not 1.
      If you use the adjacency list representation, then you can print the adjacency list. Each list must be sorted in increasing order (e.g. see the list for 0). It should show the corresponding node numbers. E.g. for the above example the adjacency list will be:
      0: 1, 2,
      1:
      2: 1,
   6. In order for the output to look the same for everyone, use the correspondence given here: vertex 0 for the course on the first line, vertex 1 for the course on the second line, etc.
9. Print the courses in topological sorted order. This should be done using the DFS (Depth-First Search) algorithm that we covered in class and the topological sorting based on DFS discussed in class. There is no topological order if there is a cycle in the graph; in this case print an error message. If in DFS-Visit, when looking at the (u,v) edge, the color of v is GRAY, then there is a cycle in the graph (and therefore topological sorting is not possible). See the lecture on topological sorting. (You can find the date based on the table on the Scans page and then watch the video from that day. I have also updated the pseudocode in the slides to show that. Refresh the slides and check the date on the first page. If it is 11/26/2020, then you have the most recent version.)
10. (6 points) Create and submit 1 test file. It must cover a special case. Indicate what special case you are covering (e.g. no course has any prerequisite). At the top of the file indicate what makes it a special case. Save this file as special.txt. It should be in Unix EOL format.
Part 2: Suggestions for improvements (not for grade)
1. CSE advisors are also mindful of, and point out to students, the "longest path through the degree", that is, the longest chain of course prerequisites (e.g. CSE1310 --> CSE1320 --> CSE3318 --> ...), as this gives a lower bound on the number of semesters needed until graduation. Can you calculate for each course the LONGEST chain ending with it? E.g. in the above example, there are 2 chains ending with c300 (size 2: just c100-->c300; size 3: c100-->c200-->c300), and you want to show longest path 3 for c300. Can you calculate this number for each course?
2. Allow the user to enter a list of courses taken so far (from the user or from a file) and print a list of the courses they can take (i.e. courses they have all the prerequisites for).
3. Ask the user to enter a desired number of courses per semester and suggest a schedule (by semester).
Part 3: Implementation suggestions
1. Reading from the file (15 points): for each line in the file, the code can extract the first course and its prerequisites. If you cannot process each line in the file correctly, you can use a modified input file that shows, on each line, the number of courses, but you will lose the 15 points dedicated to line processing. If your program works with the "easy files", in order to make it easy for the TAs to know which file to provide, please name your C program courses_graph_easy.c. Here is the modification shown for a new example. Instead of
   c100
   c300 c200 c100
   c200
   the file would have:
   1 c100
   3 c300 c200 c100
   1 c200
   That way, the first item on each line is a number that tells how many courses (strings) follow after it on that line. Everything is separated by exactly one space. All the other specifications are the same as for the original file (empty line at the end, no space at the end of any line, length of words, etc.). Here is data0_easy.txt
2. Make a direct correspondence between vertex numbers and course names. E.g. the **first** course name on the first line corresponds to vertex 0, the **first** course name on the second line corresponds to vertex 1, etc.
3. The vertex numbers are used to refer to vertices.
4. In order to add an edge in the graph you will need to find the vertex number corresponding to a given course name. E.g. find that c300 corresponds to vertex 1 and c200 corresponds to vertex 2; now you can set E[2][1] to be 1. (With the adjacency list, add node 1 in the adjacency list for 2, keeping the list sorted.) To help with this, write a function that takes as arguments the list/array of [unique] course names and one course name, and returns the index of that course in the list. You can use that index as the vertex number. (This is similar to the indexOf method in Java.)
5. To see all the non-printable characters that may be in a file, find an editor that shows them. E.g. in Notepad++: open the file, go to View -> Show Symbol -> Show All Characters. YOU SHOULD TRY THIS! In general, not necessarily for this homework, if you make the text editor show the white spaces, you will know whether what you see as 4 empty spaces comes from 4 spaces or from one tab, and you will see other hidden characters. This can help when you tokenize. E.g. here I am using Notepad++ to see the EOL for files saved with Unix/Mac/Windows EOL (see the CR/LF/CRLF at the end of each line): EOL_Mac_Unix_Windows.png
How to submit: Submit courses_graph.c (or courses_graph_easy.c) and special.txt (the special test case you created) in Canvas. (For courses_graph_easy.c you can also submit the "easy" files that you created.) Your program should be named courses_graph.c if it reads from the normal/original files. If instead it reads from the "easy" files, name it courses_graph_easy.c. As stated on the course syllabus, programs must be in C and must run on omega.uta.edu or the VM.
IMPORTANT: Pay close attention to all specifications on this page, including file names and submission format. Even in cases where your answers are correct, points will be taken off liberally for non-compliance with the instructions given on this page (such as wrong file names, wrong compression format for the submitted code, and so on). The reason is that non-compliance with the instructions makes the grading process significantly (and unnecessarily) more time-consuming. Contact the instructor or TA if you have any questions.
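The DFS-based topological sort the assignment above describes (WHITE/GRAY/BLACK coloring, with a cycle detected when an edge reaches a GRAY vertex) can be sketched compactly. The assignment itself requires C; this Python sketch only illustrates the algorithm and is not a solution to the homework.

```python
# DFS-based topological sort with cycle detection: revisiting a GRAY
# vertex means a back edge, hence a cycle and no topological order.
# adj maps each vertex to the vertices that depend on it (edge
# direction prerequisite -> course, as in the assignment).
WHITE, GRAY, BLACK = 0, 1, 2

def topological_sort(adj, n):
    color = [WHITE] * n
    order = []

    def visit(u):
        color[u] = GRAY
        for v in adj.get(u, []):
            if color[v] == GRAY:
                raise ValueError("cycle detected: no topological order")
            if color[v] == WHITE:
                visit(v)
        color[u] = BLACK
        order.insert(0, u)  # finished vertices go to the front

    for u in range(n):
        if color[u] == WHITE:
            visit(u)
    return order
```

For the data0.txt example (vertex 0 = c100, 1 = c300, 2 = c200, edges 0->1, 0->2, 2->1), any returned order places each prerequisite before the courses that require it.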
GuillaumeDerval
Simply grade student assignments made in Java or anything that runs on the JVM (Scala/Kotlin/Jython/...).
devleo-m
The Educational System is a robust back-end REST API developed in Java using the Spring Boot framework. It was designed to efficiently manage the academic operations of educational institutions, offering a complete solution for managing users, teachers, classes, courses, materials, and grades.
mikehelmick
Interface based grading of programming assignments (in Java)
shemaikuzwe
A desktop grade calculator app for students, made in Java
instabaines
submine is a research‑grade Python library for frequent subgraph mining that provides a unified, safe, and extensible interface over heterogeneous mining algorithms implemented in Python, C++, and Java.
sakit333
Spring_mySql_project is a backend REST API built with Spring Boot and MySQL. It leverages JPA for ORM, supports CRUD operations, and follows layered architecture best practices. Ideal for showcasing data persistence and API development in enterprise-grade Java applications.
s1iqbal
2D "Street Fighter"-style fighting game, done in Java with a fleshed-out run-time engine, sprite-sheet algorithm, state management, and characters that feature myself and my collaborator! Built as a fun demo project in grade 12.
stephnr
Sample OS simulator in Java. Constructed as part of a term project towards my Computer Science degree. Final grade: A
99Ahmadprojects
A small-scale project for student grade management in Java
TheRenegadeCoder
Unpack and automatically grade a collection of student submissions in Java
Nandhini131
How to calculate a grade using Java
minojsos
A university grading system written in Java that uses MySQL for data storage. GUI developed using the JavaFX library.
Rachanagaikwad49
Student Grade Calculator in Java
tthevegaa
Console-based academic management system in Java with persistent data storage, user roles, schedules, credentials, and grade assignment.
matteobettini
This repository contains a Java video game implementation of the board game Santorini. It has been developed for the course "Software Engineering" at Politecnico di Milano as part of the final examination projects for the Bachelor in Computer Engineering. For this project we have been awarded the maximum grade of 30 cum laude/30.
Songrui9269
MapReduce is a programming model that involves two steps. The first, the map step, takes an input set I and groups it into N equivalence classes I0, I1, I2, ..., IN-1. I can be thought of as a set of tuples <key, data>, and the map function assigns elements of I to the equivalence classes based on the value of key. In the second, reduce, step the equivalence classes are processed, and the set of tuples in an equivalence class Ij is reduced into a single value. MapReduce has become very popular in part because of its use by Google, but it is an old parallel programming model, and it is surprisingly general.
To perform a parallel MapReduce, the input is spread across the available processors. Each processor runs one or more instances of map, followed by one or more instances of reduce. Each instance of map will potentially form equivalence classes I0, I1, I2, ..., IN-1.
Consider the word-counting problem, which can be solved in parallel using MapReduce. Given a list of words, the output should consist of how many times each word appeared in the list (or text). Viewing the input as tuples, the word is the key and the data is the constant 1. A naive map function would collect all instances of a word into an equivalence class. Each equivalence class would then be assigned to a process pr, and pr would determine the cardinality of the equivalence classes from all maps, which would be the word count. A more intelligent map function would form singleton equivalence classes Iword, where the only element is <word, count>. The process pr that reduces Iword would receive the Iword equivalence classes from all of the map functions and perform a reduction on the class. In Google terminology, the function that performs this optimization is called a combiner, and it executes on the same process as the map.
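The map-plus-combiner idea for word counting can be sketched in a few lines. This is a toy, single-process illustration of the model, not the project code (the project must be threaded and distributed):

```python
from collections import Counter

# Toy map + combiner for word count: each mapper emits <word, 1> tuples,
# and the combiner collapses them into singleton equivalence classes
# <word, count> before anything is communicated to a reducer.
def map_words(text):
    return [(word, 1) for word in text.split()]

def combine(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def reduce_counts(partials):
    total = Counter()
    for part in partials:  # one combined partial dict per mapper
        total.update(part)  # Counter.update adds counts
    return dict(total)
```

With the combiner, each mapper ships one <word, count> record per distinct word instead of one record per occurrence, which is exactly the communication-volume saving described above.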
This is important since its function is to combine many members of an equivalence class into a single member, decreasing the volume of data that needs to be communicated between the map and reduce stages. A second optimization that can be performed is to group multiple equivalence classes together to be sent to the same reducer. Thus, the records for "cat", "dog", "test" and "homework" might be sent by different mappers to the same reducer. This enables all of them to be sent by a single communication operation, improving the efficiency of the communication. The question then becomes: how do we decide which equivalence classes to group together? This decision is made using a hash function H. Let's say we will have R reducers. Then a function 0 ≤ H(key) ≤ R-1 will group the equivalence classes into R groups to be sent to the R reducers.
What we will program: a MapReduce that executes on a distributed-memory machine and uses OpenMP on each node to compute the node-local MapReduce. The project will be done in three steps:
1. The OpenMP version with a wordcount MapReduce (20% of the project grade).
2. The MPI version, which uses the OpenMP version to perform node-local computation, with a wordcount MapReduce (20% of the project grade).
3. Final turn-in (60% of the project grade).
Details are given below. Note that even though I say OpenMP, you can use Pthreads, Java, or other code that supports multithreading to write the shared-memory version. Note that if you use Java you will need to use Java isolates to communicate between nodes/processes.
General information: The text for the MapReduce will be distributed across FI input text files, where FI > Nmpi*C, Nmpi is the number of nodes (machines and processes) used by MPI, and C is the number of cores on each processor.
OpenMP code (i.e. OpenMP code on a node):
There will be four kinds of threads:
● Reader threads, which read files and put the data read (or created by self-initialization) into a work queue. For wordcount, each work item will be a word. For the numerical problem, each entry can be a section of the array that a thread should work on.
● Mapper threads, which execute in parallel with Reader threads (at least until the Reader threads finish) and create combined records of words. I.e., if there are 2045 instances of "cat" in the files read by the program, the final output of the mapper threads will be a record that looks like <"cat", 2045>.
● Reducer threads, which operate on work-queue entries created by mapper threads and combine (reduce) them into a single record. Thus, for the word "cat", there is potentially a <"cat", count_i> record sent by every mapper thread t_i in the system, and the reducer will sum all of the counts and place the result on a work queue. For each word, there is exactly one Reducer thread in the system that handles it.
● Writer threads, which take a sum from the work queue and write it to a file. Note that each process can write its results to a separate file.
You may not need separate threads for each of these, only different work-queue entries. Reader and Writer threads run at different times, and Mapper and Reducer threads, within a node, can be made to run at different times. These threads can be made to do different tasks by pulling different work out of the work queues. This is not mandatory, i.e., you can have different groups of threads perform different tasks; thus you might have reader, mapper, reducer, and writer threads.
There is a work queue for each reducer thread, and Mapper threads will put work items into this queue. For load-balance purposes it is desirable that the range of the function H that determines which reducer gets a work item be from 0 to R, where R = k⋅numMappers and k is some constant.
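The routing rule just described (reducer index g = H(key), one work queue per reducer) can be sketched as follows. The particular hash (a sum of character codes) and the queue representation are illustrative assumptions; any deterministic hash with range 0..R-1 works.

```python
# Illustrative routing of combined <word, count> records to reducer
# work queues: g = H(key) % R, so every record for a given word always
# lands on the same reducer's queue. A deterministic hash is used here
# because Python's built-in hash() is not stable across runs.
def H(key):
    return sum(ord(c) for c in key)

def route(records, R):
    queues = [[] for _ in range(R)]
    for key, count in records:
        queues[H(key) % R].append((key, count))
    return queues
```

Making R a multiple of the number of mappers (R = k⋅numMappers) gives finer-grained buckets, which helps balance load across reducers.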
You need mechanisms to ensure that Mapper threads wait until all Readers have finished before considering themselves complete, i.e. the work queue from which Mapper threads get their work may be empty at some point in time but have data at a later point because an unfinished Reader thread put data in it. Mappers will need to put their data on a reducer's work queue based on the key (word) for that data: as mentioned above, the reducer for a key should be determined by some sort of hash function g = H(key), and all keys that map onto reducer g should be added to g's work queue.
Each process can assume it will be receiving data from every other node. This will simplify the communication structure of your program when you go to the MPI version. A node that sends no data should send an "empty" record letting the other process know it will get no data from it. As each process finishes its reduce work, it should write its results to an output file, close it, notify the master thread that it is finished so that it can terminate the job, and then terminate itself.
MPI version: The MPI version will use multiple nodes. Each node will run a copy of the OpenMP code above to perform local computations. A few changes need to be made to the OpenMP process on a node so that it communicates with the OpenMP processes running on other nodes:
● Instead of mappers putting their results onto a reducer's work queue, they should put them onto a list to be sent to other nodes.
● A sender thread should be used to send the results in these lists to the appropriate node.
● Each node should have a receiver thread that obtains data sent to it by the sender threads in other nodes.
● The receiver thread for a node will place the received data onto the node's work queues, one per reducer.
● Each node will read some portion of the FI > Nmpi*C input files.
We could statically define the files each node will process, but this could lead to some nodes getting many big files and other nodes getting many small files. Instead, each node should request a file from a master node, which will either send a filename back to the node or an "all done" message indicating that all files have been or are being processed.
Performance data and tuning: You should collect performance data showing:
● What the bottlenecks are in the code. This might involve the time Mapper threads spend waiting for work from Reader threads, how long I/O takes vs. mapping (not counting time spent waiting for I/O during mapping), and data to support the other numbers below.
● How much load imbalance there is within a node.
● How much load imbalance there is across nodes (i.e. the difference in time between when the first map node is ready to send its data and when the latest/last map node is ready to send its data to be reduced).
● You should experiment with different numbers of Reader threads.
Step deliverables:
● For the OpenMP version: speedup numbers when using 1, 2, 4, ..., #cores Mapper and Reader threads.
● For the MPI version: speedup numbers when using 1, 2, 4, ..., #nodes to run the program, with Mapper and Reader threads for each core on a node (i.e. you don't need to experiment with various numbers of nodes and cores).
● For the final turn-in version:
● A paper no longer than ten pages that describes your overall strategy, performance bottlenecks, performance numbers, and implementation positives and negatives (what you are happy about, what you would like to change).
● A full set of performance numbers for the word-count problem, with scaling by number of nodes and dataset size, and for the matrix-multiply problem.
● Speedups and efficiencies for 2, 4, 8 and 16 processors.
● The Karp-Flatt analysis on 2, 4, 8 and 16 processors.
● Curves showing the number of Reader threads vs. performance, and the number of map and reduce threads vs. performance.
Overall performance of the different parts of the MapReduce, and of the entire MapReduce. For baseline "serial" numbers, use a system with one thread for each of the tasks above. Also report performance numbers for different numbers of nodes, along with the various speedup metrics (speedup, efficiency and Karp-Flatt), and an explanation of why you are getting the speedups you are getting. I may have a meeting with each group to have you demonstrate your code; this would likely happen during dead week. The point distribution will be: 40% for a working parallel project with any speedup; 40% for the paper, the presentation of your results, and the explanation of your results; 20% for acceptable speedups or non-trivial explanations of unacceptable speedups.
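The speedup metrics asked for above are mechanical to compute: speedup is ψ = T_serial / T_parallel(p), and the Karp-Flatt experimentally determined serial fraction is e = (1/ψ − 1/p) / (1 − 1/p). A small sketch (the numbers in any report would of course come from measured runs, not from these helper functions alone):

```python
# Speedup and Karp-Flatt metric from measured run times.
# psi = T_serial / T_parallel(p). A roughly constant e across p suggests
# a fixed serial fraction limits scaling; a rising e suggests growing
# parallel overhead (communication, load imbalance).
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def karp_flatt(psi, p):
    return (1.0 / psi - 1.0 / p) / (1.0 - 1.0 / p)
```

For example, a program that achieves only 2x speedup on 4 processors has an experimentally determined serial fraction of 1/3.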
deblinaroy11
Student Grade Calculator in Java
castellanos70
Educational Game for 4th Grade Math Students written in Java
xtianus
Patterns, good practices and useful code to quickly build enterprise-grade web applications in Java using Spring, Hibernate and Thymeleaf.
mdaum
Java program that takes in a submission directory (or a list of .java files), runs the submissions through Moss, JPlag, and Plaggie, and reports cross-sections of the results. Support will be added for Prasun Dewan's CheckStyles output in his grader for those using it. Part of UNC research (MS student).
gregorian-09
Production-grade Point & Figure (PnF) charting engine in C++20 with C, Python, Rust, Java, and .NET bindings for pattern detection, trendline analysis, indicator computation, and real-time dashboarding.
OscarGamst
Group project spanning two semesters in the course "Applikasjonsutvikling" (Application Development) at USN. We created a fullstack exercise app using technologies like Java, Spring Boot, React.js, PostgreSQL and many others! Grade: B