Found 46 repositories (showing 30)
Mazhichaoruya
Uses the Intel RealSense D435 depth camera to perform target detection with YOLOv3 under the OpenCV DNN framework, and 3D-localizes detected objects from the depth information, displaying their coordinates in the camera coordinate system in real time. Added: real-time object detection using a YOLOv5 TensorRT model on the AGX Xavier.
minhphan46
📱A React Native app powered by the Viro library that detects and displays images and objects in real time, presenting lifelike 3D models through augmented reality (AR)🕶️
otacke
H5P content type that features Augmented Reality (AR). Using a device's camera, find physical markers attached to objects in order to trigger the display of 3D models or to trigger H5P interactions.
This repository presents code to detect the rear of cars using R-CNNs. The dataset consists of road images captured in different conditions, such as daylight and night. The labels are given in .csv format: each row of the labels file contains the image name, the bounding-box coordinates (x_min, x_max, y_min and y_max), and the label itself. These details are extracted from the csv file and stored in a dataframe. Only a subset of the data was trained on due to resource exhaustion. All the details are given below.

Object detection: There are two parts to object detection, object classification and object localization. Bounding boxes are usually used for localization and the labels for classification. The two major techniques used in industry for object detection are R-CNNs and YOLO. I have dedicated the time spent on these assignments to learning one of these techniques: R-CNNs.

Region-Based Convolutional Neural Networks: The R-CNN architecture is very extensive, with different blocks of layers for the two purposes mentioned above, classification and localization. The code I have used takes VGG-16 as the first block of layers, which takes in images as 3D tensors and gives out feature maps. To make use of transfer learning, I have used pre-trained weights for this model. This is the base network. The next block is the Region Proposal Network (RPN), a fully convolutional network built on the very interesting concept of anchors. Anchors solve the problem of choosing bounding-box sizes: the image is scaled down and each pixel works as an anchor, with each anchor defining a certain number of bounding-box primitives. The RPN predicts the score of an object being inside each of these primitives. A Region of Interest (ROI) pooling layer comes next; it takes ROIs of the feature map so that each bounding box can be compared and classified.

A post-processing technique, non-maximum suppression, is used to select the bounding box with the highest probability of containing the object. The image is scaled back up and this box is displayed.

Hyperparameters used: 2252 training samples, 176 test samples, 4 ROIs, an epoch length of 500, 91 epochs, and 9 anchors. All results are visible in the training and testing ipynb files. After running only 40 epochs, the mAP over the test data was 0.68, which is close to the expected 75%. I trained further and the accuracy visibly improved, judging from the loss graph and the bounding-box quality, but unfortunately I cannot report the mAP after that training round: I increased the dataset size and now always get a resource-exhaustion error. I plan to make the code more modular so that resources can be allocated to the different modules separately and this issue is overcome. The accuracy can be improved further by training on a larger dataset and running for more epochs, which I will try.
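The non-maximum suppression step described above can be sketched in plain Python. This is a minimal illustration only, not the repository's actual code: the function names, the (x_min, y_min, x_max, y_max) box format, and the 0.5 IoU threshold are all assumptions.

```python
# Illustrative sketch of non-maximum suppression (NMS): keep the
# highest-scoring box and discard lower-scoring boxes that overlap it.

def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Return indices of boxes kept after suppression, best score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Drop every remaining box that overlaps the kept box too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

For example, two heavily overlapping detections of the same car rear collapse to the single higher-scoring box, while a distant detection survives.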
Akash-Ramjyothi
Pokem"AR" is a small mimic of the famous augmented reality game Pokémon GO. Geolocation is not used here and the models are not placed in the real world. A 3D game object appears as a Pokémon; when the ball is touched, it automatically collides with the game object, and the details of the Pokémon are stored and displayed through PokeAPI.
donfuegosf
AR Object Viewer & Scanner: This iOS application leverages Object Capture and ARKit to allow users to select and display 3D objects in augmented reality, and to create high-quality 3D models using a LiDAR scanner. Suitable for devices with an A14 Bionic chip or later, running iOS or iPadOS 17 or later.
Joon1221
An object loader that can display 3D models in a 3D space using the DX12 library.
The-Assembly
Three.js is a revolutionary open-source cross-platform JavaScript API and library that is used to create and display 3D computer graphics in a desktop web browser using WebGL, working at a high level to create GPU-accelerated animations without the need for plugins or external applications. The library comes packed with features not commonly associated with the web, including on-the-fly lighting effects, multiple camera angles, object geometry & material manipulation, and even VR/AR support through WebXR. In this session, we’ll cover the basics of the library and demonstrate how to code websites with 3D backgrounds, as well as show you how to bring 3D models onto the page for a whole new end-user experience you might not have thought possible before. Prerequisites: ✅ Visual Studio Code (https://code.visualstudio.com/download) ✅ Basic knowledge of JavaScript and web programming ----------------------------------------- To learn more about The Assembly’s workshops, visit our website, social media or email us at workshops@theassembly.ae Our website: http://theassembly.ae Instagram: http://instagram.com/makesmartthings Facebook: http://fb.com/makesmartthings Twitter: http://twitter.com/makesmartthings #3DAnimation #Three.js
swetamishra123
A web page displaying a 3D object using Google's Model Viewer.
Rockindash
Detects the scanned 3D object and displays an iPhone model on top of it
IT21219498
Develop a simple marker-based augmented reality (AR) application (Web/Mobile) that displays 3D models of objects
Ticketedmoon
A mini-project involving 3D models & animations. I am using the WebGL framework to animate and display 3D objects on a website.
I-Am-Lain
A single-page application (SPA) where users can view NASA data displaying all near-Earth objects in a 3D model.
MohdAnam
An AR-based app that augments multiple 3D models of objects starting with a given letter, for example A for Airplane: a 3D model of an airplane then displays on the screen.
EliseevDmitry
ARModelViewer is a module for integrating 3D models into augmented reality using SwiftUI and ARKit/RealityKit. It supports loading models from resources and files, displays gesture hints, and enables moving, rotating, and scaling 3D objects within the AR environment.
amin-Lotfi
A 3D Model Viewer application built using PyQt5 and PyOpenGL. This program allows users to load, display, and interact with 3D models in .obj format. Users can rotate the model, zoom in and out, and highlight specific points by entering their coordinates. It provides an intuitive graphical interface for exploring and visualizing 3D objects.
Samekah
Final-year coursework for my Advanced Computer Graphics module, where we were tasked with understanding ways of representing, rendering, and displaying pictures of 3D objects by mathematically modelling the interaction of light with mesh models through creating a ray tracer.
sadiq-skd
This Python script utilizes OpenCV, NumPy, and TensorFlow to create a 3D model from an input image, performing image segmentation, and estimating the weight and dimensions of the segmented objects. Additionally, it predicts materials using the MobileNetV2 model, displaying results and visualizations.
itsarnaud
The university project ESCAPE involves designing an interactive map to showcase an iconic object from a selected region, focusing here on a Viking Langskip. It uses Svelte and Three.js to create a web application that displays a 3D model in the browser. The 3D model (a GLB file) is rendered using Three.js, with a reactive user interface.
FavioJasso
This is the front end to "D3D", aka Data in 3 Dimensions: a group project for which I led front-end development and design, an augmented reality web application displaying data visualizations that have been turned into 3D objects/models.
CoraZhang
This course introduces the basic concepts and algorithms of computer graphics. It covers the basic methods needed to model and render 3D objects, including much of the following: graphics displays, basic optics, line drawing, affine and perspective transformations, windows and viewports, clipping, visibility, illumination and reflectance models, radiometry, energy transfer models, parametric representations, curves and surfaces, texture mapping, graphics hardware, ray tracing, graphics toolkits, animation systems.
The aim of my project was to develop a user-friendly method for generating realistic 3D models and displaying them on smartphones using augmented reality. The only input required from the user is a short video of the desired object, which can be uploaded to the app. This project uses a modified version of Nvidia's Instant-NeRF.
tapeshjoham
We are trying to build up holographic projection of objects around us and to exploit it for easy visualization of models, which could be used in education, in fields like architecture, and in modelling software like SolidWorks, AutoCAD, etc. We are trying to add dynamic movement to the 3D objects for real-time behavior, such as scaling a structure or moving it from one place to another. We are breaking the project into five or six steps, starting from a very basic model of a cube and proceeding further to make the holograms. 3D holographic display technology has endless applications, as far as the human mind can imagine. Holography, being the display technology closest to our real environment, may be just the right substitute when reality fails; it provides a real-time live view.
juns19
Computer graphics is concerned with all aspects of producing pictures or images using a computer; with it, we can create images that are indistinguishable from photographs of real objects. We are implementing our project using a particular graphics software system, OpenGL. OpenGL is strictly defined as "a software interface to graphics hardware." In essence, it is a 3D graphics and modelling library that is extremely portable and very fast, and with it you can create elegant and beautiful 3D graphics. OpenGL is intended for use with computer hardware that is designed and optimized for the display and manipulation of 3D graphics, and it is used for a variety of purposes, from CAD, engineering, and architectural applications to computer-generated dinosaurs in blockbuster movies. The aim of this project is to show the implementation of a stack and a queue using numbers as elements. The stack is a data structure that works on the LIFO (Last In, First Out) principle, while the queue works on FIFO (First In, First Out): numbers are pushed onto or popped from the stack, while in the queue numbers are inserted or deleted. When there is no element in the stack, the message "Stack is Empty" is shown; for the queue it shows "No elements to display in queue".
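The stack and queue behavior described above can be sketched in a few lines of Python. The helper names are illustrative assumptions; the project itself renders these operations with OpenGL, which is not shown here.

```python
# Sketch of LIFO stack and FIFO queue semantics, including the empty-state
# messages the project displays. Not the project's OpenGL code.
from collections import deque

def pop(stack):
    # LIFO: remove and return the most recently pushed number.
    return stack.pop() if stack else "Stack is Empty"

def dequeue(queue):
    # FIFO: remove and return the earliest inserted number.
    return queue.popleft() if queue else "No elements to display in queue"

stack, queue = [], deque()
stack.append(7)   # push 7
stack.append(9)   # push 9
queue.append(7)   # insert 7
queue.append(9)   # insert 9
```

Popping the stack now yields 9 first (last in, first out), while dequeuing yields 7 first (first in, first out); once either container is empty, the corresponding message is returned instead.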
Sign language is used by hearing- and speech-impaired people to transmit their messages to other people, but it is difficult for people unfamiliar with this gesture-based language to understand it. Instantaneous feedback on sign language can significantly enhance understanding. In this paper, we propose a system that detects Bangla Sign Language using a digital motion sensor called the Leap Motion Controller, a device that can detect the 3D motion of hands, fingers, and finger-like objects without any contact. A sign language recognition system has to be designed to recognize hand gestures; in sign language, gestures are defined as specific patterns or movements of the hands that convey an expression. There has to be a library that includes all the datasets to match against user-given gestures: we compare the sequences of data we get from the Leap Motion with our datasets to find the optimal match, which is the output, and then show it as text on the display. For our system, we chose the $P Point-Cloud Recognizer algorithm to match the input data with our datasets. This recognition algorithm was designed for rapid prototyping of gesture-based UIs and can deliver an average accuracy of over 99% in user-dependent testing. Our proposed model is designed so that hearing- and speech-impaired people can communicate easily and efficiently with other people.
laura-james
This JavaScript code takes a 3D object designed in Blender or Blockbench.net as a GLTF file and displays it as a 3D model
sonal-ojha
Website that displays 3D models for the objects
HussienFahmy36
Displays furniture as 3D model objects using ARKit
Sokol264
Viewer capable of displaying 3D models from object files
krishnaupadhya
App for kids that displays 3D models of animals and other objects