Found 190 repositories (showing 30)
jordigilh
Open-source AIOps platform that closes the loop from Kubernetes alert to automated remediation. An LLM investigates incidents live via kubectl, matches a fix from a workflow catalog, and executes it — or escalates with a full RCA. Approval gates, confidence thresholds, and SOC2 audit trails keep humans in control.
corentinaltepe
A Windows application for picking a color out of Pantone's Uncoated Colors catalog. Two-way search: from reference name to RGB, and from RGB to reference. Matches an RGB value to the closest catalog color, and averages several picked colors for the best match.
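Closest-color matching of this kind is usually a nearest-neighbour search in RGB space. A minimal sketch of the idea in JavaScript, using a made-up mini-catalog (the real app uses Pantone's Uncoated Colors, not these hypothetical entries):

```javascript
// Hypothetical mini-catalog; stands in for Pantone's Uncoated Colors.
const catalog = [
  { ref: "Red-ish",  rgb: [200, 30, 40] },
  { ref: "Blue-ish", rgb: [20, 40, 200] },
  { ref: "Grey-ish", rgb: [128, 128, 128] },
];

// Squared Euclidean distance between two RGB triples.
const dist2 = (a, b) => a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);

// Find the catalog entry closest to a picked RGB value.
function closestColor(rgb) {
  return catalog.reduce((best, c) =>
    dist2(c.rgb, rgb) < dist2(best.rgb, rgb) ? c : best);
}

// Average several picked samples before matching, as the description says.
function averageRgb(samples) {
  return [0, 1, 2].map(i =>
    Math.round(samples.reduce((s, rgb) => s + rgb[i], 0) / samples.length));
}

console.log(closestColor([210, 25, 50]).ref);            // "Red-ish"
console.log(averageRgb([[200, 30, 40], [210, 20, 60]])); // [205, 25, 50]
```

Perceptually better matches would use a distance in a uniform color space (e.g. CIELAB) rather than raw RGB, but the structure of the search is the same.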
rlwastro
Robust astrometric registration and cross-match of astronomical catalogs
kevinkhu
TESS Input Catalog v8.2 and Gaia DR3 cross-match
demistry
Framework to match custom audio against a custom reference catalog based on ShazamKit
pointofsale
MongoDB console commands

// showing the existing dbs
show dbs
// switch to db test (only created when data is actually added)
use test
// prints the name of the current working db
db
// the following prints the count() of the links2 collection in the current db
db.links2.count()
// inserting a record into links2
db.links2.insert({title: "un titulo", url: "", comment: "", tags: ["un primer tag", "un segundo tag"], saved_on: new Date})
// building the document the JavaScript way...
data = {}
data.title = "un titulo"
data.tags = ["un tag", "otro"]
data.meta = {}
data.meta.OS = "win7"
db.links2.insert(data)
// printing the result of find() in structured JSON format
// (we pass the printjson function to forEach)
db.links2.find().forEach(printjson)
// retrieving only the first result of the find method
db.links2.find()[0]
db.links2.find()[0]._id
// getting the timestamp embedded in the _id (it is built, in part, from the creation time)
db.links2.find()[0]._id.getTimestamp()
/* The following function creates, when called, a new collection inside the working db
   that tracks the last id number used. This gives the same auto-increment behaviour
   as in relational DBs. You have to declare the function first. */
function counter(name) {
  var ret = db.counter.findAndModify({query: {_id: name}, update: {$inc: {next: 1}}, "new": true, upsert: true});
  return ret.next;
}
// so you can do something like
db.products.insert({_id: counter("products"), nombre: "primer nombre"})
// the result is something like:
// { "_id": 1, "name": "un producto" }
// { "_id": 2, "name": "otro producto" }
/* referencing in MongoDB */
db.users.insert({name: "Richard"})
var a = db.users.findOne({name: "Richard"})
db.links2.insert({title: "primer titulo", author: a._id})
// reference to another collection through the _id key...
// querying -- a way to do manual "inner joins":
var link = db.links2.findOne({title: "primer titulo"})
db.users.findOne({_id: link.author})
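The counter() pattern above relies on findAndModify's atomic upsert + $inc. Its behaviour can be sketched in plain JavaScript with an in-memory stand-in (no MongoDB required; note that only findAndModify gives real atomicity across concurrent clients, this sketch is single-process):

```javascript
// In-memory stand-in for the db.counter collection.
const counters = {};

// Mirrors the upsert + $inc logic of the shell counter() function.
function counter(name) {
  if (!(name in counters)) counters[name] = { _id: name, next: 0 }; // upsert: true
  counters[name].next += 1;                                         // $inc: {next: 1}
  return counters[name].next;                                       // "new": true
}

console.log(counter("products")); // 1
console.log(counter("products")); // 2
console.log(counter("users"));    // 1
```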
// Within the users db we search for a match of our _id in the links2 db's author field.
// ---note--- Embedding is much more efficient when reads significantly outnumber writes.
// Otherwise, consider the normalized (referenced) approach. It depends on each case.

// Importing data from a .js file of mongo commands, with mongod running (or as a service):
> ../../../mongodb/bin/mongo 127.0.0.1/bookmarks bookmarks.js
// the first part is the path to the mongo executable in its usual location
// the second part is the server and db we will import into
// the third part is the file with all the mongo commands
// -- this bookmarks file is in C:\Tuto\mongo\trying --
// https://raw.github.com/tuts-premium/learning-mongodb/master/08%20-%20bookmarks.js

/* bookmarks.js extract */
var u1 = db.users.findOne({ 'name.first': 'John' }),
    u2 = db.users.findOne({ 'name.first': 'Jane' }),
    u3 = db.users.findOne({ 'name.first': 'Bob' });
db.links.insert({
  title: 'Nettuts+',
  url: 'http://net.tutsplus.com',
  comment: 'Great site for web dev tutorials',
  tags: ['tutorials', 'dev', 'code'],
  favourites: 100,
  userId: u1._id
});

// connecting directly to the bookmarks db
> ../../../mongodb/bin/mongo bookmarks
// searching the collection for all docs whose tags array contains the "code" element
// (this works because tags is an array -- one of the advantages of arrays)
db.links.find({tags: "code"}).forEach(printjson)
// with findOne (not find) you can do findOne().name
// selecting only some fields (the projection goes in a second document):
db.links.find({favourites: 100}, {title: true, url: 1})
// selecting everything but the tags field:
db.links.find({favourites: 100}, {tags: 0})
// selecting inside an embedded object...
db.users.findOne({"name.first": "John"})
db.users.findOne({"name.first": "John"}, {"name.last": 1})
var john = db.users.findOne({"name.first": "John"})
db.links.find({userId: john._id}, {title: 1, _id: 0})

/* query operators */
// greater than 150
db.links.find({favourites: {$gt: 150}}, {_id: 0, favourites: 1, title: 1}).forEach(printjson)
db.links.find({favourites: {$gt: 150}}, {_id: 0, favourites: 1, title: 1}).count()
// less than
db.links.find({favourites: {$lt: 150}}, {_id: 0, favourites: 1, title: 1}).forEach(printjson)
// $lte, $gte -- less/greater than or equal
// using $in
db.users.find({"name.first": {$in: ["John", "Jane"]}})
// the opposite is $nin
db.users.find({"name.first": {$nin: ["John", "Jane"]}})
// $all -- only the records whose tags field contains all the listed values
db.links.find({tags: {$all: ["code", "marketplace"]}}, {title: 1, tags: 1, _id: 0})
// $ne -- not equal
// $or matches documents that fulfil at least one of the conditions in the array:
db.users.find({$or: [{"name.first": "John"}, {"name.last": "Wilson"}]})
// the opposite: $nor; the inclusive version: $and
// $exists
db.users.find({email: {$exists: true}})
// $mod
db.links.find({favourites: {$mod: [5, 0]}}, {_id: 0, title: 1, favourites: 1})
db.links.find({favourites: {$not: {$mod: [5, 0]}}}, {_id: 0, title: 1, favourites: 1})
// $elemMatch -- inside the logins array, match an element with minutes = 20 and return the whole record
db.users.find({logins: {$elemMatch: {minutes: 20}}})
// searching for an 'at' prior to 2012/03/30 and returning the whole record...
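Operators like $gt and $in are predicates over document fields, and projections select which fields come back. A rough plain-JavaScript analogue of two of the queries above, over made-up sample documents shaped like the links/users collections:

```javascript
// Made-up sample documents, shaped like the links and users collections.
const links = [
  { title: "Nettuts+", favourites: 100 },
  { title: "Smashing", favourites: 220 },
  { title: "HN",       favourites: 180 },
];
const users = [
  { name: { first: "John", last: "Doe" } },
  { name: { first: "Jane", last: "Doe" } },
  { name: { first: "Bob",  last: "Smith" } },
];

// Analogue of db.links.find({favourites: {$gt: 150}}, {title: 1, _id: 0})
const popular = links
  .filter(l => l.favourites > 150)   // the $gt predicate
  .map(l => ({ title: l.title }));   // the projection

// Analogue of db.users.find({"name.first": {$in: ["John", "Jane"]}})
const johnOrJane = users.filter(u => ["John", "Jane"].includes(u.name.first));

console.log(popular);            // [{title: "Smashing"}, {title: "HN"}]
console.log(johnOrJane.length);  // 2
```

In MongoDB the predicate runs server-side (ideally against an index) instead of scanning client-side, but the filter/project decomposition is the same.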
db.users.find({logins: {$elemMatch: {at: {$lt: new Date(2012, 3, 30)}}}})
// using $where -- c) is equivalent to a)
// a) db.users.find({$where: 'this.name.first === "John"'})
// b) db.users.find({$where: 'this.name.first === "John"', age: 30})
// c) db.users.find('this.name.first === "John"')
// injecting functions into the query -- as this example returns true/false at random,
// it matches documents randomly
var frand = function() { return Math.random() > 0.5 }
db.users.find(frand)
//
var f = function() { return this.name.first === "John" }
db.users.find(f)
// or
db.users.find({$where: f})

/* other queries */
// distinct -- returns a list of the different values
db.links.distinct('favourites') // --> [100, 32, 21, 78, ...]
db.links.distinct("url")
// group (note: the finalize lookup in the first form was not working in the original notes)
db.links.group({
  key: {userId: true},
  initial: {favCount: 0},
  reduce: function(doc, o) { o.favCount += doc.favourites },
  finalize: function(o) { o.name = db.users.findOne({_id: o.userId}).name }
});
db.links.group({
  key: {userId: true},
  initial: {favCount: 0},
  reduce: function(doc, o) { o.favCount += doc.favourites }
});
db.links.group({
  key: {userId: true},
  initial: {favCount: 0},
  reduce: function(doc, o) { o.favCount += doc.favourites },
  finalize: function(o) { o.name = "richard" }
});
// regex
db.links.find({title: /tuts\+$/})
db.links.find({title: {$regex: /tuts\+$/}}, {title: 1})
// counting
db.users.count({'name.first': 'John'})
db.users.count() // all users in the collection
// sorting and limiting
db.links.find({}, {title: 1}).sort({title: 1}).limit(1) // 1: asc, -1: desc
// sorting, skipping and limiting -- the usual pagination routine
db.links.find({}, {title: 1, _id: 0}).sort({title: 1}).skip(3).limit(3)

/* updating */
// by replacement or by modification...
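The sort().skip().limit() pagination pattern above corresponds to sorting and then slicing. A plain-array sketch (the titles and the page-numbering helper are made up for illustration):

```javascript
// Made-up titles; in the shell this would come from db.links.find({}, {title: 1}).
const titles = ["Nettuts+", "Ars", "HN", "Smashing", "Lobsters", "Daring"];

// Hypothetical helper: the analogue of .sort({title: 1}).skip(pageNo * size).limit(size)
function page(items, pageNo, size) {
  return [...items].sort().slice(pageNo * size, pageNo * size + size);
}

console.log(page(titles, 0, 3)); // first page of 3, ascending
console.log(page(titles, 1, 3)); // next page of 3
```

Note that skip-based pagination scans and discards the skipped documents server-side, so for deep pages a range query on an indexed field ("keyset pagination") scales better.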
// ---general form---
/*
db.collection.update(
  <query>,
  <update>,
  {
    upsert: <boolean>, // insert if not found
    multi: <boolean>   // update all documents fulfilling <query>
  }
)
*/
// more info: http://docs.mongodb.org/manual/reference/method/db.collection.update/
db.users.update({-the query object-}, {-the update object-}, -upsert boolean-);
var n = {title: "Nettuts+"}
db.links.find(n, {title: 1})
db.links.update(n, {$inc: {favourites: 5}})
var q = {"name.last": "Doe"}
db.users.find(q, {name: 1})
// $set updates a field or adds a completely new one
db.users.update(q, {$set: {"name.last": "Doetix"}}) // modifying an existing field
db.users.update(q, {$set: {"email": "doetix81@gmail.com"}}) // inserting a new one
// to remove a field we use $unset
db.users.update(q, {$unset: {job: "Web developer"}})
db.users.update({"name.first": "John"}, {$set: {job: "Web developer"}}, false, true)
// modifying an object and then saving it back
var bob = db.users.findOne({"name.first": "Bob"})
> bob
{ "_id": ObjectId("525f06242df9763abe646b62"), "name": { "first": "Bob", "last": "Smith" }, "age": 31, "email": "bob.smith@gmail.com", "passwordHash": "last_password_hash" }
> bob.job = "Thick Brush Painter"
> db.users.save(bob)
// findAndModify
/*
The findAndModify command atomically modifies and returns a single document.
By default, the returned document does not include the modifications made by
the update; to return the modified document, use the "new" option.
{
  findAndModify: <string>,
  query: <document>,
  sort: <document>,
  remove: <boolean>,  // one of remove |
  update: <document>, //        update
  new: <boolean>,     // whether to return the new object or the old one
  fields: <document>, // fields to include in the result
  upsert: <boolean>
}
*/
> db.links.findAndModify({
  query: {favourites: {$gt: 150}},
  sort: {title: 1},
  update: {favourites: 333},
  new: true,
  fields: {_id: 0}
});
// pushing into arrays
db.links.update(n, {$push: {tags: "jobs"}})
> db.links.findOne(n).tags
// pushing several...
db.links.update(n, {$pushAll: {tags: ['blogs', 'press', 'contests']}})
// only push into the array if the new element is not already present
db.links.update(n, {$addToSet: {tags: "dev"}})
// doing the same with several values
db.links.update(n, {$addToSet: {tags: {$each: ["dev", "interviews"]}}})
// pulling content out of the array
db.links.update(n, {$pull: {tags: 'interviews'}})
// pulling several...
db.links.update(n, {$pullAll: {tags: ['blogs', 'dev', 'contests']}})
// popping from the end (1) or the beginning (-1)
db.links.update(n, {$pop: {tags: 1}})
// positional operator -- only the matched subobject gets updated
db.users.update({'logins.minutes': 20}, {$inc: {'logins.$.minutes': 10}}, false, true)
db.users.update({'logins.minutes': 20}, {$set: {'logins.$.location': 10}}, false, true)
db.users.update({'logins.minutes': 30}, {$set: {random: true}}, false, true)
// renaming a field
db.links.update({url: {$exists: true}}, {$rename: {"url": "camino"}}, false, true);
// more info on the positional operator:
// http://docs.mongodb.org/manual/reference/operator/update/positional/
// taken from there:
/*
The positional $ operator facilitates updates to arrays that contain embedded
documents. Use the positional $ operator to access the fields in the embedded
documents with the dot notation on the $ operator.
db.collection.update(
  { <query selector> },
  { <update operator>: { "array.$.field": value } }
)
*/
/*** EXAMPLE
Consider the following document in the students collection, whose grades field
value is an array of embedded documents:
{ "_id": 4, "grades": [
  { grade: 80, mean: 75, std: 8 },
  { grade: 85, mean: 90, std: 5 },
  { grade: 90, mean: 85, std: 3 }
] }
Use the positional $ operator to update the value of the std field in the
embedded document with the grade of 85:
db.students.update(
  { _id: 4, "grades.grade": 85 },
  { $set: { "grades.$.std": 6 } }
)
***/
// removing
db.users.remove({'name.first': "John"})
// all the collections in the selected db
show collections
// dropping a collection completely
db.acoll.drop()
// indexes
db.links.find().explain()
db.links.ensureIndex({title: 1}) // ascending order; mainly important in compound indexes
// a record of this index can be found in the db's indexes collection
db.system.indexes.find();
// You could put an index on a changing value, but every time that value changes
// the index must be updated -- keep that in mind.
// It is usually a good idea to set indexes at the beginning, when the collections
// hold no data. However, you can use the following form to handle duplicates,
// keeping only the first one and deleting the others:
db.links.ensureIndex({title: 1}, {unique: true, dropDups: true})
// When some documents lack the indexed field, a sparse index saves Mongo from
// storing index entries for them:
db.links.ensureIndex({title: 1}, {sparse: true})
// Think of a compound index as a nested one -- an index of an index. Which field to
// index first depends on the problem: as with recipes, indexing first the ingredient
// and then the recipe makes more sense than the reverse. It all depends on how you
// are going to search.
db.links.ensureIndex({title: 1, url: 1}) // you can search on title, or on title and url
db.links.ensureIndex({a: 1, b: 1, c: 1}) // searches are possible on a; a, b; a, b, c
// deleting indexes -- use the name as it appears in the system.indexes collection
db.links.dropIndex("title_1");

/* concepts to follow */
// Sharding and Replica Sets:
// http://www.slideshare.net/Dataversity/common-mongodb-use-cases-13695677
// http://docs.mongodb.org/ecosystem/use-cases/product-catalog/
db.collection.update({"grades.grade": 80}, {$set: {"grades.$.std": 18}})
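The "searches are possible on a; a, b; a, b, c" remark is the index-prefix rule: a compound index can serve a query when the queried fields form a leading prefix of the index keys. A small checker makes the rule concrete (a hypothetical helper for illustration, not a MongoDB API; real query planners can sometimes do more, e.g. covered or in-memory-sorted scans):

```javascript
// Hypothetical helper: can a compound index serve an equality query
// on exactly these fields? True iff the fields form a leading prefix
// of the index keys (field order within the query does not matter).
function indexSupports(indexKeys, queryFields) {
  if (queryFields.length > indexKeys.length) return false;
  const prefix = indexKeys.slice(0, queryFields.length);
  return prefix.every(k => queryFields.includes(k));
}

const idx = ["a", "b", "c"]; // db.links.ensureIndex({a: 1, b: 1, c: 1})
console.log(indexSupports(idx, ["a"]));           // true
console.log(indexSupports(idx, ["a", "b"]));      // true
console.log(indexSupports(idx, ["b", "c"]));      // false -- not a leading prefix
console.log(indexSupports(idx, ["b"]));           // false
```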
rges-pit
Code to cross-match between existing catalogs of variable stars in the Bulge
deezer
An Android sample application, using both GraceNote and Deezer SDK to match your music with Deezer's catalog
bernabasy
Catalog of My Things: a console app that helps users keep track of their possessions (books, music albums, movies, and games) based on a UML class diagram. It stores data in JSON files and has a corresponding database whose tables match the program's class structure.
dfm
A cross match of the Gaia and Kepler catalogs
Gabriel-p
Cross-match an observed catalog with one obtained using astroquery
Yajie-Z
An efficient heterogeneous cross-matcher for large catalogs
Scripts and documentation for evaluating ISBN matches in OCLC for cataloging queues
TrystanScottLambert
Function to cross-match astronomical catalogs in Cartesian, equatorial, or galactic coordinates, with a preference for on-sky position
Cade-Bray
Program uses SNHU's API to pull the current catalog and match certificates to a class code. Searchable by class code.
Iryna-Vyshniak
This is a simple car rental application. It allows users to browse a catalog of cars available for rent, add cars to their favorites, and view detailed information about each car. The application also includes filtering options to help users find cars that match their preferences.
Iryna-Vyshniak
This is a simple test Nike shop application. It allows users to browse a catalog of sport shoes and add them to their favorites and cart. Users can also view detailed information about each pair of shoes. The application includes filtering options to help users find shoes by color or size that match their preferences.
swisscom
DEPRECATED - The idea is that we have a way to check within the catalog whether a package currently installed matches the one we want to install.
sciserver
In-Database Spatial Cross-Match of Astronomical Catalogs
mahmud-nobe
Cross Matcher of selected portion of two astronomical catalogs (radio-wavelength and optical)
semanticart
Scrape the digital Minuteman Library catalog and see what matches your Goodreads list
A command-line Ruby script to parse image URLs from catalog.xml, name them to match the item ID, and zip them.
manuelbarzi
Daz allows you to catalog your folders by title, description, and tags, and search them in a simple way (it can also search in package.json files to match npm modules)
This repository contains a product ID mapping solution using TF-IDF vectorizer for weighted text vectors, Facebook AI Similarity Search (FAISS) for coarse filtering with cosine similarity, and Levenshtein distance for refined matching against the Blinkit catalog. Achieved 11.45% match for Zepto and 11.48% for Instamart.
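The refined-matching stage mentioned above is Levenshtein (edit) distance: the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A minimal dynamic-programming implementation of just that stage (the TF-IDF and FAISS stages are not sketched here):

```javascript
// Levenshtein distance via the classic DP recurrence:
// dp[i][j] = edits to turn the first i chars of a into the first j chars of b.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0)); // base cases: all inserts / all deletes
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,        // deletion
        dp[i][j - 1] + 1,        // insertion
        dp[i - 1][j - 1] + cost  // match or substitution
      );
    }
  }
  return dp[a.length][b.length];
}

console.log(levenshtein("kitten", "sitting")); // 3
```

In a pipeline like the one described, a coarse stage (vector similarity) narrows the candidate set first, because computing this O(len(a)·len(b)) distance against every catalog entry would be too slow.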
Venu-Guptha
Indexing is the process of adding web pages into Google Search. Google tries to understand what a page is about by analyzing its content, catalogs, and the images and video files embedded in it. The Google Search index contains hundreds of billions of webpages and is well over 100,000,000 gigabytes in size. Search engines crawl the internet to discover keywords attached to websites and pages. These results are stored and organized into a database called an "index" for quick retrieval. Once content has been indexed, it can be served on search engine results pages (SERPs) for relevant search queries. Today, Google Search can help you search text from millions of books from major libraries. In short, if you want your content to be found, it needs to be indexed for the opportunity to be seen.

There are a few methods for inviting a search engine to crawl a page so that it is indexed more quickly:
- XML sitemaps
- Robots meta tags
- Fetch as Google
- Submit URL
- Hosting content

The Sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling; a Sitemap is an XML file that lists the URLs for a site. Meta tags are essentially little content descriptors that help tell search engines what a web page is about; meta elements are tags used in HTML and XHTML documents to provide structured metadata about a web page.

Crawling is the first part of having a search engine recognize your page and show it in search results. Crawling is the process by which Googlebot visits new and updated pages to be added to the Google index. New sites, changes to existing sites, and dead links are noted and used to update the Google index. Crawlers use algorithms to establish how frequently they scan a specific page and how many pages of the website they must scan. Googlebot is the web crawler software used by Google, which collects documents from the web to build a searchable index for the Google Search engine.
Ranking: once a keyword is entered into a search box, search engines check their index for the pages that are the closest match; a score is assigned to these pages based on an algorithm consisting of hundreds of different ranking signals. These pages (or images and videos) are then displayed to the user in order of score. So for your site to rank well in search results pages, it is important to make sure search engines can crawl and index your site correctly; otherwise they will be unable to appropriately rank your website's content in search results.
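The Sitemaps protocol mentioned above is a small XML format. A minimal sitemap file looks like this (the example.com URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Only `<loc>` is required per URL; `<lastmod>`, `<changefreq>`, and `<priority>` are optional hints that crawlers may or may not use.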
DavidEdwardDarden
Little Debbie's Collector Club

Each year, fans of Little Debbie (LD) gather to tell stories about the snacks they have encountered over the last 12 months. The annual dues allow the members to enhance the very popular, very sophisticated Little Debbie Collector Club (LDCC) website, which catalogs the details of their beloved LD snacks. Currently, the LDCC members are not able to update or create new content themselves; they have always relied on us, the developers, to handle this functionality. This year, however, they have a little more funding! So they've hired us to add some new features to the site, like the ability to update (PUT) and create (POST) content, along with a few other vital requirements... We will be hiring an intern to add numerous snacks to the DB. If you would like to be considered, please send your resume to LD at LDCC.com.

MVP Features

- Last year, they ran out of money before the details page could be completed. The first feature is to display the full value of each property on the details page (not the id). Hint: use one of json-server's relationship features.
- Anyone visiting the site needs to log in; however, there is only one admin, LD herself. Everyone else will not be an admin. This needs to be reflected in the data. Hint: you will need to add isAdmin: false to the register user object. Check your data for accuracy.
- The number of toppings has exploded! LD has been going over the top with so many variations... LDCC can barely keep up! To help out, it has been determined that there should be a list of toppings stored in the database, and there should be some way to mix and match the toppings with the snacks. Each snack should be able to have multiple toppings (or no toppings at all), and each topping should be able to go on multiple snacks (or no snacks at all). Hint: what type of relationship will this be? What tables will need to be created in the database? Be sure to share your ERD with the instruction team.
- The snack detail will need to display all the toppings for the one snack. Make this a comma-separated list in a paragraph.
- With the new topping tracking system, the club members would love to have the option to display snacks with particular toppings, something like "show me all snacks with chocolate icing". The dropdown menu should read from the toppings list in the DB and be displayed in the navbar. Selecting a topping should trigger a call to the DB for only those snacks and then display them.
- For the first time in years, LD has a new type of snack: cereal. It is expected that over the next few years the trend of new snacks will continue. The club would like the ability to add a new snack type to the type table in the DB. They have also requested that the new Oatmeal Creme Pie Cereal be added immediately to the list of snacks in the DB. You will need to make an object that includes the properties of the snack table in the ERD and POST it to the DB with Postman. Only the admin user should have the ability to Add a Type; currently the only admin is LD herself. Only display Add Type if LD is logged in.

ERD

Before you begin any code, use the ERD script and paste it into DBDiagram. Complete the relationships based on the MVP requirements. Share with the instruction team to get an updated snacks.json file.

Bonus

- Add the functionality to add and edit a topping, but only for admin users.
- Add the functionality to add and edit a snack, but only for admin users. This one is tricky since there is an option to have multiple toppings.

Notes

- Ask questions about the requirements to ensure you are meeting expectations.
- After you complete each feature: add, commit, push, and merge on GitHub. Share your progress with the instruction team.
- To run this project, run json-server in the API directory (json-server -p 8088 -w snacks.json), then serve the index.html on your local machine.
This exercise utilizes the following: JavaScript modules; JavaScript object fundamentals (properties, keys, and values); adding/augmenting an object; loops/iteration; conditionals; event listeners; related data; filtering data; DB calls (POST, PUT, GET).
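The toppings/snacks requirement above is a many-to-many relationship, typically modeled in json-server with a join resource. A sketch of the lookup in plain JavaScript over hypothetical data (the resource names, ids, and snack/topping values are assumptions, not the course's actual snacks.json):

```javascript
// Hypothetical data shaped like a json-server database: a join resource
// (snackToppings) models the many-to-many snack <-> topping relationship.
const db = {
  snacks:   [{ id: 1, name: "Oatmeal Creme Pie" }, { id: 2, name: "Cosmic Brownie" }],
  toppings: [{ id: 1, name: "chocolate icing" }, { id: 2, name: "sprinkles" }],
  snackToppings: [
    { id: 1, snackId: 1, toppingId: 1 },
    { id: 2, snackId: 2, toppingId: 1 },
    { id: 3, snackId: 2, toppingId: 2 },
  ],
};

// "Show me all snacks with chocolate icing": resolve through the join resource.
function snacksWithTopping(toppingName) {
  const topping = db.toppings.find(t => t.name === toppingName);
  const snackIds = db.snackToppings
    .filter(st => st.toppingId === topping.id)
    .map(st => st.snackId);
  return db.snacks.filter(s => snackIds.includes(s.id)).map(s => s.name);
}

console.log(snacksWithTopping("chocolate icing")); // both snacks
console.log(snacksWithTopping("sprinkles"));       // just one
```

Against a running json-server, the same resolution would be two fetches: one to `/snackToppings?toppingId=...`, then one per matching snackId (or an `_expand`/`_embed` query).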
KIPAC
Hierarchical Pixel-based Multi-Catalog Matcher
Gabriel-p
Cross-matched catalogs of stellar clusters
ris-tlp
Crossmatcher that finds counterparts for objects in the All Sky Galaxy catalogue and the Bright Source Survey
cosmicoder
Python code to implement multi-wavelength cross-matching of CSV data files from the SPITZER, GALEX, and GMRT mission catalogs to create a master catalog.