Found 1,391 repositories (showing 30)
OpenBB-finance
Financial data platform for analysts, quants and AI agents.
nuglifeleoji
A sophisticated LangGraph-based agent that automates financial options analysis with real-time data from Polygon.io, smart caching, persistent memory, and professional-grade analysis. Built for traders, analysts, and developers who need intelligent options data processing.
HFTHaidra
HFT Arbitrage EA is a trading system based on data-feed lag. To work, it needs a fast data-feed source and a slow forex broker whose data feed lags behind it. Feed lags occur because of software errors at the broker and problems on its server; brokers often use a bridge that connects them to a liquidity provider, and the feed can break at that point as well. Especially around the release of important news, analyst and rating-agency reports, or changes in economic data, a noticeable difference in the feeds can appear. HFT Arbitrage EA receives a data feed every millisecond from Darwinex (MT5) and compares it with the prices in the broker's terminal. When the broker's feed lags, the expert's arbitrage algorithm starts trading to extract the maximum profit from each signal.
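The lag-arbitrage mechanic described here, comparing a fast reference feed against a lagging broker quote and trading when the gap exceeds a threshold, can be sketched roughly as follows. This is a hypothetical illustration of the idea only; the function name, prices, and threshold are invented, and the actual EA runs on tick data inside the MT5 terminal:

```python
# Illustrative sketch of latency arbitrage: signal a trade when the
# gap between a fast reference price and a lagging broker quote
# exceeds a (hypothetical) threshold covering spread and costs.

def arbitrage_signal(fast_price: float, broker_price: float,
                     threshold: float = 0.0005) -> str:
    """Return 'buy', 'sell', or 'hold' based on the feed gap."""
    gap = fast_price - broker_price
    if gap > threshold:
        return "buy"    # broker quote lags below the fast feed
    if gap < -threshold:
        return "sell"   # broker quote lags above the fast feed
    return "hold"       # gap too small to overcome trading costs
```

In practice the threshold must exceed the broker's spread plus slippage, or the "arbitrage" loses money on every fill.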
ComposioHQ
No description available
zhound420
Multi-agent AI trading system using LLM-powered analyst agents (Buffett, Munger, Burry, etc.) with free data sources (SEC EDGAR + yfinance) and Alpaca integration.
yashksaini-coder
Developed a web-search and financial-analyst AI agent team that scrapes the web for real-time data and streams the results in real time as an API service.
siddharth-Kharche
No description available
chenningling
An LLM-based automated data analysis tool: upload Excel/CSV data and an AI agent automatically plans and executes analysis tasks, generating a review report with visualizations. Project note: this is a learning project; while exploring different agent architecture patterns, multiple versions of the agent loop were iterated. The project therefore includes 5 different agent run modes, each representing a different design approach and implementation, for study and comparison.
0PeterAdel
Data-Verse is an end-to-end AI data analysis agent that automates data ingestion, cleaning, pattern extraction, and predictive modeling, culminating in interactive visualizations—providing a comprehensive alternative to traditional data analysts.
DevanshSrajput
AI-powered Streamlit app for analyzing, summarizing, and chatting with documents (PDF, DOCX, CSV, images, etc.) using LLMs and Together AI.
aws-samples
Data Analyst Agent, based on the Strands SDK, with the capability to introspect the schema of the SQL database under context, and to answer user queries based on the database content: writing SQL queries, processing the data with pandas, and generating visualizations with Matplotlib.
yaninsanity
TubeWhale is an open-source AI framework that automates YouTube video search, transcript retrieval, and content summarization. Users input a keyword, and TubeWhale's multi-agent system expands the search, processes video data, and provides structured summaries. Ideal for researchers and analysts, it simplifies video content exploration.
sk-wang
A modern database management tool supporting PostgreSQL and MySQL. Core features: 🤖 discover your database via an agent powered by Anthropic Claude. Supports large databases with thousands of tables. Ideal for daily use by data analysts and developers.
affaan-m
The current working version of the Data Aggregator + Analyst agent, combined in one CLI-based solution.
This project showcases CrewAI for collaborative cryptocurrency trading analysis. It uses a Jupyter Notebook to orchestrate multiple AI agents (Data Analyst, Trading Strategy Developer, Trade Advisor, and Risk Advisor) that work together to analyze BTCUSDT market data, develop strategies, plan executions, and assess risks.
shakil1819
This Agentic system is designed to handle complex data analysis tasks by breaking them down into smaller subtasks and delegating them to specialized agents. The system uses a multi-agent architecture to improve performance and scalability.
The Financial Analysis Crew is a Streamlit app that simplifies financial stock analysis. With the power of LLM-driven agents, users can seamlessly gather and analyze stock market data to generate comprehensive financial insights. Perfect for investors, analysts, and anyone interested in making data-driven financial decisions.
ShreshtaSutar
An intelligent, end-to-end data analysis assistant that processes datasets, answers business questions, generates insights, and produces visual reports. All with the power of Generative AI + Python.
imnotdev25
No description available
Build a production-ready multi-agent quantitative analyst system where AI agents scrape market data, analyze trends, and generate investment reports. Uses FastAPI for orchestration, Streamlit UI, Azure PostgreSQL for logging, and Azure Blob Storage for archiving.
geoffkip
Clinical Trial Inspector is an advanced AI agent designed to revolutionize how researchers, clinicians, and analysts explore clinical trial data. By combining Semantic Search, Retrieval-Augmented Generation (RAG), and Visual Analytics, it transforms raw data from ClinicalTrials.gov into actionable insights.
Aryia-Behroziuan
The earliest work in computerized knowledge representation focused on general problem solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959. These systems featured data structures for planning and decomposition: the system would begin with a goal, decompose it into sub-goals, and then construct strategies to accomplish each sub-goal. In these early days of AI, general search algorithms such as A* were also developed. However, the amorphous problem definitions of systems such as GPS meant that they worked only in very constrained toy domains (e.g. the "blocks world"). To tackle non-toy problems, AI researchers such as Ed Feigenbaum and Frederick Hayes-Roth realized that it was necessary to focus systems on more constrained problems. These efforts led to the cognitive revolution in psychology and to the phase of AI focused on knowledge representation that produced the expert systems, production systems, and frame languages of the 1970s and '80s. Rather than general problem solvers, AI shifted its focus to expert systems that could match human competence on a specific task, such as medical diagnosis.

Expert systems gave us the terminology still in use today, in which AI systems are divided into a knowledge base, holding facts about the world and rules, and an inference engine that applies the rules to the knowledge base to answer questions and solve problems. In these early systems the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules.[2]

In addition to expert systems, other researchers developed the concept of frame-based languages in the mid-1980s. A frame is similar to an object class: it is an abstract description of a category of things in the world, problems, and potential solutions. Frames were originally used in systems geared toward human interaction, e.g. understanding natural language and the social settings in which various default expectations, such as ordering food in a restaurant, narrow the search space and allow the system to choose appropriate responses to dynamic situations.

It was not long before the frame communities and the rule-based researchers realized that there was synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, and slots (data values) with various constraints on possible values. Rules were good for representing and applying complex logic, such as the process of making a medical diagnosis. Integrated systems combining frames and rules were developed; one of the most powerful and well known was the 1983 Knowledge Engineering Environment (KEE) from IntelliCorp. KEE had a complete rule engine with forward and backward chaining, as well as a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than in AI, it was quickly embraced by AI researchers as well, in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments.[3] The integration of frames, rules, and object-oriented programming was significantly driven by commercial ventures such as IntelliCorp and Symbolics, spun off from various research projects.

At the same time, another strain of research was less commercially focused, driven by mathematical logic and automated theorem proving. One of the most influential languages in this line of research was KL-ONE, from the mid-'80s. KL-ONE was a frame language with a rigorous semantics and formal definitions for concepts such as the Is-A relation.[4] KL-ONE and the languages it influenced, such as Loom, had an automated reasoning engine based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example redefining a class to be a subclass or superclass of some other class that was not formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an ontology).[5]

Another area of knowledge representation research was the problem of common-sense reasoning. One of the first lessons learned from trying to build software that could handle human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent: basic principles of common-sense physics, causality, intentions, and so on. An example is the frame problem: in an event-driven logic there must be axioms stating that things maintain their position from one moment to the next unless they are moved by some external force. To build a true artificial-intelligence agent that can converse with humans in natural language and process basic statements and questions about the world, it is essential to represent this kind of knowledge. One of the most ambitious programs to tackle this problem was Doug Lenat's Cyc project. Cyc established its own frame language and had large numbers of analysts document various areas of common-sense reasoning in that language. The knowledge recorded in Cyc included common-sense models of time, causality, physics, intentions, and many others.[6]

The starting point for knowledge representation is the knowledge representation hypothesis, first formalized by Brian C. Smith in 1985:[7] "Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge."

Currently, one of the most active areas of knowledge representation research is the set of projects associated with the Semantic Web, which seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large ontologies of concepts; searching for a concept will be more effective than traditional text-only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web: automatic classification gives developers technology to impose order on a constantly evolving network of knowledge, since defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. Recent projects, funded primarily by the Defense Advanced Research Projects Agency (DARPA), have integrated frame languages and classifiers with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capability to define classes, subclasses, and properties of objects, and the Web Ontology Language (OWL) provides additional levels of semantics and enables integration with classification engines.[8][9]
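The knowledge-base/inference-engine split that this passage describes for expert systems can be illustrated with a minimal forward-chaining sketch. The facts, rules, and function below are hypothetical illustrations, not any particular system's code:

```python
# Minimal forward-chaining inference engine: a knowledge base of facts
# plus IF-THEN rules, in the style of the early expert systems.
# All facts and rules here are invented for illustration.

facts = {("has_fever", "patient"), ("has_cough", "patient")}

# Each rule: (set of antecedent facts, fact to conclude)
rules = [
    ({("has_fever", "patient"), ("has_cough", "patient")},
     ("possible_flu", "patient")),
    ({("possible_flu", "patient")},
     ("recommend_rest", "patient")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

kb = forward_chain(facts, rules)
```

Note how the two rules chain: the first derives `possible_flu`, which then satisfies the antecedent of the second. The flat tuples mirror the "assertions about the values of variables" structure the passage attributes to early knowledge bases; frame languages replaced them with classes, slots, and inheritance.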
The Finance Monitoring AI Agent 📊💹 analyzes specific tickers, gathering and processing data to generate insightful reports 📈📉. Designed for investors and analysts, this agent provides detailed performance analysis and trends. 🚀
oelbourki
An AI-powered data analysis platform that converts natural language queries into executable Python code and runs it securely. Fully compatible with Streamlit Cloud, using LangGraph for workflow orchestration, Google Gemini for code generation, and codibox for dual backend execution (Host/Docker).
Yadav-Aayansh
No description available
sahilpmehra
Data Analyst AI Agent
abhishekkumar62000
OUR DATA ANALYST AGENT
ajaymauryabbn
No description available
Frida7771
📈 AI-powered financial analyst with real-time stock data, SEC 10-K analysis, and LangGraph agents. Built with FastAPI, React, FAISS, and Yahoo Finance API.
yagami24
The Financial Analysis Crew is a Streamlit app that simplifies financial stock analysis. With the power of LLM-driven agents, users can seamlessly gather and analyze stock market data to generate comprehensive financial insights. Perfect for investors, analysts, and anyone interested in making data-driven financial decisions.