Found 485 repositories (showing 30)
e-johnstonn
Generate a picture book from a single prompt using OpenAI function calling, Replicate, and Deep Lake
jettbrains
W3C Strategic Highlights September 2019

This report was prepared for the September 2019 W3C Advisory Committee Meeting (W3C Member link). See the accompanying W3C Fact Sheet — September 2019. For the previous edition, see the April 2019 W3C Strategic Highlights. For future editions of this report, please consult the latest version. A Chinese translation is available.

Contents: Introduction; Future Web Standards; Meeting Industry Needs (Web Payments, Digital Publishing, Media and Entertainment, Web & Telecommunications, Real-Time Communications (WebRTC), Web & Networks, Automotive, Web of Things); Strengthening the Core of the Web (HTML, CSS, Fonts, SVG, Audio); Performance (Web Performance, WebAssembly); Testing (Browser Testing and Tools, WebPlatform Tests); Web of Data; Web for All (Security, Privacy, Identity; Internationalization (i18n); Web Accessibility); Outreach to the world (W3C Developer Relations, W3C Training, Translations, W3C Liaisons).

Introduction

This report highlights recent work enhancing the existing landscape of the Web platform, and innovation for the growth and strength of the Web. 33 working groups and a dozen interest groups enable W3C to pursue its mission through the creation of Web standards, guidelines, and supporting materials. We track the tremendous work done across the Consortium through homogeneous workspaces in GitHub, which enable better monitoring and management.

We are in the middle of a period of chartering numerous working groups, which demonstrates the rapid pace of change for the Web platform:

- After 4 years, we are nearly ready to publish a Payment Request API Proposed Recommendation, and we will soon need to charter follow-on work. In the last year we chartered the Web Payment Security Interest Group.
- In the last year we chartered the Web Media Working Group with 7 specifications for next-generation media support on the Web.
- We have Accessibility Guidelines under W3C Member review, which include Silver, a new approach.
- We have just launched the Decentralized Identifier Working Group, which has tremendous potential because a Decentralized Identifier (DID) is an identifier that is globally unique, resolvable with high availability, and cryptographically verifiable.
- We have the Privacy IG (PING) under W3C Member review, which strengthens our focus on the tradeoff between privacy and function.
- We have a new CSS charter under W3C Member review, which maps the group's work for the next three years.

In this period, W3C and the WHATWG successfully completed the negotiation of a Memorandum of Understanding rooted in the mutual belief that having two distinct specifications claiming to be normative is generally harmful for the Web community. The MOU, signed last May, describes how the two organizations are to collaborate on the development of a single authoritative version of the HTML and DOM specifications. W3C subsequently rechartered the HTML Working Group to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and to produce W3C Recommendations from WHATWG Review Drafts.

As the Web evolves continuously, some groups are looking for ways for specifications to do so as well. So-called "evergreen recommendations" or "living standards" aim to track continuous development (and maintenance) of features, on a feature-by-feature basis, while getting review and patent commitments.

We see the maturation and further development of an incredible number of new technologies coming to the Web.
Continued progress in many areas demonstrates the vitality of W3C and the Web community, as the rest of the report illustrates.

Future Web Standards

W3C has a variety of mechanisms for listening to what the community thinks could become good future Web standards. These include discussions with the Membership, discussions with other standards bodies, the activities of thousands of participants in over 300 community groups, and W3C Workshops. There are lots of good ideas. The W3C strategy team has been identifying promising topics and invites public participation. Recent, upcoming, and under-consideration Workshops include:

- Inclusive XR (5-6 November 2019, Seattle, WA, USA), to explore existing and future approaches to making Virtual and Augmented Reality experiences more inclusive, including for people with disabilities
- W3C Workshop on Data Models for Transportation (12-13 September 2019, Palo Alto, CA, USA)
- W3C Workshop on Web Games (27-28 June 2019, Redmond, WA, USA), view report
- Second W3C Workshop on the Web of Things (3-5 June 2019, Munich, Germany)
- W3C Workshop on Web Standardization for Graph Data; Creating Bridges: RDF, Property Graph and SQL (4-6 March 2019, Berlin, Germany), view report
- Web & Machine Learning

The Strategy Funnel documents the staff's exploration of potential new work at various phases: Exploration and Investigation, Incubation and Evaluation, and eventually the chartering of a new standards group. The Funnel view is a GitHub Project where new areas are issues represented by "cards" which move through the columns, usually from left to right. Most cards start in Exploration and move towards Chartering, or move out of the funnel. Public input is welcome at any stage, but particularly once Incubation has begun. This helps W3C identify work that is sufficiently incubated to warrant standardization, review the ecosystem around the work and indicate interest in participating in its standardization, and then draft a charter that reflects an appropriate scope. Ongoing feedback can speed up the overall standardization process.

Since the previous highlights document, W3C has chartered a number of groups, and started discussion on many more.

Newly chartered or rechartered: Web Application Security WG (03-Apr), Web Payment Security IG (17-Apr), Patent and Standards IG (24-Apr), Web Applications WG (14-May), Web & Networks IG (16-May), Media WG (23-May), Media and Entertainment IG (06-Jun), HTML WG (06-Jun), Decentralized Identifier WG (05-Sep), Extended Privacy IG (PING) (30-Sep), Verifiable Claims WG (30-Sep), Service Workers WG (31-Dec), Dataset Exchange WG (31-Dec), Web of Things Working Group (31-Dec), Web Audio Working Group (31-Dec).

Proposed charters / advance notice: Accessibility Guidelines WG, Privacy IG (PING), RDF Literal Direction WG, Timed Text WG, CSS WG, Web Authentication WG.

Closed: Internationalization Tag Set IG.

Meeting Industry Needs

Web Payments

All Web Payments specifications

W3C's payments standards enable a streamlined checkout experience, providing a consistent user experience across the Web with lower front-end development costs for merchants. Users can store and reuse information and more quickly and accurately complete online transactions. The Web Payments Working Group has republished Payment Request API as a Candidate Recommendation, aiming to publish a Proposed Recommendation in fall 2019, and is discussing use cases and features for Payment Request after publication of the 1.0 Recommendation.
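For orientation, here is a minimal browser-side sketch of how a merchant page drives Payment Request API. The payment method identifier and the amounts are illustrative placeholders, not values from the specification.

```typescript
// Minimal Payment Request API sketch (browser-side TypeScript).
// The payment method identifier and amounts are illustrative placeholders.
const methodData: PaymentMethodData[] = [
  { supportedMethods: "https://pay.example.com" }, // hypothetical payment method
];
const details: PaymentDetailsInit = {
  total: { label: "Order total", amount: { currency: "USD", value: "64.99" } },
};

async function checkout(): Promise<void> {
  const request = new PaymentRequest(methodData, details);
  const response = await request.show(); // opens the browser's payment sheet
  // ...submit response.details to the server for processing...
  await response.complete("success");    // dismiss the sheet with a result
}
```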
Browser vendors have been finalizing implementation of features added in the past year (view the implementation report). As work continues on the Payment Handler API and its implementation (currently in Chrome and Edge Canary), one focus in 2019 is to increase adoption in other browsers. Recently, Mastercard demonstrated the use of Payment Request API to carry out EMVCo's Secure Remote Commerce (SRC) protocol, whose payment method definition is being developed with active participation by Visa, Mastercard, American Express, and Discover. Payment method availability is a key factor in merchant considerations about adopting Payment Request API. The ability to get uniform adoption of a new payment method such as Secure Remote Commerce (SRC) also depends on the availability of the Payment Handler API in browsers, or of proprietary alternatives. Web Monetization, which the Web Payments Working Group will discuss again at its face-to-face meeting in September, can be used to enable micropayments as an alternative revenue stream to advertising. Since the beginning of 2019, Amazon, Brave Software, JCB, Certus Cybersecurity Solutions and Netflix have joined the Web Payments Working Group.

In April, W3C launched the Web Payment Security Interest Group to enable W3C, EMVCo, and the FIDO Alliance to collaborate on a vision for Web payment security and interoperability. Participants will define areas of collaboration and identify gaps between existing technical specifications in order to increase compatibility among different technologies, such as:

- How do SRC, FIDO, and Payment Request relate?
- The Payment Services Directive 2 (PSD2) regulations in Europe are scheduled to take effect in September 2019. What is the role of EMVCo, W3C, and FIDO technologies, and what is the current state of readiness for the deadline?
- How can we improve privacy on the Web at the same time as we meet industry requirements regarding user identity?

Digital Publishing

All Digital Publishing specifications, publication milestones

The Web is the universal publishing platform. Publishing is increasingly impacted by the Web, and the Web increasingly impacts publishing. Topics of particular interest to Publishing@W3C include typography and layout, accessibility, usability, portability, distribution, archiving, offline access, print on demand, and reliable cross-referencing. The diverse publishing community represented in the groups consists of traditional "trade" publishers and ebook reading system manufacturers, but also publishers of audiobooks, scholarly journals and educational materials, library scientists, and browser developers.

The Publishing Working Group currently concentrates on audiobooks, which lack a comprehensive standard, incurring extra costs and time to publish in this booming market. Active development is ongoing on the future standard:

- Publication Manifest
- Audiobook profile for Web Publications
- Lightweight Packaging Format

The BD Comics Manga Community Group, the Synchronized Multimedia for Publications Community Group, the Publishing Community Group, and a future group on archival are companions to the Working Group where specific work is developed and incubated. The Publishing Community Group is a recently launched incubation channel for Publishing@W3C. The goal of the group is to propose, document, and prototype features broadly related to publications on the Web, reading modes and systems, and the user experience of publications. The EPUB 3 Community Group has successfully completed the revision of EPUB 3.2.
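As a rough sketch of what the Publication Manifest work describes, an audiobook manifest is a JSON-LD document carrying metadata and a reading order. The context URLs, member names, and file names below follow my reading of the draft and are illustrative only; check the current specification before relying on them.

```typescript
// Sketch of an audiobook Publication Manifest as a plain typed object.
// Context URLs, member names, and file names are illustrative placeholders.
const audiobookManifest = {
  "@context": ["https://schema.org", "https://www.w3.org/ns/pub-context"],
  type: "Audiobook",
  name: "An Example Audiobook",
  readingOrder: [
    { url: "chapter-001.mp3", encodingFormat: "audio/mpeg", duration: "PT32M" },
    { url: "chapter-002.mp3", encodingFormat: "audio/mpeg", duration: "PT27M" },
  ],
} as const;
```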
The Publishing Business Group fosters ongoing participation by members of the publishing industry and the overall ecosystem in the development of Web infrastructure to better support the needs of the industry. The Business Group serves as an additional conduit for feedback between the publishing ecosystem and W3C, alongside the Publishing Working Group and several Community Groups. The Publishing BG has played a vital role in fostering and advancing the adoption and continued development of EPUB 3. In particular, the BG provided critical support to the update of EPUBCheck to validate EPUB content against the new EPUB 3.2 specification. This resulted in the development, in conjunction with the EPUB 3 Community Group, of a new generation of EPUBCheck: the production-ready EPUBCheck 4.2 release.

Media and Entertainment

All Media specifications

The Media and Entertainment vertical tracks media-related topics and features that create immersive experiences for end users. HTML5 brought standard audio and video elements to the Web. Standardization activities since then have aimed at turning the Web into a professional platform fully suitable for the delivery of media content and associated materials, adding missing features for streaming video content on the Web such as adaptive streaming and content protection. Together with Microsoft, Comcast, Netflix and Google, W3C received a Technology & Engineering Emmy Award in April 2019 for standardization of a full TV experience on the Web. Current goals are to:

- Reinforce core media technologies: creation of the Media Working Group to develop media-related specifications incubated in the WICG (e.g. Media Capabilities, Picture-in-Picture, Media Session) and to maintain and evolve Media Source Extensions (MSE) and Encrypted Media Extensions (EME).
- Improve support for Media Timed Events: data cues incubation.
- Enhance color support (HDR, wide gamut), in scope of the CSS WG and the Color on the Web CG.
- Reduce fragmentation: continue annual releases of a common and testable baseline for media devices, in scope of the Web Media APIs CG and in collaboration with the CTA WAVE Project. Maintain the Roadmap of Media Technologies for the Web, which highlights Web technologies that can be used to build media applications and services, as well as known gaps to enable additional use cases.
- Create the future: discuss perspectives for Media and Entertainment on the Web. Bring the power of GPUs to the Web (graphics, machine learning, heavy processing), under incubation in the GPU for the Web CG; transition to a Working Group is under discussion. Determine next steps after the successful W3C Workshop on Web Games of June 2019 (view the report).

Timed Text

The Timed Text Working Group develops and maintains formats used for the representation of text synchronized with other timed media, like audio and video, and notably works on TTML, profiles of TTML, and WebVTT. Recent progress includes:

- A robust WebVTT implementation report poises the specification for publication as a Proposed Recommendation.
- Discussions around rechartering, notably to add a TTML Profile for Audio Description deliverable to the scope of the group, and to clarify that rendering of captions within XR content is also in scope.
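WebVTT cues can also be created programmatically through the browser's text track API. A small sketch follows; the element ID, timings, and cue text are invented for illustration.

```typescript
// Adding a caption track and a WebVTT cue from script (browser TypeScript).
// The element ID, timings, and cue text are invented for illustration.
const video = document.querySelector<HTMLVideoElement>("#player")!;
const track = video.addTextTrack("captions", "English captions", "en");
track.mode = "showing";                                // render the cues
track.addCue(new VTTCue(0, 4, "Hello, and welcome.")); // start s, end s, text
```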
Immersive Web

Hardware that enables Virtual Reality (VR) and Augmented Reality (AR) applications is now broadly available to consumers, offering an immersive computing platform with both new opportunities and challenges. The ability to interact directly with immersive hardware is critical to ensuring that the Web is well equipped to operate as a first-class citizen in this environment. The Immersive Web Working Group has been stabilizing the WebXR Device API while the companion Immersive Web Community Group incubates the next series of features identified as key for the future of the Immersive Web. W3C plans a workshop focused on the needs and benefits at the intersection of VR and accessibility (Inclusive XR), on 5-6 November 2019 in Seattle, WA, USA, to explore existing and future approaches to making Virtual and Augmented Reality experiences more inclusive.

Web & Telecommunications

The Web is the open platform for mobile. Telecommunication service providers and network equipment providers have long been critical actors in the deployment of Web technologies. As the Web platform matures, it brings richer and richer capabilities to extend existing services to new users and devices, and to propose new and innovative services.

Real-Time Communications (WebRTC)

All Real-Time Communications specifications

WebRTC has reshaped the whole communication landscape by making any connected device a potential communication endpoint, bringing audio and video communications anywhere, on any network, vastly expanding the ability of operators to reach their customers. WebRTC serves as the cornerstone of many online communication and collaboration services. The WebRTC Working Group aims to bring WebRTC 1.0 (and the companion specification Media Capture and Streams) to Recommendation by the end of 2019. Intense efforts are focused on testing (supported by a dedicated hackathon at IETF 104) and interoperability. The group is considering pushing features that have not gotten enough traction to separate modules or to a later minor revision of the spec. Beyond WebRTC 1.0, the WebRTC Working Group will focus its efforts on WebRTC NV, for which the group has started documenting use cases.

Web & Networks

Recently launched in the wake of the May 2018 Web5G workshop, the Web & Networks Interest Group is chaired by representatives from AT&T, China Mobile and Intel, with a goal to explore solutions for web applications to achieve better performance and resource allocation, both on the device and in the network. The group's first efforts are around use cases, privacy and security requirements, and liaisons.

Automotive

All Automotive specifications

To create a rich application ecosystem for vehicles and other devices allowed to connect to the vehicle, the W3C Automotive Working Group is delivering a service specification to expose all common vehicle signals (engine temperature, fuel/charge level, range, tire pressure, speed, etc.). The Vehicle Information Service Specification (VISS), which is a Candidate Recommendation, is seeing more implementations across the industry. It provides the access method to a common data model for all the vehicle signals (presently encapsulating a thousand or so different data elements) and will grow to accommodate advances in automotive such as autonomous and driver-assist technologies and electrification. The group is already working on a successor to VISS, leveraging the underlying data model and the VIWI submission from Volkswagen, for a more robust means of accessing vehicle signal information and for applying the same paradigm to other automotive needs including location-based services, media, notifications and caching content.
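VISS is a WebSocket-based service, so client access looks roughly like the sketch below. The endpoint address and the signal path are hypothetical placeholders, and the message shape follows my reading of the Candidate Recommendation rather than quoting it.

```typescript
// Sketch of a VISS-style "get" request over WebSocket. The endpoint address
// and the signal path are hypothetical placeholders, not values from the spec.
const socket = new WebSocket("wss://vehicle.example/viss");

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({
    action: "get",                              // e.g. get, set, subscribe
    path: "Signal.Drivetrain.FuelSystem.Level", // illustrative signal path
    requestId: "1",
  }));
});

socket.addEventListener("message", (event) => {
  console.log("VISS response:", JSON.parse(event.data));
});
```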
The Automotive and Web Platform Business Group acts as an incubator for prospective standards work. One of its task forces is using W3C VISS to perform data sampling and off-board the information to the cloud. Access to the wealth of information that W3C's auto signals standard exposes is of interest to regulators, urban planners, insurance companies, auto manufacturers, fleet managers and owners, service providers and others. In addition to components needed for data sampling and edge computing, capturing user and owner consent, information collection methods and handling of data are in scope. The upcoming W3C Workshop on Data Models for Transportation (September 2019) is expected to focus on the need for additional ontologies in the transportation space.

Web of Things

All Web of Things specifications

W3C's Web of Things work is designed to bridge disparate technology stacks to allow devices to work together and achieve scale, thus enabling the potential of the Internet of Things by eliminating fragmentation and fostering interoperability. Thing Descriptions expressed in JSON-LD cover a thing's behavior, interaction affordances, data schema, security configuration, and protocol bindings. The Web of Things complements existing IoT ecosystems to reduce the cost and risk for suppliers and consumers of applications that create value by combining multiple devices and information services. Many sectors will benefit: smart homes, smart cities, smart industry, smart agriculture, smart healthcare and many more. The Web of Things Working Group is finishing the initial Web of Things standards, with support from the Web of Things Interest Group: Web of Things Architecture and Thing Descriptions.
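To make "Thing Descriptions expressed in JSON-LD" concrete, here is a minimal sketch. The device, property, and endpoint URL are invented; the context URL reflects the TD drafts of this period as I recall them and should be checked against the published specification.

```typescript
// Sketch of a minimal Web of Things Thing Description. The device name,
// property, and endpoint URL are invented for illustration.
const thingDescription = {
  "@context": "https://www.w3.org/2019/wot/td/v1", // assumed TD 1.0 context
  title: "ExampleLamp",
  securityDefinitions: { nosec_sc: { scheme: "nosec" } },
  security: ["nosec_sc"],
  properties: {
    status: {
      type: "string",
      forms: [{ href: "https://lamp.example/status" }], // protocol binding
    },
  },
} as const;
```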
Strengthening the Core of the Web

HTML

The HTML Working Group was chartered in early June to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and to produce W3C Recommendations from WHATWG Review Drafts. A few days before, W3C and the WHATWG signed a Memorandum of Understanding outlining the agreement to collaborate on the development of a single version of the HTML and DOM specifications. Issues and proposed solutions for HTML and DOM are handled via the newly rechartered HTML Working Group in the WHATWG repositories. The HTML Working Group is targeting November 2019 to bring HTML and DOM to Candidate Recommendation.

CSS

All CSS specifications

CSS is a critical part of the Open Web Platform. The CSS Working Group gathers requirements from two large groups of CSS users: the publishing industry and application developers. Within W3C, those groups are exemplified by the Publishing groups and the Web Platform Working Group. The former requires things like better pagination support and advanced font handling; the latter needs intelligent (and fast!) scrolling and animations. What we know as CSS is actually a collection of almost a hundred specifications, referred to as "modules". The current state of CSS is defined by a snapshot, updated once a year. The group also publishes an index of every term defined by CSS specifications.

Fonts

All Fonts specifications

The Web Fonts Working Group develops specifications that allow the interoperable deployment of downloadable fonts on the Web, with a focus on Progressive Font Enrichment as well as maintenance of the WOFF Recommendations. Recent and ongoing work includes:

- Early API experiments by Adobe and Monotype have demonstrated the feasibility of a font enrichment API, where a server delivers a font with a minimal glyph repertoire and the client can query the full repertoire and request additional subsets on the fly.
- In other experiments, the Brotli compression used in WOFF 2 was extended to support shared dictionaries and patch updates.
- Metrics to quantify improvement are a current hot discussion topic.
- The group will meet at ATypI 2019 in Japan to gather requirements from the international typography community.
- The group will first produce a report summarizing the strengths and weaknesses of each prototype solution, by Q2 2020.

SVG

All SVG specifications

SVG is an important and widely used part of the Open Web Platform. The SVG Working Group focuses on aligning the SVG 2.0 specification with browser implementations, having split the specification into a currently-implemented 2.0 and a forward-looking 2.1. Current activity is on stabilization, increased integration with the Open Web Platform, and test coverage analysis. The Working Group was rechartered in March 2019. A new work item concerns native (non-Web-browser) uses of SVG as a non-interactive vector graphics format.

Audio

The Web Audio Working Group was extended to finish its work on the Web Audio API, expecting to publish it as a Recommendation by year end. The specification enables synthesizing audio in the browser. Audio operations are performed with audio nodes, which are linked together to form a modular audio routing graph. Multiple sources, with different types of channel layout, are supported. This modular design provides the flexibility to create complex audio functions with dynamic effects. The first version of the Web Audio API is now feature complete and is implemented in all modern browsers. Work has started on the next version, and new features are being incubated in the Audio Community Group.

Performance

Web Performance

All Web Performance specifications

There are currently 18 specifications in development in the Web Performance Working Group, aiming to provide methods to observe and improve aspects of application performance of user agent features and APIs. The W3C team is also looking at related work incubated in the W3C GPU for the Web (WebGPU) Community Group, which is poised to transition to a W3C Working Group; a preliminary draft charter is available.

WebAssembly

All WebAssembly specifications

WebAssembly improves Web performance and power consumption by being a virtual machine and execution environment enabling loaded pages to run native (compiled) code. It is deployed in Firefox, Edge, Safari and Chrome. The specification will soon reach Candidate Recommendation. WebAssembly enables near-native performance, optimized load time, and, perhaps most importantly, a compilation target for existing code bases. While it has a small number of native types, much of the performance increase relative to JavaScript derives from its use of consistent typing. WebAssembly leverages decades of optimization for compiled languages, and its byte code is optimized for compactness and streaming (the web page starts executing while the rest of the code downloads). Network and API access all occur through accompanying JavaScript libraries; the security model is identical to that of JavaScript.
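A small sketch of what consuming a WebAssembly module from script looks like; the module URL and exported function name are placeholders.

```typescript
// Streaming compilation of a WebAssembly module: compilation begins while
// the bytes are still downloading. Module URL and export name are placeholders.
async function runWasm(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("example-module.wasm"), // hypothetical module
    {},                           // import object, empty in this sketch
  );
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3));         // call into the compiled code
}
```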
Requirements gathering and language development occur in the Community Group, while the Working Group manages test development, community review, and progression of specifications on the Recommendation Track.

Testing

Browser testing plays a critical role in the growth of the Web by:

- Improving the reliability of Web technology definitions;
- Improving the quality of implementations of these technologies by helping vendors to detect bugs in their products;
- Improving the data available to Web developers on known bugs and deficiencies of Web technologies, by publishing the results of these tests.

Browser Testing and Tools

The Browser Testing and Tools Working Group is developing WebDriver version 2, having published the W3C Recommendation of WebDriver last year. WebDriver acts as a remote control interface that enables introspection and control of user agents: it provides a platform- and language-neutral wire protocol as a way for out-of-process programs to remotely instruct the behavior of web browsers, and emulates the actions of a real person using the browser.

WebPlatform Tests

The WebPlatform Tests project now provides a mechanism, TestDriver, that allows full automation of tests that previously needed to be run manually. TestDriver enables sending trusted key and mouse events, sending complex series of trusted pointer and key interactions for things like in-content drag-and-drop or pinch zoom, and even file upload. W3C began work in 2014 on this coordinated open-source effort to build a cross-browser test suite for the Web Platform, which the WHATWG and all major browsers have adopted.

Web of Data

All Data specifications

There have been several great success stories around the standardization of data on the web over the past year. Verifiable Claims seems to have significant uptake. It is also significant that the Decentralized Identifier WG charter received numerous favorable reviews, and the group was just recently launched. JSON-LD has been a major success, with large deployment on Web sites via schema.org:

- JSON-LD 1.1 completed technical work and is about to transition to CR
- More than 25% of websites today include schema.org data in JSON-LD
- The Web of Things description has been in CR since May, making use of JSON-LD
- The Verifiable Credentials data model has been in CR since July, also making use of JSON-LD
- Continued strong interest in decentralized identifiers
- Engagement from the TAG with reframing core documents, such as Ethical Web Principles, to include data on the web within their scope

Data is increasingly important for all organizations, especially with the rise of IoT and Big Data. W3C has a mature and extensive suite of standards relating to data, developed over two decades of experience, with plans for further work on making it easier for developers to work with graph data and knowledge graphs. Linked Data is about the use of URIs as names for things, the ability to dereference these URIs to get further information, and the inclusion of links to other data. There are ever-increasing sources of open Linked Data on the Web, as well as data services that are restricted to the suppliers and consumers of those services. The digital transformation of industry seeks to exploit advanced digital technologies, helping businesses integrate horizontally along the supply and value chains, and vertically from the factory floor to the office floor. W3C is seeking to make it easier to support enterprise-wide data management and governance, reflecting the strategic importance of data to modern businesses.
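The schema.org-in-JSON-LD deployment mentioned above typically looks like the following sketch, with invented book metadata.

```typescript
// A schema.org description expressed as JSON-LD and embedded in the page,
// the deployment pattern described above. The book metadata is invented.
const jsonLd = {
  "@context": "https://schema.org",
  "@type": "Book",
  name: "An Example Book",
  author: { "@type": "Person", name: "A. N. Author" },
};

const script = document.createElement("script");
script.type = "application/ld+json";          // how pages usually embed JSON-LD
script.textContent = JSON.stringify(jsonLd);
document.head.appendChild(script);
```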
Traditional approaches to data have focused on tabular databases (SQL/RDBMS), Comma-Separated Value (CSV) files, and data embedded in PDF documents and spreadsheets. We're now in the midst of a major shift to graph data, with nodes and labeled directed links between them. Graph data is:

- Faster than using SQL and associated JOIN operations
- More favorable to integrating data from heterogeneous sources
- Better suited to situations where the data model is evolving

In the wake of the recent W3C Workshop on Graph Data, we are in the process of launching a Graph Standardization Business Group to provide a business perspective with use cases and requirements, and to coordinate technical standards work and liaisons with external organizations.

Web for All

Security, Privacy, Identity

All Security specifications, all Privacy specifications

Authentication on the Web: As the WebAuthn Level 1 W3C Recommendation published last March sees wide implementation and adoption of strong cryptographic authentication, work is proceeding on Level 2. The open-standard Web API exposes strong authentication technology built into platforms, browsers, operating systems (including mobile) and hardware, offering protection against hacking, credential theft and phishing attacks, and thus aiming to end the era of passwords as a security construct. You may read more in our March press release.

Privacy: An increasing number of W3C specifications are benefitting from privacy and security review; there are security and privacy aspects to every specification, and early review is essential. Working with the TAG, the Privacy Interest Group has updated the Self-Review Questionnaire: Security and Privacy. Other recent work of the group includes public blogging further to its exploration of anti-patterns in standards and permission prompts.

Security: The Web Application Security Working Group adopted Feature Policy, which aims to allow developers to selectively enable, disable, or modify the behavior of certain browser features and APIs within their application, and Fetch Metadata, which aims to provide servers with enough information to make a priori decisions about whether or not to service a request, based on the way it was made and the context in which it will be used.
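As a sketch of the kind of a priori decision Fetch Metadata enables, a server can inspect the `Sec-Fetch-Site` request header before doing any work. The framework-neutral helper and the deliberately simplistic policy below are illustrative, not taken from the specification.

```typescript
// Illustrative server-side check using the Sec-Fetch-Site request header.
// The helper shape and the policy are simplified for illustration.
function isRequestAllowed(headers: Map<string, string>): boolean {
  const site = headers.get("sec-fetch-site");
  // Header absent: likely an older browser; many deployments choose to allow.
  if (site === undefined) return true;
  // Allow same-origin requests and direct navigations; reject cross-site loads.
  return site === "same-origin" || site === "none";
}
```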
The Web Payment Security Interest Group, launched last April, convenes members from W3C, EMVCo, and the FIDO Alliance to discuss cooperative work to enhance the security and interoperability of Web payments (read more about payments).

Internationalization (i18n)

All Internationalization specifications, educational articles related to internationalization, spec developers checklist

Only about a quarter of current Web users use English online, and that proportion will continue to decrease as the Web reaches more and more communities of limited English proficiency. If the Web is to live up to the "World Wide" portion of its name, and truly work for stakeholders all around the world, it must support the needs of worldwide users as they engage with content in their various languages. The growth of epublishing also brings requirements for new features and improved typography on the Web, and it is important to ensure the needs of local communities are captured. The W3C Internationalization Initiative was set up to increase the in-house resources dedicated to accelerating progress in making the World Wide Web "worldwide", through gathering user requirements, supporting developers, and education and outreach.

For an overview of current projects, see the i18n radar. W3C's internationalization efforts progressed on a number of fronts recently.

Requirements:

- New African and European language groups will work on gap analysis, errata and layout requirements.
- Gap analysis: Japanese, Devanagari, Bengali, Tamil, Lao, Khmer, Javanese, and Ethiopic were updated in the gap-analysis documents.
- Layout requirements documents: notable progress tracked in the Southeast Asian Task Force, while work continues on Chinese layout requirements.

Developer support:

- Spec reviews: the i18n WG continues active review of specifications of the WHATWG and other W3C Working Groups.
- Short review checklist: an easy way to begin a self-review; it helps spec developers understand what aspects of their spec are likely to need attention for internationalization, and points them to more detailed checklists for the relevant topics. It also helps those reviewing specs for i18n issues.
- Strings on the Web: Language and Direction Metadata lays out issues and discusses potential solutions for passing information about language and direction with strings in JSON or other data formats. The document was rewritten for clarity, and expanded. The group is collaborating with the JSON-LD and Web Publishing groups to develop a plan for updating RDF, JSON-LD and related specifications to handle metadata for the base direction of text (bidi).
- User-friendly test format: a new format was developed for Internationalization Test Suite tests which displays helpful information about how each test works. This is particularly useful because those tests are pointed to by educational materials and gap-analysis documents.
- Web Platform Tests: a large number of tests in the i18n test suite have been ported to the WPT repository, including css-counter-styles, css-ruby, css-syntax, css-text, css-text-decor, css-writing-modes, and css-pseudo.

Education and outreach: for all educational materials, see the HTML & CSS Authoring Techniques.

Web Accessibility

All Accessibility specifications, WAI resources

The Web Accessibility Initiative supports W3C's Web for All mission. Recent achievements include:

- Education and training: Inaccessibility of CAPTCHA was updated to bring our analysis and recommendations up to date with CAPTCHA practice today, concluding two years of extensive work and invaluable input from the public (read more on the W3C Blog). Learn why your web content and applications should be accessible. The Education and Outreach Working Group has completed revising and updating the Business Case for Digital Accessibility.
- Accessibility guidelines: The Accessibility Guidelines Working Group has continued to update WCAG Techniques and Understanding WCAG 2.1, and published a Candidate Recommendation of Accessibility Conformance Testing Rules Format 1.0 to improve inter-rater reliability when evaluating conformance of web content to WCAG. An updated charter is being developed to host work on "Silver", the next generation accessibility guidelines (WCAG 2.2).

There are accessibility aspects to most specifications. Check your work with the FAST checklist.
Outreach to the world

W3C Developer Relations

To foster the excellent feedback loop between Web standards development and Web developers, and to grow participation from that diverse community, recent W3C Developer Relations activities include:

- @w3cdevs tracks the enormous amount of work happening across W3C
- A W3C Track during the Web Conference 2019 in San Francisco
- Tech videos: W3C published the 2019 Web Games Workshop videos
- The 16 September 2019 Developer Meetup in Fukuoka, Japan, open to all, will combine a set of technical demos prepared by W3C groups and a series of talks on a selected set of W3C technologies and projects
- W3C is involved with Mozilla, Google, Samsung, Microsoft and Bocoup in the organization of ViewSource 2019 in Amsterdam (read more on the W3C Blog)

W3C Training

In partnership with edX, W3C's MOOC training program, W3Cx, offers a complete "Front-End Web Developer" (FEWD) professional certificate program consisting of a suite of five courses on the foundational languages that power the Web: HTML5, CSS and JavaScript. We count nearly 900K students from all over the world.

Translations

Many Web users rely on translations of documents developed at W3C, whose official language is English. W3C is extremely grateful for the continuous efforts of its community in ensuring that our various deliverables in general, and our specifications in particular, are made available in other languages, for free, exposing them to a much more diverse set of readers. Last spring we developed a more robust system, a new listing of translations of W3C specifications, and updated instructions on how to contribute to our translation efforts.

W3C Liaisons

Liaison and coordination with numerous organizations and Standards Development Organizations (SDOs) is crucial for W3C to:

- make sure standards are interoperable
- coordinate our respective agendas in Internet governance: W3C participates in ICANN, GIPO, IGF, and the I* organizations (ICANN, IETF, ISOC, IAB)
- ensure at the government liaison level that our standards work is officially recognized when important to our membership, so that products based on them (often built by our members) are part of procurement orders; W3C has ARO/PAS status with ISO and participates in the EU MSP and Rolling Plan on Standardization
- ensure the global set of Web and Internet standards forms a compatible stack of technologies, at the technical and policy levels (patent regime, fragmentation, use in policy making)
- promote standards adoption equally by industry, the public sector, and the public at large

Coralie Mercier, Editor, W3C Marketing & Communications
LLM Prompt Engineering Simplified Book
teal33t
Persian Translation of Google Prompt Engineering Book
hackcrypto
Fluxion is a security auditing and social-engineering research tool. It is a remake of linset by vk496 with (hopefully) fewer bugs and more functionality. The script attempts to retrieve the WPA/WPA2 key from a target access point by means of a social-engineering (phishing) attack. It's compatible with the latest release of Kali (rolling). Fluxion's attack setup is mostly manual, but an experimental auto mode handles some of the setup parameters. Read the FAQ before opening issues. If you need quick help, Fluxion is also available on Gitter. You can talk with us on Gitter or on Discord.

Installation

Read here before you do the following steps.

- Download the latest revision: git clone --recursive git@github.com:FluxionNetwork/fluxion.git
- Switch to the tool's directory: cd fluxion
- Run Fluxion (missing dependencies will be auto-installed): ./fluxion.sh
- Fluxion is also available on Arch: cd bin/arch then makepkg, or using the BlackArch repo: pacman -S fluxion

Changelog

Fluxion gets weekly updates with new features, improvements, and bugfixes. Be sure to check out the changelog here.

How to contribute

All contributions are welcome! Code, documentation, graphics, or even design suggestions; use GitHub to its fullest. Submit pull requests, contribute tutorials or other wiki content; whatever you have to offer, it'll be appreciated, but please follow the style guide.

How it works

1. Scan for a target wireless network.
2. Launch the Handshake Snooper attack.
3. Capture a handshake (necessary for password verification).
4. Launch the Captive Portal attack:
   - Spawns a rogue (fake) AP, imitating the original access point.
   - Spawns a DNS server, redirecting all requests to the attacker's host running the captive portal.
   - Spawns a web server, serving the captive portal which prompts users for their WPA/WPA2 key.
   - Spawns a jammer, deauthenticating all clients from the original AP and luring them to the rogue AP.
5. All authentication attempts at the captive portal are checked against the handshake file captured earlier.
6. The attack automatically terminates once a correct key has been submitted.
7. The key is logged and clients are allowed to reconnect to the target access point.

For a guide to the Captive Portal attack, read the Captive Portal attack guide.

Requirements

- A Linux-based operating system. We recommend Kali Linux 2 or Kali rolling. Kali 2 and rolling support the latest aircrack-ng versions.
- An external wifi card is recommended.

Related work

For development I use vim and tmux. Here are my dotfiles.

Credits

- l3op - contributor
- dlinkproto - contributor
- vk496 - developer of linset
- Derv82 - @Wifite/2
- Princeofguilty - @webpages and @buteforce
- Photos for wiki @http://www.kalitutorials.net
- Ons Ali @wallpaper
- PappleTec @sites
- MPX4132 - Fluxion V3

Disclaimer

The authors do not own the logos under the /attacks/Captive Portal/sites/ directory. Copyright disclaimer: under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. The usage of Fluxion for attacking infrastructure without prior mutual consent could be considered an illegal activity and is highly discouraged by its authors/developers. It is the end user's responsibility to obey all applicable local, state and federal laws. The authors assume no liability and are not responsible for any misuse or damage caused by this program.

Note: Beware of sites pretending to be related to the Fluxion project; these may be delivering malware. Fluxion DOES NOT WORK on the Windows Subsystem for Linux, because the subsystem doesn't allow access to network interfaces; any issue regarding this will be closed immediately.

Links

- Fluxion website: https://fluxionnetwork.github.io/fluxion/
- Discord: https://discordapp.com/invite/G43gptk
- Gitter: https://gitter.im/FluxionNetwork/Lobby
LC1332
The Silk Magic Book records magic prompts for some very large LLMs. The Silk Magic Book belongs to the project Luotuo (骆驼), which was created by 李鲁鲁, 冷子昂, and 陈启源
microsoft-partner-solutions-ai
Curated AI prompts for Microsoft architects and engineers to accelerate solution discovery and prototyping with customers — from use case ideation to code generation.
mehdikiani
Prompt engineering for beginners: a Persian book
X-TOOL-S
# Git Credential Manager

[](https://github.com/GitCredentialManager/git-credential-manager/actions/workflows/continuous-integration.yml)

---

[Git Credential Manager](https://github.com/GitCredentialManager/git-credential-manager) (GCM) is a secure Git credential helper built on [.NET](https://dotnet.microsoft.com) that runs on Windows, macOS, and Linux. Compared to Git's [built-in credential helpers](https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage) (Windows: wincred, macOS: osxkeychain, Linux: gnome-keyring/libsecret), which provide single-factor authentication for any HTTP-enabled Git repository, GCM provides multi-factor authentication support for [Azure DevOps](https://dev.azure.com/), Azure DevOps Server (formerly Team Foundation Server), GitHub, Bitbucket, and GitLab.

Git Credential Manager (GCM) replaces the .NET Framework-based [Git Credential Manager for Windows](https://github.com/microsoft/Git-Credential-Manager-for-Windows) (GCM) and the Java-based [Git Credential Manager for Mac and Linux](https://github.com/microsoft/Git-Credential-Manager-for-Mac-and-Linux) (Java GCM), providing a consistent authentication experience across all platforms.

## Current status

Git Credential Manager is currently available for Windows, macOS, and Linux\*. GCM only works with HTTP(S) remotes; you can still use Git with SSH:

- [Azure DevOps SSH](https://docs.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops)
- [GitHub SSH](https://help.github.com/en/articles/connecting-to-github-with-ssh)
- [Bitbucket SSH](https://confluence.atlassian.com/bitbucket/ssh-keys-935365775.html)

Feature|Windows|macOS|Linux
-|:-:|:-:|:-:
Installer/uninstaller|✓|✓|✓\*
Secure platform credential storage|✓ [(see more)](docs/credstores.md)|✓ [(see more)](docs/credstores.md)|✓ [(see more)](docs/credstores.md)
Multi-factor authentication support for Azure DevOps|✓|✓|✓
Two-factor authentication support for GitHub|✓|✓|✓
Two-factor authentication support for Bitbucket|✓|✓|✓
Two-factor authentication support for GitLab|✓|✓|✓
Windows Integrated Authentication (NTLM/Kerberos) support|✓|_N/A_|_N/A_
Basic HTTP authentication support|✓|✓|✓
Proxy support|✓|✓|✓
`amd64` support|✓|✓|✓
`x86` support|✓|_N/A_|✗
`arm64` support|best effort|via Rosetta 2|best effort, no packages
`armhf` support|_N/A_|_N/A_|best effort, no packages

(\*) GCM guarantees support for the below Linux distributions. GCM maintainers also monitor and evaluate issues opened against other distributions to determine community interest/engagement and whether an emerging platform should become fully supported.

- Debian/Ubuntu/Linux Mint
- Fedora/CentOS/RHEL
- Alpine

## Download and Install

### macOS Homebrew

The preferred installation mechanism is using Homebrew; we offer a Cask in our custom Tap. To install, run the following:

```shell
brew tap microsoft/git
brew install --cask git-credential-manager-core
```

After installing you can stay up-to-date with new releases by running:

```shell
brew upgrade git-credential-manager-core
```

#### Git Credential Manager for Mac and Linux (Java-based GCM)

If you have an existing installation of the 'Java GCM' on macOS and you have installed this using Homebrew, this installation will be unlinked (`brew unlink git-credential-manager`) when GCM is installed.
#### Uninstall

To uninstall, run the following:

```shell
brew uninstall --cask git-credential-manager-core
```

---

### macOS Package

We also provide a [.pkg installer](https://github.com/GitCredentialManager/git-credential-manager/releases/latest) with each release. To install, double-click the installation package and follow the instructions presented.

#### Uninstall

To uninstall, run the following:

```shell
sudo /usr/local/share/gcm-core/uninstall.sh
```

---

<!-- this explicit anchor should stay stable so that external docs can link here -->
<!-- markdownlint-disable-next-line no-inline-html -->
<a name="linux-install-instructions"></a>

### Linux

#### Experimental: install from source helper script

If you would like to help dogfood our new install from source helper script, run the following:

1. To ensure `curl` is installed:

   ```shell
   curl --version
   ```

   If `curl` is not installed, please use your distribution's package manager to install it.

1. To download and run the script:

   ```shell
   curl -LO https://raw.githubusercontent.com/GitCredentialManager/git-credential-manager/main/src/linux/Packaging.Linux/install-from-source.sh && sh ./install-from-source.sh && git-credential-manager-core configure
   ```

**Note:** You will be prompted to enter your credentials so that the script can download GCM's dependencies using your distribution's package manager.

#### Ubuntu/Debian distributions

Download the latest [.deb package](https://github.com/GitCredentialManager/git-credential-manager/releases/latest), and run the following:

```shell
sudo dpkg -i <path-to-package>
git-credential-manager-core configure
```

**Note:** Although packages were previously offered on certain [Microsoft Ubuntu package feeds](https://packages.microsoft.com/repos/), GCM no longer publishes to these repositories. Please install the Debian package using the above instructions instead.

To uninstall:

```shell
git-credential-manager-core unconfigure
sudo dpkg -r gcmcore
```

#### Other distributions

Download the latest [tarball](https://github.com/GitCredentialManager/git-credential-manager/releases/latest), and run the following:

```shell
tar -xvf <path-to-tarball> -C /usr/local/bin
git-credential-manager-core configure
```

To uninstall:

```shell
git-credential-manager-core unconfigure
rm $(command -v git-credential-manager-core)
```

**Note:** all Linux distributions [require additional configuration](https://aka.ms/gcm/credstores) to use GCM.

---

### Windows

GCM is included with [Git for Windows](https://gitforwindows.org/), and the latest version is included in each new Git for Windows release. This is the preferred way to install GCM on Windows. During installation you will be asked to select a credential helper, with GCM being set as the default.

#### Standalone installation

You can also download the [latest installer](https://github.com/GitCredentialManager/git-credential-manager/releases/latest) for Windows to install GCM standalone.

**:warning: Important :warning:**

Installing GCM as a standalone package on Windows will forcibly override the version of GCM that is bundled with Git for Windows, **even if the version bundled with Git for Windows is a later version**.

There are two flavors of standalone installation on Windows:

- User (preferred) (`gcmcoreuser-win*`): Does not require administrator rights. Will install only for the current user and updates only the current user's Git configuration.
- System (`gcmcore-win*`): Requires administrator rights. Will install for all users on the system and update the system-wide Git configuration.
To install, double-click the desired installation package and follow the instructions presented.

#### Uninstall (Windows 10)

To uninstall, open the Settings app and navigate to the Apps section. Select "Git Credential Manager" and click "Uninstall".

#### Uninstall (Windows 7-8.1)

To uninstall, open Control Panel and navigate to the Programs and Features screen. Select "Git Credential Manager" and click "Remove".

#### Windows Subsystem for Linux (WSL)

Git Credential Manager can be used with the [Windows Subsystem for Linux (WSL)](https://aka.ms/wsl) to enable secure authentication of your remote Git repositories from inside of WSL. [Please see the GCM on WSL docs](docs/wsl.md) for more information.

## Supported Git versions

Git Credential Manager tries to be compatible with the broadest set of Git versions (within reason). However, there are some known problematic releases of Git that are not compatible:

- Git 1.x: The initial major version of Git is not supported or tested with GCM.
- Git 2.26.2: This version of Git introduced a breaking change with parsing credential configuration that GCM relies on. This issue was fixed in commit [`12294990`](https://github.com/git/git/commit/12294990c90e043862be9eb7eb22c3784b526340) of the Git project, and released in Git 2.27.0.

## How to use

Once it's installed and configured, Git Credential Manager is called implicitly by Git. You don't have to do anything special, and GCM isn't intended to be called directly by the user. For example, when pushing (`git push`) to [Azure DevOps](https://dev.azure.com), [Bitbucket](https://bitbucket.org), or [GitHub](https://github.com), a window will automatically open and walk you through the sign-in process. (This process will look slightly different for each Git host, and in some cases will depend on whether you've connected to an on-premises or cloud-hosted Git host.) Later Git commands in the same repository will re-use existing credentials or tokens that GCM has stored, for as long as they're valid.

Read full command-line usage [here](docs/usage.md).

### Configuring a proxy

See detailed information [here](https://aka.ms/gcm/httpproxy).

## Additional Resources

- [Frequently asked questions](docs/faq.md)
- [Development and debugging](docs/development.md)
- [Command-line usage](docs/usage.md)
- [Configuration options](docs/configuration.md)
- [Environment variables](docs/environment.md)
- [Enterprise configuration](docs/enterprise-config.md)
- [Network and HTTP configuration](docs/netconfig.md)
- [Credential stores](docs/credstores.md)
- [Architectural overview](docs/architecture.md)
- [Host provider specification](docs/hostprovider.md)
- [Azure Repos OAuth tokens](docs/azrepos-users-and-tokens.md)
- [GitLab support](docs/gitlab.md)

## Experimental Features

- [Windows broker (experimental)](docs/windows-broker.md)

## Contributing

This project welcomes contributions and suggestions. See the [contributing guide](CONTRIBUTING.md) to get started. This project follows [GitHub's Open Source Code of Conduct](CODE_OF_CONDUCT.md).

## License

We're [MIT](LICENSE) licensed. When using GitHub logos, please be sure to follow the [GitHub logo guidelines](https://github.com/logos).
ajinkyalahade
This Python script reads an e-book in PDF format, splits the text into prompts, and uses the OpenAI GPT-3 API to generate completions for each prompt. The completions are then saved in three different file formats (Excel, JSON, and text) that can be used to fine-tune your own GPT-3 model.
paulmcfe
Prompts and other code for the book Creating Your Own Website with ChatGPT
82goober82
Fluxion is the future of MITM WPA attacks. Fluxion is a security auditing and social-engineering research tool. It is a remake of linset by vk496 with (hopefully) fewer bugs and more functionality. The script attempts to retrieve the WPA/WPA2 key from a target access point by means of a social-engineering (phishing) attack. It's compatible with the latest release of Kali (rolling). Fluxion's attack setup is mostly manual, but an experimental auto mode handles some of the setup parameters. Read the FAQ before opening issues. If you need quick help, Fluxion is also available on Gitter. You can talk with us on Gitter or on Discord.

Installation

Read here before you do the following steps.

- Download the latest revision: git clone --recursive git@github.com:FluxionNetwork/fluxion.git
- Switch to the tool's directory: cd fluxion
- Run Fluxion (missing dependencies will be auto-installed): ./fluxion.sh
- Fluxion is also available on Arch: cd bin/arch then makepkg, or using the BlackArch repo: pacman -S fluxion

Changelog

Fluxion gets weekly updates with new features, improvements, and bugfixes. Be sure to check out the changelog here.

How to contribute

All contributions are welcome! Code, documentation, graphics, or even design suggestions; use GitHub to its fullest. Submit pull requests, contribute tutorials or other wiki content; whatever you have to offer, it'll be appreciated, but please follow the style guide.

How it works

1. Scan for a target wireless network.
2. Launch the Handshake Snooper attack.
3. Capture a handshake (necessary for password verification).
4. Launch the Captive Portal attack:
   - Spawns a rogue (fake) AP, imitating the original access point.
   - Spawns a DNS server, redirecting all requests to the attacker's host running the captive portal.
   - Spawns a web server, serving the captive portal which prompts users for their WPA/WPA2 key.
   - Spawns a jammer, deauthenticating all clients from the original AP and luring them to the rogue AP.
5. All authentication attempts at the captive portal are checked against the handshake file captured earlier.
6. The attack automatically terminates once a correct key has been submitted.
7. The key is logged and clients are allowed to reconnect to the target access point.

For a guide to the Captive Portal attack, read the Captive Portal attack guide.

Requirements

- A Linux-based operating system. We recommend Kali Linux 2 or Kali rolling. Kali 2 and rolling support the latest aircrack-ng versions.
- An external wifi card is recommended.

Related work

For development I use vim and tmux. Here are my dotfiles.

Credits

- l3op - contributor
- dlinkproto - contributor
- vk496 - developer of linset
- Derv82 - @Wifite/2
- Princeofguilty - @webpages and @buteforce
- Photos for wiki @http://www.kalitutorials.net
- Ons Ali @wallpaper
- PappleTec @sites
- MPX4132 - Fluxion V3

Disclaimer

The authors do not own the logos under the /attacks/Captive Portal/sites/ directory. Copyright disclaimer: under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. The usage of Fluxion for attacking infrastructure without prior mutual consent could be considered an illegal activity and is highly discouraged by its authors/developers. It is the end user's responsibility to obey all applicable local, state and federal laws. The authors assume no liability and are not responsible for any misuse or damage caused by this program.

Note: Beware of sites pretending to be related to the Fluxion project; these may be delivering malware. Fluxion DOES NOT WORK on the Windows Subsystem for Linux, because the subsystem doesn't allow access to network interfaces; any issue regarding this will be closed immediately.

Links

- Fluxion website: https://fluxionnetwork.github.io/fluxion/
- Discord: https://discordapp.com/invite/G43gptk
- Gitter: https://gitter.im/FluxionNetwork/Lobby
ac1982
A native prompt-engineering translation script that preserves e-book formatting, URLs, images, and more.
abdullah-afzal
It is a Java desktop application: a cinema ticket booking system in which users can book movies. Movies are scheduled by date, and only one movie can be scheduled at a specific time in a given hall. The system also keeps a record of seats, and only a limited number of users can book seats for a specific showing. Many movies can be scheduled at different times. Each user signs up and then logs in to enter the system. Users can view and book as many movies as they want, and each user has a dashboard showing their booked movies. The system supports multiple admins, who can add, remove, update, and schedule movies. When scheduling a movie, if the time overlaps with another showing or the scheduled date is in the past, the admin is warned that the chosen time and date are invalid.
KardelRuveyda
A book for learning Prompt Engineering, which we created together with Erdoğan Eker.
nikuscs
📖 Store, search & instantly copy AI prompts from your macOS menu bar — built with Tauri v2 + React
speakleash
Speakleash resource of prompts from various domains.
akshaykhanna
The website is built with ASP.NET using C#. Software used: Visual Studio 2010 (front-end and back-end development), SQL Server Management Studio 2008 (database), and Crystal Reports (generating booking receipts). ITC hotels are used as the reference for bookings. A user can book a room in an ITC hotel only if the desired room type in the desired hotel is available (i.e., vacant rooms of that type exist); otherwise the user is not allowed to proceed and is instead prompted to choose a different date or room. The availability logic behind this is described at https://akshaykhanna93.wordpress.com/2014/09/14/searching-for-booking-website-logic/
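The project is ASP.NET/C# with SQL Server; purely as a hypothetical sketch of the vacancy check described above (written in Python with SQLite for self-containment; the rooms/bookings schema and function name are assumptions, not taken from the project):

import sqlite3

def rooms_available(db_path, hotel_id, room_type, check_in, check_out):
    # True if at least one room of the requested type is free for the
    # whole [check_in, check_out) interval (dates as ISO strings).
    con = sqlite3.connect(db_path)
    try:
        total = con.execute(
            "SELECT COUNT(*) FROM rooms WHERE hotel_id = ? AND room_type = ?",
            (hotel_id, room_type),
        ).fetchone()[0]
        # Count existing bookings of that type which overlap the request.
        booked = con.execute(
            """SELECT COUNT(*) FROM bookings b
               JOIN rooms r ON r.id = b.room_id
               WHERE r.hotel_id = ? AND r.room_type = ?
                 AND b.check_in < ? AND b.check_out > ?""",
            (hotel_id, room_type, check_out, check_in),
        ).fetchone()[0]
        return booked < total
    finally:
        con.close()

The idea is the same in any stack: count rooms of the requested type, count overlapping bookings, and allow the booking only while booked < total.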
fantabhelpuri
An app that lets you create an outline of a book and then convert the outline into prompts that can be fed into an AI to get back prose.
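A minimal Python sketch of the outline-to-prompts idea (the prompt template wording is an assumption; the app's actual prompts may differ):

def outline_to_prompts(title, outline):
    # Turn each (heading, summary) outline entry into a standalone writing prompt.
    prompts = []
    for i, (heading, summary) in enumerate(outline, start=1):
        prompts.append(
            f"You are writing chapter {i} ('{heading}') of the book '{title}'. "
            f"Chapter summary: {summary}. Write the chapter in full prose."
        )
    return prompts

chapters = [("The Setup", "Introduce the town and its missing clock"),
            ("The Search", "The children follow the river to the mill")]
print(outline_to_prompts("The Clockless Town", chapters)[0])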
82goober82
Fluxion is the future of MITM WPA attacks: a security auditing and social-engineering research tool (the description is identical to the Fluxion entry above).
mamiriqbal1
A simple demonstration of how you can implement retrieval augmented generation (RAG) for a book.
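As a minimal, self-contained sketch of RAG over a book's text (generic keyword scoring stands in for the embedding search a real implementation would use; nothing here is taken from the repository):

import math, re
from collections import Counter

def chunk(text, size=500):
    # Split the book into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    # Crude relevance: count of shared terms, length-normalized.
    q = Counter(re.findall(r"\w+", query.lower()))
    p = Counter(re.findall(r"\w+", passage.lower()))
    overlap = sum(min(q[t], p[t]) for t in q)
    return overlap / math.sqrt(len(p) + 1)

def build_prompt(book_text, question, k=3):
    # Retrieve the top-k chunks and pack them into an LLM prompt.
    top = sorted(chunk(book_text), key=lambda c: score(question, c), reverse=True)[:k]
    context = "\n---\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

A production version would replace score with vector similarity over embeddings, but retrieve-then-prompt is the whole pattern.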
richardleighdavies
Practical code examples and implementations from the book "Prompt Engineering in Practice". Demonstrates text generation, prompt chaining, and prompt routing using Python and LangChain. Features real-world examples of interacting with OpenAI's GPT models, structured output handling, and multi-step prompt workflows.
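As an illustration of prompt chaining in LangChain's expression language (a sketch assuming langchain-openai is installed and OPENAI_API_KEY is set; the model name and prompt texts are assumptions, not taken from the book's code):

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
parse = StrOutputParser()

outline = ChatPromptTemplate.from_template(
    "Write a three-point outline about {topic}.")
draft = ChatPromptTemplate.from_template(
    "Expand this outline into a short paragraph:\n{outline}")

# Chain the two prompts: the first model's output feeds the second prompt.
chain = (
    outline | llm | parse
    | {"outline": RunnablePassthrough()}
    | draft | llm | parse
)

print(chain.invoke({"topic": "prompt engineering"}))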
Anubhob435
Multi-agent AI system for automated book creation. Editor House uses Google ADK and Gemini models to plan, write, edit, illustrate, and compile complete books from a single topic prompt, with persistent metadata tracking via MongoDB
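Here is a generic, hypothetical sketch of that plan/write/edit pipeline in plain Python (it deliberately makes no Google ADK calls; call_llm is a stand-in for a real Gemini client, and all names are invented for illustration):

def call_llm(role: str, prompt: str) -> str:
    # Stand-in for a real model call (e.g., Gemini through your client of choice).
    return f"[{role}] {prompt[:48]}..."

def make_book(topic: str) -> dict:
    # Planner agent: produce a chapter list from the single topic prompt.
    plan = call_llm("planner", f"Outline a book about {topic} as numbered chapters.")
    chapters = []
    for line in plan.splitlines():
        if not line.strip():
            continue
        # Writer agent drafts each chapter; editor agent revises it.
        draft = call_llm("writer", f"Write the chapter: {line}")
        chapters.append(call_llm("editor", f"Edit for clarity:\n{draft}"))
    return {"topic": topic, "plan": plan, "chapters": chapters}

print(make_book("deep-sea exploration")["plan"])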
oleonardosilva
A C# project: a simple system for managing book records through the command prompt. It provides a user-friendly command-line interface for book-related operations such as adding, retrieving, editing, and deleting book records in a database.
tachibana-shin
SmartDelete is a KOReader plugin that enhances file deletion safety by prompting users before deleting files, with special attention to book data such as bookmarks and reading history.
This is the official GitHub repository for Chen Caimao's Prompt Engineering textbook. Here you'll find the accompanying PPTs and all the prompts from the book.
krishnakumarsingh
Setting Up an Angular 2 Environment Using TypeScript, Npm and Webpack

This Angular 2 tutorial serves anyone looking to get up and running with Angular 2 and TypeScript fast. Last week I read the great Angular 2 book from Ninja Squad, so I figured it was time to put pen to paper and start building Angular 2 applications using TypeScript. In this tutorial, we'll learn how to start an Angular 2 project from scratch and go further by building a development environment with Webpack and more.

Getting Started

1. Developing and Building a TypeScript App

Let's start by building our first Angular 2 application using TypeScript. First, make sure you have Node.js and npm installed; refer to the official website for more information about the installation procedure. Then, install TypeScript globally via npm by running the following command in your terminal:

npm install -g typescript

Once it is installed, we'll set up our TypeScript project by creating a tsconfig.json file in which we specify the compilation options used to compile our project. The typescript npm module we just installed comes with a compiler, named tsc, that we are going to use to initialize a fresh TypeScript project:

# Create a new project folder and go inside it
mkdir angular2-starter && cd angular2-starter

# Generate the TypeScript configuration file
tsc --init --target es5 --sourceMap --experimentalDecorators --emitDecoratorMetadata

Running tsc --init creates the tsconfig.json in our project directory, which looks like this:

{
  "compilerOptions": {
    "target": "es5",
    "sourceMap": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "module": "commonjs",
    "noImplicitAny": false,
    "outDir": "built"
  },
  "exclude": [
    "node_modules"
  ]
}

Along with the --init parameter, we passed the following options to the compiler:

--target es5: specify that we want our code transpiled to ECMAScript 5, so it can run in every browser.
--sourceMap: generate source map files. They help when debugging ES5 code against the original TypeScript code in the Chrome devtools.
--experimentalDecorators and --emitDecoratorMetadata: allow the use of TypeScript decorators.

Also notice that options such as module, outDir or rootDir have been added by default. Feel free to read the documentation for more compiler options. Now hit npm init in your terminal and fill in some answers (you can accept the default for all the prompts).
Then, install angular2 by running the following command:

npm install --save angular2

You should now have a package.json file that looks like the following:

{
  "name": "angular-starter",
  "version": "1.0.0",
  "description": "An Angular 2 Starter kit featuring Angular 2, TypeScript, and Webpack by EloquentWebApp",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Grégory D'Angelo",
  "license": "ISC",
  "dependencies": {
    "angular2": "^2.0.0-beta.17",
    "es6-shim": "^0.35.1",
    "reflect-metadata": "^0.1.2",
    "rxjs": "^5.0.0-beta.6",
    "zone.js": "^0.6.17"
  }
}

As you can see, angular2 comes with the following dependencies:

reflect-metadata: used to enable dependency injection through decorators.
es6-shim and es6-promise: libraries for ES6 compatibility and support for the ES6 Promise.
rxjs: a set of libraries for reactive programming.
zone.js: an implementation of zones for JavaScript, inspired by Dart. Angular 2 uses it to detect changes efficiently.

The fundamental settings are now in place. Let's create our first Angular 2 application.

2. Creating our First Component

The first step is to create a TypeScript file at the root folder and name it app.component.ts. Our application itself will be a component. We'll use the @Component decorator by importing it from 'angular2/core'. That's all we need to create our Angular 2 component:

import { Component } from 'angular2/core';

@Component()
export class AppComponent {
}

Prefixing the class with this decorator tells Angular that this class is an Angular component. In Angular 2, components are a fundamental concept: they are the way we define views and control the logic on the page. Here's how to do it:

import { Component } from 'angular2/core';

@Component({
  selector: 'app',
  template: '<h1>Hello, Angular2</h1>'
})
export class AppComponent {
}

We passed a configuration object to the component decorator. This object has two properties: selector and template. The selector is the HTML element that Angular will look for; every time it finds one, Angular instantiates a new instance of our AppComponent class and places our template there. As you may also notice, we export our class at the end. This is our first class, so we'll keep it empty for simplicity.

3. Bootstrapping the App

Finally, we need to launch our application. For this, we only need two things: Angular's browser bootstrap method and the application root component that we just wrote. To separate concerns, create a new file, bootstrap.ts, and import the dependencies:

///<reference path="node_modules/angular2/typings/browser.d.ts" />

import { bootstrap } from 'angular2/platform/browser';
import { AppComponent } from './app.component';

bootstrap(AppComponent)
  .catch(err => console.log(err));

As you can see, we call the bootstrap method, passing in our component, AppComponent. Moreover, as stated in the CHANGELOG since 2.0.0-beta.6 (2016-02-11), we may need to add the <reference ... /> line at the top of our bootstrap.ts file when using --target=es5. Feel free to check the CHANGELOG for more details. Last but not least, we need to create an index.html file to host our Angular application.
Start by pasting the following lines:

<!DOCTYPE html>
<html>
  <head></head>
  <body>
    <app>Loading...</app>
  </body>
</html>

For now, it's a very basic HTML file in which we've put the selector <app> that corresponds to our application root component. But we need two more things to launch our application. First, we need a tool to load application and library modules; for now, we'll use SystemJS as the module loader (we'll see later in this tutorial how to install and configure Webpack for our Angular 2 project). Second, we need to include the script dependencies in our HTML file. Let's do it together step by step.

First, install SystemJS:

npm install --save systemjs

Then, load it statically in the index.html just after angular2-polyfills (angular2-polyfills is essentially a mashup of zone.js and reflect-metadata):

<!DOCTYPE html>
<html>
  <head>
    <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
    <script src="node_modules/systemjs/dist/system.js"></script>
  </head>
  <body>
    <app>Loading...</app>
  </body>
</html>

Finally, we need to tell SystemJS where our bootstrap module is and where to find the dependencies used in our application (angular2 and rxjs):

<!DOCTYPE html>
<html>
  <head>
    <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
    <script src="node_modules/systemjs/dist/system.js"></script>
    <script>
      System.config({
        // we want to import modules without writing .js at the end
        defaultJSExtensions: true,
        // the app will need the following dependencies
        map: {
          'angular2': 'node_modules/angular2',
          'rxjs': 'node_modules/rxjs'
        }
      });
      // and to finish, let's boot the app!
      System.import('built/bootstrap');
    </script>
  </head>
  <body>
    <app>Loading...</app>
  </body>
</html>

OK! We're done with the settings, and we can now compile and run our application. To handle common tasks, include the following npm scripts in the package.json file:

{
  "name": "angular-starter",
  "version": "1.0.0",
  "description": "An Angular 2 Starter kit featuring Angular 2, TypeScript, and Webpack by EloquentWebApp",
  "main": "index.js",
  "scripts": {
    "start": "concurrently \"npm run watch\" \"npm run serve\"",
    "watch": "tsc -w",
    "serve": "lite-server"
  },
  "author": "Grégory D'Angelo",
  "license": "ISC",
  "dependencies": {
    "angular2": "^2.0.0-beta.11",
    "es6-promise": "^3.1.2",
    "es6-shim": "^0.35.0",
    "reflect-metadata": "^0.1.2",
    "rxjs": "^5.0.0-beta.2",
    "systemjs": "^0.19.24",
    "zone.js": "^0.6.5"
  },
  "devDependencies": {
    "concurrently": "^2.2.0",
    "lite-server": "^2.2.2"
  }
}

The watch script runs the TypeScript compiler in watch mode: it watches TypeScript files and triggers recompilation on changes. The serve script runs an HTTP server to serve our application and refresh the browser on changes; I've used lite-server for that purpose. Install it via npm:

npm install --save-dev lite-server

And the start script runs the previous two scripts concurrently, using the concurrently npm package:

npm install --save-dev concurrently

So, run npm start and open your browser to http://localhost:3000. You should briefly see "Loading...", and then "Hello, Angular2" should appear. Congratulations! We've just finished the first part of this tutorial. Keep going to see how to set up a build system using Webpack for working with TypeScript.
Creating a Useful Project Structure and Toolchain

1. Project Structure

So far, we've built a basic Angular 2 application with the minimum required dependencies and tools. In this section, we'll refactor our project structure to ease the development of more complex Angular 2 applications. By the end of this section, you will be able to build your own starter kit to get up and running with Angular 2 and TypeScript fast. More importantly, you will understand how to structure your project and what each tool is responsible for. Sounds great, doesn't it? Let's do it!

The first step is to revamp the file structure of our project. Here's how it will look:

angular2-starter/
├──src/
|  ├──bootstrap.ts
|  ├──index.html
|  ├──polyfills.ts
│  │
│  ├──app/
│  │  ├──app.component.ts
│  │  └──app.html
│  │
│  └──assets/
│     └──css/
│        └──styles.css
│
├──tsconfig.json
├──typings.json
├──package.json
│
└──webpack.config.js

There are some new files, but don't worry: we will dive into each one of them in this section. What's important for now is to understand that we'll use the component approach in our application project. This is a great way to ensure maintainable code by encapsulating our behavior logic: each component will live in a single folder, with each concern as a file: style, template, specs, e2e, and component class. Before going further, let's reorganize our files as follows:

angular2-starter/
├──src/
|  ├──bootstrap.ts
|  ├──index.html
│  │
│  └──app/
│     └──app.component.ts
│
├──tsconfig.json
└──package.json

You should also update the path in bootstrap.ts:

///<reference path="../node_modules/angular2/typings/browser.d.ts" />

import { bootstrap } from 'angular2/platform/browser';
import { AppComponent } from './app/app.component';

bootstrap(AppComponent)
  .catch(err => console.log(err));

Great! Now it's time to dive into Webpack.

2. Installing and Configuring Webpack

Webpack will replace SystemJS, which we have used until now as a module loader. If you need an explanation of what Webpack is for, I highly recommend taking a look at the official documentation. In short, webpack is a module bundler: "It takes modules with dependencies and generates static assets representing those modules."

Start by installing webpack, webpack-dev-server, and the webpack plugins locally, and save them as project dependencies:

# First, remove SystemJS. We don't need it anymore.
npm uninstall --save systemjs

# Then, install TypeScript locally
npm install --save typescript

# Finally, install webpack
npm install --save-dev webpack webpack-dev-server html-webpack-plugin copy-webpack-plugin

Now, let's configure Webpack for our development workflow. For this purpose we'll create a webpack.config.js.
Add the following settings in your config file:

var path = require('path');
var webpack = require('webpack');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var HtmlWebpackPlugin = require('html-webpack-plugin');

var ENV = process.env.ENV = 'development';
var HOST = process.env.HOST || 'localhost';
var PORT = process.env.PORT || 8080;

var metadata = {
  host: HOST,
  port: PORT,
  ENV: ENV
};

/*
 * config
 */
module.exports = {
  // static data for index.html
  metadata: metadata,
  // Emit SourceMap to enhance debugging
  devtool: 'source-map',
  devServer: {
    // This is required for webpack-dev-server. The path should
    // be an absolute path to your build destination.
    outputPath: path.join(__dirname, 'dist')
  },
  // Switch loaders to debug mode
  debug: true,
  // Our angular app
  entry: {
    'polyfills': path.resolve(__dirname, "src/polyfills.ts"),
    'app': path.resolve(__dirname, "src/bootstrap.ts")
  },
  // Config for our build file
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: '[name].bundle.js',
    sourceMapFilename: '[name].map'
  },
  resolve: {
    // Add `.ts` and `.tsx` as resolvable extensions.
    extensions: ['', '.ts', '.tsx', '.js']
  },
  module: {
    loaders: [
      // Support for .ts files
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        include: [ path.resolve(__dirname, "./src") ]
      },
      // Support for .html as raw text
      {
        test: /\.html$/,
        loader: 'raw-loader',
        exclude: [ path.resolve(__dirname, "src/index.html") ]
      }
    ]
  },
  plugins: [
    // Copy static assets to the build folder
    new CopyWebpackPlugin([{ from: 'src/assets', to: 'assets' }]),
    // Generate the index.html
    new HtmlWebpackPlugin({ template: 'src/index.html' })
  ]
}

The entry section specifies the entry files of our Angular application. It is used by Webpack as the starting point for the bundling process. As you may notice, we specify our bootstrap file but also a new file named polyfills.ts, which contains all the dependencies needed to run our Angular 2 application. Before, we put those dependencies directly inside our index.html; they now live in a separate file:

// polyfills.ts
import 'angular2/bundles/angular2-polyfills';
import 'rxjs';

The output section tells Webpack what to do after completing the bundling process. In our case, the dist/ directory is used to output the bundled files, named app.bundle.js and polyfills.bundle.js, with the corresponding source map files. The ts-loader is used to transpile the TypeScript files that match the defined test regex; in our case it processes all files with a .ts or .tsx extension. The raw-loader is used to load html files as raw text, so we can write our component views in separate files and include them in our components afterward. You need to install both loaders using npm:

npm install --save-dev ts-loader raw-loader

The CopyWebpackPlugin is used to copy the static assets into the build folder. Finally, the metadata values are used by the HtmlWebpackPlugin to generate our index.html file: in the index.html, we use the host and port data to run the webpack dev server in the development environment.
See how this file has been simplified:

<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="./assets/css/styles.css" />
  </head>
  <body>
    <app>Loading...</app>
  </body>
  <% if (webpackConfig.metadata.ENV === 'development') { %>
  <!-- Webpack Dev Server -->
  <script src="http://<%= webpackConfig.metadata.host %>:<%= webpackConfig.metadata.port %>/webpack-dev-server.js"></script>
  <% } %>
</html>

Feel free to add your own stylesheet files under /src/assets/css as I did with my styles.css file. You should now have a project structured like so:

angular2-starter/
├──src/
|  ├──bootstrap.ts
|  ├──index.html
|  ├──polyfills.ts
│  │
│  ├──app/
│  │  └──app.component.ts
│  │
│  └──assets/
│     └──css/
│        └──styles.css
│
├──tsconfig.json
├──package.json
│
└──webpack.config.js

We need one more thing to be all set. As mentioned before, we will write the views in separate files. So, create an app.html file and refer to it in your app.component.ts:

<!-- app.html -->
<h1>Hello, Angular2</h1>

// app.component.ts
import { Component } from 'angular2/core';

@Component({
  selector: 'app',
  template: require('./app.html')
})
export class AppComponent {
}

Finally, we have to install the Node typings definition to be able to require files inside our components, as we did for the view. To do so, run the following commands, and complete the tsconfig.json to exclude some files:

# Install the Typings CLI utility
npm install typings --global

# Init the typings.json
typings init

# Install typings
typings install env~node --global --save

As you can notice in my tsconfig.json file below, there are some extra options that are Atom IDE-specific features. Feel free to read the documentation about them: atom-typescript/tsconfig.json.

{
  "compilerOptions": {
    "target": "es5",
    "sourceMap": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "module": "commonjs",
    "noImplicitAny": false,
    "outDir": "built",
    "rootDir": "."
  },
  "exclude": [
    "node_modules",
    "typings/main.d.ts",
    "typings/main"
  ],
  "filesGlob": [
    "./src/**/*.ts",
    "!./node_modules/**/*.ts",
    "typings/browser.d.ts"
  ],
  "compileOnSave": false,
  "buildOnSave": false
}

If you want to know more about typings, read the following pages on GitHub: Microsoft/TypeScript and typings/typings. OK! Now it's time to build and run our application using Webpack. Let's create some npm scripts to handle those operations.

3. Using npm as a Task Runner

We will simply use npm to define and run our tasks: one for the build process, and one for running the development server.

{
  "name": "angular2-starter",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "build:dev": "webpack --progress --colors",
    "server:dev": "webpack-dev-server --hot --progress --colors --content-base dist/",
    "start": "npm run server:dev"
  },
  ...
}

We can now run npm start and visit http://localhost:8080 to see our app running.
SmisoMazibuko
The Book of Prompts: A collection of prompts for large language models like ChatGPT. Collaborate, share, and refine prompts for various domains. Join the community and help build the ultimate resource!
altontonn
A library app that prompts the user to add and delete books in a list. Built with JavaScript, CSS, and HTML.
devalexandre
No description available