Found 162 repositories (showing 30)
ayushdixit487
Complete, high-quality practice tests of 50 questions each will help you master the Confluent Certified Developer for Apache Kafka (CCDAK) exam. These practice exams will help you assess whether you are fully prepared for the final examination. Any question you fail will include an explanation to help you understand the correct answer. You will need to acquire knowledge of Apache Kafka using the courses from the Apache Kafka Series.

Topics covered in the practice exams:
- Apache Kafka Fundamentals: Brokers, Topics, Zookeeper, Producers, Consumers, Configurations, Security
- Kafka Extended APIs: Kafka Connect & Kafka Streams
- Confluent Components: Confluent Schema Registry, Confluent REST Proxy, KSQL

Are you ready to assess yourself and practice for the Confluent Certified Developer for Apache Kafka (CCDAK) exam? See you in the course! Note: these are not exam dumps. The questions' main goal is to assess your knowledge of Apache Kafka & the Confluent Ecosystem.

What you'll learn:
- Practice for the CCDAK exam with practice exams of 50 questions each
- Consolidate and validate your learning from the Apache Kafka Series courses
asyncapi
The project's aim is to develop a chatbot that can help people create spec documents without knowing the specification. To start, the bot will consume the spec and JSON schema and serve the user as an expert. Based on a set of questions and answers, it will generate an AsyncAPI spec document according to the use cases.
HlaingPhyoAung
Usage: python sqlmap.py [options]

Options:
  -h, --help            Show basic help message and exit
  -hh                   Show advanced help message and exit
  --version             Show program's version number and exit
  -v VERBOSE            Verbosity level: 0-6 (default 1)

  Target:
    At least one of these options has to be provided to define the target(s)

    -d DIRECT           Connection string for direct database connection
    -u URL, --url=URL   Target URL (e.g. "http://www.site.com/vuln.php?id=1")
    -l LOGFILE          Parse target(s) from Burp or WebScarab proxy log file
    -x SITEMAPURL       Parse target(s) from remote sitemap(.xml) file
    -m BULKFILE         Scan multiple targets given in a textual file
    -r REQUESTFILE      Load HTTP request from a file
    -g GOOGLEDORK       Process Google dork results as target URLs
    -c CONFIGFILE       Load options from a configuration INI file

  Request:
    These options can be used to specify how to connect to the target URL

    --method=METHOD     Force usage of given HTTP method (e.g. PUT)
    --data=DATA         Data string to be sent through POST
    --param-del=PARA..  Character used for splitting parameter values
    --cookie=COOKIE     HTTP Cookie header value
    --cookie-del=COO..  Character used for splitting cookie values
    --load-cookies=L..  File containing cookies in Netscape/wget format
    --drop-set-cookie   Ignore Set-Cookie header from response
    --user-agent=AGENT  HTTP User-Agent header value
    --random-agent      Use randomly selected HTTP User-Agent header value
    --host=HOST         HTTP Host header value
    --referer=REFERER   HTTP Referer header value
    -H HEADER, --hea..  Extra header (e.g. "X-Forwarded-For: 127.0.0.1")
    --headers=HEADERS   Extra headers (e.g. "Accept-Language: fr\nETag: 123")
    --auth-type=AUTH..  HTTP authentication type (Basic, Digest, NTLM or PKI)
    --auth-cred=AUTH..  HTTP authentication credentials (name:password)
    --auth-file=AUTH..  HTTP authentication PEM cert/private key file
    --ignore-401        Ignore HTTP Error 401 (Unauthorized)
    --proxy=PROXY       Use a proxy to connect to the target URL
    --proxy-cred=PRO..  Proxy authentication credentials (name:password)
    --proxy-file=PRO..  Load proxy list from a file
    --ignore-proxy      Ignore system default proxy settings
    --tor               Use Tor anonymity network
    --tor-port=TORPORT  Set Tor proxy port other than default
    --tor-type=TORTYPE  Set Tor proxy type (HTTP (default), SOCKS4 or SOCKS5)
    --check-tor         Check to see if Tor is used properly
    --delay=DELAY       Delay in seconds between each HTTP request
    --timeout=TIMEOUT   Seconds to wait before timeout connection (default 30)
    --retries=RETRIES   Retries when the connection timeouts (default 3)
    --randomize=RPARAM  Randomly change value for given parameter(s)
    --safe-url=SAFEURL  URL address to visit frequently during testing
    --safe-post=SAFE..  POST data to send to a safe URL
    --safe-req=SAFER..  Load safe HTTP request from a file
    --safe-freq=SAFE..  Test requests between two visits to a given safe URL
    --skip-urlencode    Skip URL encoding of payload data
    --csrf-token=CSR..  Parameter used to hold anti-CSRF token
    --csrf-url=CSRFURL  URL address to visit to extract anti-CSRF token
    --force-ssl         Force usage of SSL/HTTPS
    --hpp               Use HTTP parameter pollution method
    --eval=EVALCODE     Evaluate provided Python code before the request (e.g.
                        "import hashlib;id2=hashlib.md5(id).hexdigest()")

  Optimization:
    These options can be used to optimize the performance of sqlmap

    -o                  Turn on all optimization switches
    --predict-output    Predict common queries output
    --keep-alive        Use persistent HTTP(s) connections
    --null-connection   Retrieve page length without actual HTTP response body
    --threads=THREADS   Max number of concurrent HTTP(s) requests (default 1)

  Injection:
    These options can be used to specify which parameters to test for,
    provide custom injection payloads and optional tampering scripts

    -p TESTPARAMETER    Testable parameter(s)
    --skip=SKIP         Skip testing for given parameter(s)
    --skip-static       Skip testing parameters that not appear dynamic
    --dbms=DBMS         Force back-end DBMS to this value
    --dbms-cred=DBMS..  DBMS authentication credentials (user:password)
    --os=OS             Force back-end DBMS operating system to this value
    --invalid-bignum    Use big numbers for invalidating values
    --invalid-logical   Use logical operations for invalidating values
    --invalid-string    Use random strings for invalidating values
    --no-cast           Turn off payload casting mechanism
    --no-escape         Turn off string escaping mechanism
    --prefix=PREFIX     Injection payload prefix string
    --suffix=SUFFIX     Injection payload suffix string
    --tamper=TAMPER     Use given script(s) for tampering injection data

  Detection:
    These options can be used to customize the detection phase

    --level=LEVEL       Level of tests to perform (1-5, default 1)
    --risk=RISK         Risk of tests to perform (1-3, default 1)
    --string=STRING     String to match when query is evaluated to True
    --not-string=NOT..  String to match when query is evaluated to False
    --regexp=REGEXP     Regexp to match when query is evaluated to True
    --code=CODE         HTTP code to match when query is evaluated to True
    --text-only         Compare pages based only on the textual content
    --titles            Compare pages based only on their titles

  Techniques:
    These options can be used to tweak testing of specific SQL injection
    techniques

    --technique=TECH    SQL injection techniques to use (default "BEUSTQ")
    --time-sec=TIMESEC  Seconds to delay the DBMS response (default 5)
    --union-cols=UCOLS  Range of columns to test for UNION query SQL injection
    --union-char=UCHAR  Character to use for bruteforcing number of columns
    --union-from=UFROM  Table to use in FROM part of UNION query SQL injection
    --dns-domain=DNS..  Domain name used for DNS exfiltration attack
    --second-order=S..  Resulting page URL searched for second-order response

  Fingerprint:
    -f, --fingerprint   Perform an extensive DBMS version fingerprint

  Enumeration:
    These options can be used to enumerate the back-end database management
    system information, structure and data contained in the tables.
    Moreover you can run your own SQL statements

    -a, --all           Retrieve everything
    -b, --banner        Retrieve DBMS banner
    --current-user      Retrieve DBMS current user
    --current-db        Retrieve DBMS current database
    --hostname          Retrieve DBMS server hostname
    --is-dba            Detect if the DBMS current user is DBA
    --users             Enumerate DBMS users
    --passwords         Enumerate DBMS users password hashes
    --privileges        Enumerate DBMS users privileges
    --roles             Enumerate DBMS users roles
    --dbs               Enumerate DBMS databases
    --tables            Enumerate DBMS database tables
    --columns           Enumerate DBMS database table columns
    --schema            Enumerate DBMS schema
    --count             Retrieve number of entries for table(s)
    --dump              Dump DBMS database table entries
    --dump-all          Dump all DBMS databases tables entries
    --search            Search column(s), table(s) and/or database name(s)
    --comments          Retrieve DBMS comments
    -D DB               DBMS database to enumerate
    -T TBL              DBMS database table(s) to enumerate
    -C COL              DBMS database table column(s) to enumerate
    -X EXCLUDECOL       DBMS database table column(s) to not enumerate
    -U USER             DBMS user to enumerate
    --exclude-sysdbs    Exclude DBMS system databases when enumerating tables
    --pivot-column=P..  Pivot column name
    --where=DUMPWHERE   Use WHERE condition while table dumping
    --start=LIMITSTART  First query output entry to retrieve
    --stop=LIMITSTOP    Last query output entry to retrieve
    --first=FIRSTCHAR   First query output word character to retrieve
    --last=LASTCHAR     Last query output word character to retrieve
    --sql-query=QUERY   SQL statement to be executed
    --sql-shell         Prompt for an interactive SQL shell
    --sql-file=SQLFILE  Execute SQL statements from given file(s)

  Brute force:
    These options can be used to run brute force checks

    --common-tables     Check existence of common tables
    --common-columns    Check existence of common columns

  User-defined function injection:
    These options can be used to create custom user-defined functions

    --udf-inject        Inject custom user-defined functions
    --shared-lib=SHLIB  Local path of the shared library

  File system access:
    These options can be used to access the back-end database management
    system underlying file system

    --file-read=RFILE   Read a file from the back-end DBMS file system
    --file-write=WFILE  Write a local file on the back-end DBMS file system
    --file-dest=DFILE   Back-end DBMS absolute filepath to write to

  Operating system access:
    These options can be used to access the back-end database management
    system underlying operating system

    --os-cmd=OSCMD      Execute an operating system command
    --os-shell          Prompt for an interactive operating system shell
    --os-pwn            Prompt for an OOB shell, Meterpreter or VNC
    --os-smbrelay       One click prompt for an OOB shell, Meterpreter or VNC
    --os-bof            Stored procedure buffer overflow exploitation
    --priv-esc          Database process user privilege escalation
    --msf-path=MSFPATH  Local path where Metasploit Framework is installed
    --tmp-path=TMPPATH  Remote absolute path of temporary files directory

  Windows registry access:
    These options can be used to access the back-end database management
    system Windows registry

    --reg-read          Read a Windows registry key value
    --reg-add           Write a Windows registry key value data
    --reg-del           Delete a Windows registry key value
    --reg-key=REGKEY    Windows registry key
    --reg-value=REGVAL  Windows registry key value
    --reg-data=REGDATA  Windows registry key value data
    --reg-type=REGTYPE  Windows registry key value type

  General:
    These options can be used to set some general working parameters

    -s SESSIONFILE      Load session from a stored (.sqlite) file
    -t TRAFFICFILE      Log all HTTP traffic into a textual file
    --batch             Never ask for user input, use the default behaviour
    --binary-fields=..  Result fields having binary values (e.g. "digest")
    --charset=CHARSET   Force character encoding used for data retrieval
    --crawl=CRAWLDEPTH  Crawl the website starting from the target URL
    --crawl-exclude=..  Regexp to exclude pages from crawling (e.g. "logout")
    --csv-del=CSVDEL    Delimiting character used in CSV output (default ",")
    --dump-format=DU..  Format of dumped data (CSV (default), HTML or SQLITE)
    --eta               Display for each output the estimated time of arrival
    --flush-session     Flush session files for current target
    --forms             Parse and test forms on target URL
    --fresh-queries     Ignore query results stored in session file
    --hex               Use DBMS hex function(s) for data retrieval
    --output-dir=OUT..  Custom output directory path
    --parse-errors      Parse and display DBMS error messages from responses
    --save=SAVECONFIG   Save options to a configuration INI file
    --scope=SCOPE       Regexp to filter targets from provided proxy log
    --test-filter=TE..  Select tests by payloads and/or titles (e.g. ROW)
    --test-skip=TEST..  Skip tests by payloads and/or titles (e.g. BENCHMARK)
    --update            Update sqlmap

  Miscellaneous:
    -z MNEMONICS        Use short mnemonics (e.g. "flu,bat,ban,tec=EU")
    --alert=ALERT       Run host OS command(s) when SQL injection is found
    --answers=ANSWERS   Set question answers (e.g. "quit=N,follow=N")
    --beep              Beep on question and/or when SQL injection is found
    --cleanup           Clean up the DBMS from sqlmap specific UDF and tables
    --dependencies      Check for missing (non-core) sqlmap dependencies
    --disable-coloring  Disable console output coloring
    --gpage=GOOGLEPAGE  Use Google dork results from specified page number
    --identify-waf      Make a thorough testing for a WAF/IPS/IDS protection
    --skip-waf          Skip heuristic detection of WAF/IPS/IDS protection
    --mobile            Imitate smartphone through HTTP User-Agent header
    --offline           Work in offline mode (only use session data)
    --page-rank         Display page rank (PR) for Google dork results
    --purge-output      Safely remove all content from output directory
    --smart             Conduct thorough tests only if positive heuristic(s)
    --sqlmap-shell      Prompt for an interactive sqlmap shell
    --wizard            Simple wizard interface for beginner users
ManmeetBains
A Hotel Reservation System database was designed in Oracle SQL. The database was created and loaded with data using SQL queries. To answer business questions and create reports on the hotel business, data was extracted using complex SQL queries. The database was also connected to Tableau to create visualizations that make the key data insights easy to understand. The final project report contains the database schema, the SQL code, the SQL queries used to extract data, and the visualizations generated from Tableau.
smith-jj
# Employee Database: A Mystery in Two Parts

## Background

It is a beautiful spring day, and it is two weeks since you have been hired as a new data engineer at Pewlett Hackard. Your first major task is a research project on employees of the corporation from the 1980s and 1990s. All that remain of the database of employees from that period are six CSV files.

In this assignment, you will design the tables to hold data in the CSVs, import the CSVs into a SQL database, and answer questions about the data. In other words, you will perform:

1. Data Modeling
2. Data Engineering
3. Data Analysis

## Instructions

#### Data Modeling

Inspect the CSVs and sketch out an ERD of the tables. Feel free to use a tool like [http://www.quickdatabasediagrams.com](http://www.quickdatabasediagrams.com).

#### Data Engineering

* Use the information you have to create a table schema for each of the six CSV files. Remember to specify data types, primary keys, foreign keys, and other constraints.
* Import each CSV file into the corresponding SQL table.

#### Data Analysis

Once you have a complete database, do the following:

1. List the following details of each employee: employee number, last name, first name, gender, and salary.
2. List employees who were hired in 1986.
3. List the manager of each department with the following information: department number, department name, the manager's employee number, last name, first name, and start and end employment dates.
4. List the department of each employee with the following information: employee number, last name, first name, and department name.
5. List all employees whose first name is "Hercules" and last names begin with "B."
6. List all employees in the Sales department, including their employee number, last name, first name, and department name.
7. List all employees in the Sales and Development departments, including their employee number, last name, first name, and department name.
8. In descending order, list the frequency count of employee last names, i.e., how many employees share each last name.

## Bonus (Optional)

As you examine the data, you are overcome with a creeping suspicion that the dataset is fake. You surmise that your boss handed you spurious data in order to test the data engineering skills of a new employee. To confirm your hunch, you decide to take the following steps to generate a visualization of the data, with which you will confront your boss:

1. Import the SQL database into Pandas. (Yes, you could read the CSVs directly in Pandas, but you are, after all, trying to prove your technical mettle.) This step may require some research. Feel free to use the code below to get started. Be sure to make any necessary modifications for your username, password, host, port, and database name:

   ```python
   from sqlalchemy import create_engine

   engine = create_engine('postgresql://localhost:5432/<your_db_name>')
   connection = engine.connect()
   ```

   * Consult [SQLAlchemy documentation](https://docs.sqlalchemy.org/en/latest/core/engines.html#postgresql) for more information.
   * If using a password, do not upload your password to your GitHub repository. See [https://www.youtube.com/watch?v=2uaTPmNvH0I](https://www.youtube.com/watch?v=2uaTPmNvH0I) and [https://martin-thoma.com/configuration-files-in-python/](https://martin-thoma.com/configuration-files-in-python/) for more information.

2. Create a bar chart of average salary by title.
3. You may also include a technical report in markdown format, in which you outline the data engineering steps taken in the homework assignment.

## Epilogue

Evidence in hand, you march into your boss's office and present the visualization. With a sly grin, your boss thanks you for your work. On your way out of the office, you hear the words, "Search your ID number." You look down at your badge to see that your employee ID number is 499942.

## Submission

* Create an image file of your ERD.
* Create a `.sql` file of your table schemata.
* Create a `.sql` file of your queries.
* (Optional) Create a Jupyter Notebook of the bonus analysis.
* Create and upload a repository with the above files to GitHub and post a link on BootCamp Spot.
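The bonus step can be sketched end to end. This is a minimal, self-contained illustration rather than the assignment's actual solution: the table and column names (`titles`, `salaries`, `emp_no`, `title`, `salary`) are assumptions inferred from the question list, and SQLite stands in for PostgreSQL so the snippet runs without a database server:

```python
# Sketch of the bonus step: load salary data into Pandas and compute
# average salary by title. Schema and sample rows are illustrative
# assumptions; SQLite replaces PostgreSQL for a self-contained demo.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE titles (emp_no INTEGER, title TEXT);
    CREATE TABLE salaries (emp_no INTEGER, salary INTEGER);
    INSERT INTO titles VALUES (1, 'Engineer'), (2, 'Engineer'), (3, 'Manager');
    INSERT INTO salaries VALUES (1, 60000), (2, 70000), (3, 90000);
""")

# Read the joined tables into a DataFrame, then aggregate in Pandas.
df = pd.read_sql(
    "SELECT t.title, s.salary FROM titles t "
    "JOIN salaries s ON t.emp_no = s.emp_no",
    conn,
)
avg_by_title = df.groupby("title")["salary"].mean()
print(avg_by_title)
# For the chart itself: avg_by_title.plot(kind="bar") with matplotlib.
```

Against the real database you would pass the SQLAlchemy `engine` from the snippet above to `pd.read_sql` instead of the SQLite connection.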
AndrejaCH
In this project, I use ERDs and schemas to design databases and write intermediate-level SQL queries to answer important business questions for the company’s HR department. The result is a well-structured database with implemented constraints and foreign and primary keys.
kristinvmartin
This is a dimensional data warehouse that seeks to provide insights into the raw data that FEMA provides publicly for its Individual and Housing Program. I used Jupyter Notebook, Python (Pandas, NumPy, Pyodbc), and SQL to perform ETL on the dataset, loading the warehouse based on the schema I designed. I created visualizations using Tableau from the data warehouse to provide targeted insights that answered the key business questions of the project (see README file). Note: If etl_IHP.ipynb is throwing an error on load, it can be viewed using nbviewer by following this link: https://nbviewer.jupyter.org/github/kristinvmartin/datawarehouse-fema-bu/blob/main/etl_IHP.ipynb, or you can view the CODEONLY file, which has the scripts without the output.
presidentmanny
It is a beautiful spring day, and it is two weeks since you have been hired as a new data engineer at Pewlett Hackard. Your first major task is a research project on employees of the corporation from the 1980s and 1990s. All that remain of the database of employees from that period are six CSV files. In this assignment, you will design the tables to hold data in the CSVs, import the CSVs into a SQL database, and answer questions about the data. In other words, you will perform:

1. Data Modeling
2. Data Engineering
3. Data Analysis

Before You Begin

Create a new folder in your homework repository called sql-challenge. Inside your local git repository, create a directory for the SQL challenge. Use a folder name to correspond to the challenge: EmployeeSQL. Add your files to this folder. Push the above changes to GitHub.

Instructions

Data Modeling: Inspect the CSVs and sketch out an ERD of the tables. Feel free to use a tool like http://www.quickdatabasediagrams.com.

Data Engineering: Use the information you have to create a table schema for each of the six CSV files. Remember to specify data types, primary keys, foreign keys, and other constraints. Import each CSV file into the corresponding SQL table.

Data Analysis: Once you have a complete database, do the following:

1. List the following details of each employee: employee number, last name, first name, gender, and salary.
2. List employees who were hired in 1986.
3. List the manager of each department with the following information: department number, department name, the manager's employee number, last name, first name, and start and end employment dates.
4. List the department of each employee with the following information: employee number, last name, first name, and department name.
5. List all employees whose first name is "Hercules" and last names begin with "B."
6. List all employees in the Sales department, including their employee number, last name, first name, and department name.
7. List all employees in the Sales and Development departments, including their employee number, last name, first name, and department name.
8. In descending order, list the frequency count of employee last names, i.e., how many employees share each last name.

Bonus (Optional)

As you examine the data, you are overcome with a creeping suspicion that the dataset is fake. You surmise that your boss handed you spurious data in order to test the data engineering skills of a new employee. To confirm your hunch, you decide to take the following steps to generate a visualization of the data, with which you will confront your boss:

1. Import the SQL database into Pandas. (Yes, you could read the CSVs directly in Pandas, but you are, after all, trying to prove your technical mettle.) This step may require some research. Feel free to use the code below to get started. Be sure to make any necessary modifications for your username, password, host, port, and database name:

   from sqlalchemy import create_engine
   engine = create_engine('postgresql://localhost:5432/<your_db_name>')
   connection = engine.connect()

   Consult SQLAlchemy documentation for more information. If using a password, do not upload your password to your GitHub repository. See https://www.youtube.com/watch?v=2uaTPmNvH0I and https://martin-thoma.com/configuration-files-in-python/ for more information.

2. Create a histogram to visualize the most common salary ranges for employees.
3. Create a bar chart of average salary by title.

Epilogue

Evidence in hand, you march into your boss's office and present the visualization. With a sly grin, your boss thanks you for your work. On your way out of the office, you hear the words, "Search your ID number." You look down at your badge to see that your employee ID number is 499942.

Submission

Create an image file of your ERD. Create a .sql file of your table schemata. Create a .sql file of your queries. (Optional) Create a Jupyter Notebook of the bonus analysis. Create and upload a repository with the above files to GitHub and post a link on BootCamp Spot.
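Two of the analysis questions above can be sketched as queries. The `employees` schema here is an assumption inferred from the question list (not the real CSV layout), and an in-memory SQLite database stands in for the SQL database so the example is self-contained:

```python
# Sketch of two analysis queries: employees hired in 1986, and the
# frequency count of last names in descending order. The schema and
# sample rows are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (
        emp_no INTEGER PRIMARY KEY,
        last_name TEXT, first_name TEXT, hire_date TEXT
    );
    INSERT INTO employees VALUES
        (1, 'Baek', 'Hercules', '1986-03-01'),
        (2, 'Smith', 'Ada', '1990-07-15'),
        (3, 'Smith', 'Grace', '1986-11-30');
""")

# Question 2: employees who were hired in 1986
hired_1986 = conn.execute(
    "SELECT first_name, last_name FROM employees "
    "WHERE hire_date LIKE '1986%'"
).fetchall()

# Question 8: frequency count of last names, descending
name_counts = conn.execute(
    "SELECT last_name, COUNT(*) AS freq FROM employees "
    "GROUP BY last_name ORDER BY freq DESC"
).fetchall()

print(hired_1986)
print(name_counts)
```

The same SQL would run unchanged against the PostgreSQL database once the CSVs are imported.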
Joeeel17
SQL schema and answers to the questions in Danny Ma's #8WeekSQLChallenge
Jeffreylarbiakor
In this challenge, I write SQL queries to answer some questions about Wave's business. The PostgreSQL schema in this repository is a slightly simplified glimpse of some tables from the actual PostgreSQL schema.
richardgourley
A simplified introductory data analysis report showing how SQL queries can answer questions from a data warehouse with a star schema. The report examines why server downtime happens and how it can be fixed.
MeetChauhan03
Auto-FAQ-Schema-Generator generates FAQ schema for your website's FAQ page: just enter the FAQ page URL, the question class name, and the answer class name, and it generates the schema for you in seconds.
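The output such a tool produces is a schema.org FAQPage JSON-LD block. A minimal sketch of that generation step (names and sample Q/A strings are illustrative, not the repo's actual code; the scraping of the page by class name is omitted):

```python
import json

def build_faq_schema(faqs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }

schema = build_faq_schema([
    ("What is FAQ schema?", "Structured data that marks up question/answer pairs."),
    ("Why add it?", "It makes pages eligible for rich results in search."),
])
print(json.dumps(schema, indent=2))
```

In practice the `faqs` list would be filled by fetching the given URL and extracting the text of elements matching the supplied question and answer class names.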
jacquie0583
Creating ERD diagrams, performing data modeling, and completing analysis of an employee database in pgAdmin using SQL: using QuickDBD and schemas to design the database, and writing intermediate-level SQL queries to answer important business questions for the company's HR department. Utilizes PostgreSQL, a database system, to load, build, and host company data via pgAdmin; the result is a well-designed database with reporting capabilities.
samuelnoye
We have put together a table of restaurants called nomnom and we need your help to answer some questions. Use the SQL commands you just learned and find the best dinner spots in the city. The schema of this table is available here. Let’s begin!
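A query over a table like `nomnom` can be sketched with an in-memory SQLite database (the column names and sample rows here are assumptions for illustration, not the course's actual data):

```python
import sqlite3

# In-memory stand-in for the nomnom restaurants table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE nomnom (
        name TEXT, neighborhood TEXT, cuisine TEXT,
        review REAL, price TEXT, health TEXT
    )
""")
conn.executemany(
    "INSERT INTO nomnom VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("Peter Luger Steak House", "Williamsburg", "steak", 4.4, "$$$$", "A"),
        ("Jongro BBQ", "Midtown", "korean", 4.5, "$$", "A"),
        ("Pocha 32", "Midtown", "korean", 4.0, "$$", "A"),
    ],
)

# Best dinner spots: highest-reviewed restaurants first.
rows = conn.execute(
    "SELECT name, review FROM nomnom ORDER BY review DESC LIMIT 2"
).fetchall()
print(rows)  # [('Jongro BBQ', 4.5), ('Peter Luger Steak House', 4.4)]
```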
xxl4tomxu98
A question-and-answer app exploring a complicated database schema and unique functionality. Includes preliminary system design covering the schema, read and write efficiency, and data storage considerations, sized against the estimated number of users and application lifetime. Implements a search algorithm suited to the app's functionality, i.e., matching topics and keywords semantically. Both Docker-container-based and Heroku-dyno-based deployments on Heroku are presented.
MaxineXiong
This project builds a cloud-based ETL pipeline for Sparkify to move data to a cloud data warehouse. It extracts song and user activity data from AWS S3, stages it in Redshift, and transforms it into a star-schema data model with fact and dimension tables, enabling efficient querying to answer business questions.
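The kind of star-schema query such a model enables can be sketched against an in-memory SQLite stand-in (table and column names here are assumptions, not Sparkify's exact Redshift DDL):

```python
import sqlite3

# Minimal star schema: a songplays fact table joined to a users dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_users (user_id INTEGER PRIMARY KEY, level TEXT);
    CREATE TABLE fact_songplays (
        songplay_id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES dim_users(user_id),
        song_id TEXT
    );
    INSERT INTO dim_users VALUES (1, 'free'), (2, 'paid');
    INSERT INTO fact_songplays VALUES (10, 1, 'SOA'), (11, 2, 'SOB'), (12, 2, 'SOC');
""")

# Business question: how many plays come from each subscription level?
rows = conn.execute("""
    SELECT u.level, COUNT(*) AS plays
    FROM fact_songplays f JOIN dim_users u USING (user_id)
    GROUP BY u.level ORDER BY plays DESC
""").fetchall()
print(rows)  # [('paid', 2), ('free', 1)]
```

The fact table holds the events and foreign keys; dimensions hold descriptive attributes, so aggregations like this need only one narrow join.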
divyar2630
The aim of the project is to build a relational database management system for Airbnb. With a well-designed schema we can search, retrieve, and analyze data to gather insights that answer relevant questions, such as: What is the occupancy rate of a listing? What is the monthly income of each listing? Which discrete time slots are available for a property? Which hosts qualify as superhosts under given eligibility criteria?
aminura
Background: It is a beautiful spring day, and it is two weeks since you were hired as a new data engineer at Pewlett Hackard. Your first major task is a research project on employees of the corporation from the 1980s and 1990s. All that remains of the employee database from that period are six CSV files. In this assignment, you will design tables to hold the data in the CSVs, import the CSVs into a SQL database, and answer questions about the data. In other words, you will perform data modeling, data engineering, and data analysis. Instructions. Data Modeling: Inspect the CSVs and sketch out an ERD of the tables. Feel free to use a tool like http://www.quickdatabasediagrams.com. Data Engineering: Use the information you have to create a table schema for each of the six CSV files. Remember to specify data types, primary keys, foreign keys, and other constraints. Import each CSV file into the corresponding SQL table. Data Analysis: Once you have a complete database, do the following:
- List the following details of each employee: employee number, last name, first name, gender, and salary.
- List employees who were hired in 1986.
- List the manager of each department with the following information: department number, department name, and the manager's employee number, last name, first name, and start and end employment dates.
- List the department of each employee with the following information: employee number, last name, first name, and department name.
- List all employees whose first name is "Hercules" and whose last name begins with "B".
- List all employees in the Sales department, including their employee number, last name, first name, and department name.
- List all employees in the Sales and Development departments, including their employee number, last name, first name, and department name.
- List, in descending order, the frequency count of employee last names, i.e., how many employees share each last name.
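One of the assignment's queries (employees hired in 1986) can be sketched with an in-memory SQLite database; the column names follow the CSV fields the assignment describes, but the exact DDL and sample rows are assumptions:

```python
import sqlite3

# Cut-down employees table for the "hired in 1986" question.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employees (
        emp_no INTEGER PRIMARY KEY,
        last_name TEXT,
        first_name TEXT,
        hire_date TEXT  -- ISO-8601 date string
    )
""")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)", [
    (10001, "Facello", "Georgi", "1986-06-26"),
    (10002, "Simmel", "Bezalel", "1985-11-21"),
    (10003, "Bamford", "Parto", "1986-08-28"),
])

# Filter on the year component of the hire date.
rows = conn.execute("""
    SELECT emp_no, last_name, first_name
    FROM employees
    WHERE strftime('%Y', hire_date) = '1986'
    ORDER BY emp_no
""").fetchall()
print(rows)  # [(10001, 'Facello', 'Georgi'), (10003, 'Bamford', 'Parto')]
```

In PostgreSQL the equivalent filter would use `EXTRACT(YEAR FROM hire_date) = 1986` on a proper `DATE` column.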
abdallahmokarb
SQL Server database schema for an examination system, designed to manage students, instructors, courses, exams, questions, and student answers in an academic setting. The database supports tracking student performance, exam scheduling, and question management with multiple-choice (MCQ) and true/false (TF) question types.
This plugin helps implement the schema.org/QAPage, schema.org/Question, and schema.org/Answer specs from schema.org
fairpoints
collect and serve schema.org Q&A (QAPage, Question, Answer) structured data
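The shape of the structured data such a service collects and serves is a QAPage with a Question as its main entity; a minimal sketch with placeholder values (not this project's actual payload):

```python
import json

# One Question with an accepted answer and one suggested answer,
# following the schema.org QAPage/Question/Answer types.
qa_page = {
    "@context": "https://schema.org",
    "@type": "QAPage",
    "mainEntity": {
        "@type": "Question",
        "name": "How do I mark up a Q&A page?",
        "answerCount": 2,
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Use QAPage with a Question as the mainEntity.",
            "upvoteCount": 5,
        },
        "suggestedAnswer": [
            {
                "@type": "Answer",
                "text": "Validate the markup with a structured-data testing tool.",
                "upvoteCount": 1,
            }
        ],
    },
}
print(json.dumps(qa_page, indent=2))
```

QAPage (one focal question per page) is distinct from FAQPage (many site-authored question/answer pairs); search engines treat the two types differently.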
vaibhavpalkar21
Zomato dummy database design - Designed and built a dummy database for Zomato with multiple tables and retrieved information from it to answer questions about the users and delivery riders. MySQL was used to design the schema and write the queries. Includes the schema and ER diagram.
FafCerebrate
For the SQL project, we were tasked with creating 2-3 business questions that use the HR schema from Oracle's Sample Schemas. We then created SQL scripts to answer the questions.
denverprophitjr
A WordPress Q&A plugin fork of https://wordpress.org/plugins/dw-question-answer/ to include schema Questions & Yoast SEO Breadcrumbs.
Muzammil-GulamGaus-Shaikh
SQL-based Sales Analytics project using a star-schema dataset to perform EDA (Exploratory Data Analysis) and answer business questions.
Saad-al-abeed
This repo contains a retail sales dataset integrated into a database schema; after some data preprocessing, standard business questions are answered against it.
Saad-al-abeed
This repo contains an SQL library management system integrated into a database schema; after some data preprocessing, standard business questions are answered against it.
salmantamimi
Analyzed a music store dataset using PostgreSQL and pgAdmin 4, answering a set of real-world questions such as finding the best customers, top cities, popular genres, and longest tracks. I wrote and ran SQL queries to get insights; the uploaded files contain the questions, code, and schema.
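The "best customers" question reduces to an aggregate join; a sketch against an in-memory SQLite stand-in (the `customer`/`invoice` table names mirror a common Chinook-style music store schema, which is an assumption about this dataset):

```python
import sqlite3

# Total spend per customer, highest first.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE invoice (
        invoice_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        total REAL
    );
    INSERT INTO customer VALUES (1, 'Ana'), (2, 'Ben');
    INSERT INTO invoice VALUES (1, 1, 9.9), (2, 2, 15.0), (3, 1, 12.0);
""")
rows = conn.execute("""
    SELECT c.name, ROUND(SUM(i.total), 2) AS spent
    FROM invoice i JOIN customer c USING (customer_id)
    GROUP BY c.customer_id
    ORDER BY spent DESC
""").fetchall()
print(rows)  # [('Ana', 21.9), ('Ben', 15.0)]
```

The same query runs unchanged on PostgreSQL; the other questions (top cities, popular genres, longest tracks) follow the same join-then-aggregate pattern over additional tables.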