Found 853 repositories (showing 30)
grafana
The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
sherifabdlnaby
🐳 Elastic Stack (ELK) v9+ on Docker with Compose. Pre-configured out of the box to enable Logging, Metrics, APM, Alerting, ML, and SIEM features. Up with a Single Command.
slog-rs
Structured, contextual, extensible, composable logging for Rust
debezium
Examples for running Debezium (Configuration, Docker Compose files etc.). Please log issues at https://github.com/debezium/dbz/issues.
Lifailon
TUI for viewing logs from journald, auditd, file system, Docker and Podman containers, Compose stacks and Kubernetes pods with support for log highlighting and several filtering modes.
inconshreveable
Structured, composable logging for Go
apssouza22
A full microservice architecture with Java, Spring Cloud, Log management with ELK, Server load balancing with Nginx, Infrastructure management with Docker-compose, JMX application monitoring, JWT, Aspect OP, Distributed events with Kafka, Event Sourcing, CQRS, REST, Web Sockets, Continuous deploy with Jenkins and more
JuliaLogging
Composable Loggers for the Julia Logging StdLib
DiamondEdge1
Kotlin multiplatform logging. High performance, composable and simple to use.
neuecc
Logs are event streams. EtwStream provides In-Process and Out-of-Process ObservableEventListener. Everything can be composed and output anywhere using Reactive Extensions.
WoLfulus
Truly highly composable logging utility
RiFi2k
Docker compose a VM to get LetsEncrypt / NGINX proxy auto provisioning, ELK logging, Prometheus / Grafana monitoring, Portainer GUI, and more...
abhir98
Project Summary: This project was developed for the Computer Security course of my academic degree. Basically, it encrypts your files in the background using AES-256-CTR, a strong encryption algorithm, using RSA-4096 to secure the exchange with the server, optionally over the Tor SOCKS5 proxy. The base functionality is what you see in the famous ransomware Cryptolocker. The project is composed of three parts: the server, the malware and the unlocker. The server stores the victim's identification key along with the encryption key used by the malware. The malware encrypts any payload with an RSA-4096 (RSA-OAEP-4096 + SHA256) public key before sending it to the server. This approach, with the optional Tor proxy and a .onion domain, allows you to hide your server almost completely.

Features: Runs in the background (or not). Encrypts files using AES-256-CTR (counter mode) with a random IV for each file. Multithreaded. RSA-4096 to secure the client/server communication. Includes an unlocker. Optional Tor proxy support. Uses an AES-CTR cipher with stream encryption to avoid loading an entire file into memory. Walks all drives by default. Docker image for compilation.

Building the binaries: DON'T RUN ransomware.exe ON YOUR PERSONAL MACHINE, EXECUTE IT ONLY IN A TEST ENVIRONMENT! I'm not responsible if you accidentally encrypt all of your disks! First of all, download the project outside your $GOPATH: git clone github.com/mauri870/ransomware && cd ransomware. If you have Docker, skip to the next section. You need Go at least 1.11.2 with $GOPATH/bin in your $PATH and $GOROOT pointing to your Go installation folder. For me: export GOPATH=~/gopath; export PATH=$PATH:$GOPATH/bin; export GOROOT=/usr/local/go. Building the project requires a lot of steps, like the RSA key generation, building three binaries and embedding manifest files, so let's let make do the job: make deps followed by make. You can build the server for Windows with make -e GOOS=windows. Docker: ./build-docker.sh make.

Config Parameters: You can change some of the configs during compilation. Instead of running only make, you can use the following variables: HIDDEN='-H windowsgui' # optional. If present the malware will run in the background. USE_TOR=true # optional. If present the malware will download the Tor proxy and use it to contact the server. SERVER_HOST=mydomain.com # the domain used to connect to your server. localhost, 0.0.0.0 and 127.0.0.1 work too if you run the server on the same machine as the malware. SERVER_PORT=8080 # the server port; if using a domain you can set this to 80. GOOS=linux # the target OS to compile the server for. Eg: darwin, linux, windows. Example: make -e USE_TOR=true SERVER_HOST=mydomain.com SERVER_PORT=80 GOOS=darwin. The SERVER_ variables above only apply to the malware. The server has a --port flag that you can use to change the port it will listen on. DON'T RUN ransomware.exe ON YOUR PERSONAL MACHINE, EXECUTE IT ONLY IN A TEST ENVIRONMENT! I'm not responsible if you accidentally encrypt all of your disks!

Step by Step Demo and How it Works: For this demo I'll use two machines, my personal Linux machine and a Windows 10 VM. For the sake of simplicity, I have a folder mapped to the VM, so I can compile on my Linux machine and copy to the VM. In this demo we will use the ngrok tool, which allows us to expose our server using a domain, but you can use your own domain or IP address if you want. We are also going to enable the Tor transport, so .onion domains will work without problems.
First of all, let's start our external domain: ngrok http 8080. This command will give us a URL like http://2af7161c.ngrok.io. Keep this command running, otherwise the malware won't reach our server. Let's compile the binaries (remember to replace the domain): make -e SERVER_HOST=2af7161c.ngrok.io SERVER_PORT=80 USE_TOR=true. The SERVER_PORT needs to be 80 in this case, since ngrok redirects 2af7161c.ngrok.io:80 to your local server port 8080. After the build, binaries called ransomware.exe and unlocker.exe, along with a folder called server, will be generated in the bin folder. The execution of ransomware.exe and unlocker.exe (even if you use a different GOOS variable during compilation) is locked to Windows machines only. Enter the server directory from another terminal and start it: cd bin/server && ./server --port 8080. To make sure that all is working correctly, make an HTTP request to http://2af7161c.ngrok.io: curl http://2af7161c.ngrok.io. If you see an OK and some logs in the server output, you are ready to go. Now move ransomware.exe and unlocker.exe to the VM along with some dummy files to test the malware. You can take a look at cmd/common.go to see some configuration options like file extensions to match, directories to scan, skipped folders, and the maximum size for a file to match, among others. Then simply run ransomware.exe and see the magic happen 😄. The window that you see can be hidden using the HIDDEN option described in the compilation section.

After downloading, extracting and starting the Tor proxy, the malware waits until the Tor bootstrapping is done and then proceeds with the key exchange with the server. The client/server handshake takes place and the client payload, encrypted with an RSA-4096 public key, must be correctly decrypted on the server. The victim identification and encryption keys are stored in a Go embedded database called BoltDB (it also persists on disk). When that is completed we get into the find, match and encrypt phase: up to N workers (one per core) start to encrypt files matched by the defined patterns. This process is really quick and in seconds all of your files will be gone. The encryption key exchanged with the server is used to encrypt all of your files. Each file gets a random IV, generated individually and saved as the first 16 bytes of the encrypted content. The algorithm used is AES-256-CTR, an AES cipher with a streaming mode of operation, such that the file size is left intact. The only two sources of information about what just happened are READ_TO_DECRYPT.html and FILES_ENCRYPTED.html on the Desktop.

In theory, to decrypt your files you need to send an amount of BTC to the attacker's wallet, followed by a contact sending your ID (located in the file created on the desktop). If the attacker can confirm your payment, they will possibly (or maybe not) return your encryption key and the unlocker.exe, and you can use them to recover your files. This exchange can be accomplished in several ways and WILL NOT be implemented in this project for obvious reasons. Let's suppose you get your encryption key back. To retrieve the correct key, point to the following URL: curl -k http://2af7161c.ngrok.io/api/keys/:id, where :id is your identification stored in the file on the desktop. Afterwards, run unlocker.exe by double-clicking it and follow the instructions. That's it, you got your files back 😄

The server has only two endpoints: POST api/keys/add - used by the malware to persist new keys. Some verifications are made, like checking the RSA authenticity.
Returns 204 (empty content) in case of success or a JSON error. GET api/keys/:id - id is a 32-character parameter representing an id already persisted. Returns a JSON document containing the encryption key, or a JSON error. The end: as you can see, building a functional ransomware with some of the best existing algorithms is not difficult, and anyone with some programming skills can build one in any programming language.
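The per-file encryption scheme described above (AES-256-CTR with a random IV saved as the first 16 bytes of the output, streamed so the whole file never has to fit in memory) is easy to illustrate. The project itself is written in Go; the sketch below is a minimal Python illustration of that layout using the `cryptography` package, with hypothetical file names and a throwaway key, and is not the project's actual code.

```python
# Sketch of the described layout: a fresh 16-byte IV per file, written as the
# first 16 bytes of the output, with AES-256-CTR encrypting the rest in chunks.
# Assumes the "cryptography" package; key and paths are placeholders.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_file(src: str, dst: str, key: bytes, chunk_size: int = 64 * 1024) -> None:
    iv = os.urandom(16)                        # random IV generated per file
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.write(iv)                         # IV stored as the first 16 bytes
        while chunk := fin.read(chunk_size):   # stream: never load the whole file
            fout.write(enc.update(chunk))
        fout.write(enc.finalize())

def decrypt_file(src: str, dst: str, key: bytes, chunk_size: int = 64 * 1024) -> None:
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        iv = fin.read(16)                      # recover the IV from the header
        dec = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
        while chunk := fin.read(chunk_size):
            fout.write(dec.update(chunk))
        fout.write(dec.finalize())

# Example usage (32-byte key = AES-256). CTR is a stream mode, so apart from
# the prepended IV the ciphertext is the same size as the plaintext.
# key = os.urandom(32)
# encrypt_file("report.docx", "report.docx.enc", key)
```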
stcarrez
Ada Utility Library - Composing streams, processes, logs, serialization, encoders and more
gnokoheat
ELK with Filebeat by Docker-compose - Simple & Easy way to file logging
skl256
Ready stack of Grafana, Prometheus, Pushgateway, Loki, Promtail for collecting and visualizing logs from docker swarm, docker compose and docker services. Logs are collected from all containers in the swarm cluster without the need to install additional software on the nodes in the cluster.
livingdocsIO
Grafana, Prometheus, Loki, Jaeger and Opentelemetry in an easy-to-use docker compose example setup to extract logs, metrics and traces in development.
swarmstack
Docker compose for Grafana Loki, like Prometheus but for logs
Kostiantyn-Salnykov
Initial FastAPI project with SQLAlchemy (asyncpg), Alembic, Pydantic, Pytest, Poetry, Gunicorn, Docker, docker-compose, Ruff, coverage, logging
Nneji123
This project demonstrates the integration of FastAPI, SQLModel, PostgreSQL, Redis, Next.js, Docker, and Docker Compose, showcasing essential API features like rate limiting, pagination, logging, and metrics using Prometheus and Grafana.
anilrajrimal1
A real-time, interactive CLI dashboard for monitoring Docker containers. View status, health, CPU, and memory usage with a clean, color-coded interface. Supports docker-compose grouping and hotkeys for logs, restarts, and shell access — all from the terminal.
mercerheather476
AppAuth for Android is a client SDK for communicating with [OAuth 2.0](https://tools.ietf.org/html/rfc6749) and [OpenID Connect](http://openid.net/specs/openid-connect-core-1_0.html) providers. It strives to directly map the requests and responses of those specifications, while following the idiomatic style of the implementation language. In addition to mapping the raw protocol flows, convenience methods are available to assist with common tasks like performing an action with fresh tokens. The library follows the best practices set out in [RFC 8252 - OAuth 2.0 for Native Apps](https://tools.ietf.org/html/rfc8252), including using [Custom Tabs](https://developer.chrome.com/multidevice/android/customtabs) for authorization requests. In line with this, `WebView` is explicitly *not* supported, for usability and security reasons. The library also supports the [PKCE](https://tools.ietf.org/html/rfc7636) extension to OAuth, which was created to secure authorization codes in public clients when custom URI scheme redirects are used. The library is friendly to other extensions (standard or otherwise) with the ability to handle additional parameters in all protocol requests and responses. A talk providing an overview of using the library for enterprise single sign-on (produced by Google) can be found here: [Enterprise SSO with Chrome Custom Tabs](https://www.youtube.com/watch?v=DdQTXrk6YTk). ## Download AppAuth for Android is available on [MavenCentral](https://search.maven.org/search?q=g:net.openid%20appauth) ```groovy implementation 'net.openid:appauth:<version>' ``` ## Requirements AppAuth supports Android API 16 (Jellybean) and above. Browsers which provide a custom tabs implementation are preferred by the library, but not required. Both Custom URI Schemes (all supported versions of Android) and App Links (Android M / API 23+) can be used with the library. In general, AppAuth can work with any Authorization Server (AS) that supports native apps as documented in [RFC 8252](https://tools.ietf.org/html/rfc8252), either through custom URI scheme redirects, or App Links. AS's that assume all clients are web-based or require clients to maintain confidentiality of the client secrets may not work well. ## Demo app A demo app is contained within this repository. For instructions on how to build and configure this app, see the [demo app readme](https://github.com/openid/AppAuth-Android/blob/master/app/README.md). ## Conceptual overview AppAuth encapsulates the authorization state of the user in the [net.openid.appauth.AuthState](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthState.java) class, and communicates with an authorization server through the use of the [net.openid.appauth.AuthorizationService](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationService.java) class. AuthState is designed to be easily persistable as a JSON string, using the storage mechanism of your choice (e.g. [SharedPreferences](https://developer.android.com/training/basics/data-storage/shared-preferences.html), [sqlite](https://developer.android.com/training/basics/data-storage/databases.html), or even just [in a file](https://developer.android.com/training/basics/data-storage/files.html)).
AppAuth provides data classes which are intended to model the OAuth2 specification as closely as possible; this provides the greatest flexibility in interacting with a wide variety of OAuth2 and OpenID Connect implementations. Authorizing the user occurs via the user's web browser, and the request is described using instances of [AuthorizationRequest](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationRequest.java). The request is dispatched using [performAuthorizationRequest()](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationService.java#L159) on an AuthorizationService instance, and the response (an [AuthorizationResponse](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationResponse.java) instance) will be dispatched to the activity of your choice, expressed via an Intent. Token requests, such as obtaining a new access token using a refresh token, follow a similar pattern: [TokenRequest](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/TokenRequest.java) instances are dispatched using [performTokenRequest()](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationService.java#L252) on an AuthorizationService instance, and a [TokenResponse](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/TokenResponse.java) instance is returned via a callback. Responses can be provided to the [update()](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthState.java#L367) methods on AuthState in order to track and persist changes to the authorization state. Once in an authorized state, the [performActionWithFreshTokens()](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthState.java#L449) method on AuthState can be used to automatically refresh access tokens as necessary before performing actions that require valid tokens. ## Implementing the authorization code flow It is recommended that native apps use the [authorization code](https://tools.ietf.org/html/rfc6749#section-1.3.1) flow with a public client to gain authorization to access user data. This has the primary advantage for native clients that the authorization flow, which must occur in a browser, only needs to be performed once. This flow is effectively composed of four stages: 1. Discovering or specifying the endpoints to interact with the provider. 2. Authorizing the user, via a browser, in order to obtain an authorization code. 3. Exchanging the authorization code with the authorization server, to obtain a refresh token and/or ID token. 4. Using access tokens derived from the refresh token to interact with a resource server for further access to user data. At each step of the process, an AuthState instance can (optionally) be updated with the result to help with tracking the state of the flow. ### Authorization service configuration First, AppAuth must be instructed how to interact with the authorization service. This can be done either by directly creating an [AuthorizationServiceConfiguration](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationServiceConfiguration.java#L102) instance, or by retrieving an OpenID Connect discovery document. 
Directly specifying an AuthorizationServiceConfiguration involves providing the URIs of the authorization endpoint and token endpoint, and optionally a dynamic client registration endpoint (see "Dynamic client registration" for more info): ```java AuthorizationServiceConfiguration serviceConfig = new AuthorizationServiceConfiguration( Uri.parse("https://idp.example.com/auth"), // authorization endpoint Uri.parse("https://idp.example.com/token")); // token endpoint ``` Where available, using an OpenID Connect discovery document is preferable: ```java AuthorizationServiceConfiguration.fetchFromIssuer( Uri.parse("https://idp.example.com"), new AuthorizationServiceConfiguration.RetrieveConfigurationCallback() { public void onFetchConfigurationCompleted( @Nullable AuthorizationServiceConfiguration serviceConfiguration, @Nullable AuthorizationException ex) { if (ex != null) { Log.e(TAG, "failed to fetch configuration"); return; } // use serviceConfiguration as needed } }); ``` This will attempt to download a discovery document from the standard location under this base URI, `https://idp.example.com/.well-known/openid-configuration`. If the discovery document for your IDP is in some other non-standard location, you can instead provide the full URI as follows: ```java AuthorizationServiceConfiguration.fetchFromUrl( Uri.parse("https://idp.example.com/exampletenant/openid-config"), new AuthorizationServiceConfiguration.RetrieveConfigurationCallback() { ... }); ``` If desired, this configuration can be used to seed an AuthState instance, to persist the configuration easily: ```java AuthState authState = new AuthState(serviceConfig); ``` ### Obtaining an authorization code An authorization code can now be acquired by constructing an AuthorizationRequest, using its Builder. In AppAuth, the builders for each data class accept the mandatory parameters via the builder constructor: ```java AuthorizationRequest.Builder authRequestBuilder = new AuthorizationRequest.Builder( serviceConfig, // the authorization service configuration MY_CLIENT_ID, // the client ID, typically pre-registered and static ResponseTypeValues.CODE, // the response_type value: we want a code MY_REDIRECT_URI); // the redirect URI to which the auth response is sent ``` Other optional parameters, such as the OAuth2 [scope string](https://tools.ietf.org/html/rfc6749#section-3.3) or OpenID Connect [login hint](http://openid.net/specs/openid-connect-core-1_0.html#rfc.section.3.1.2.1), are specified through set methods on the builder: ```java AuthorizationRequest authRequest = authRequestBuilder .setScope("openid email profile https://idp.example.com/custom-scope") .setLoginHint("jdoe@user.example.com") .build(); ``` This request can then be dispatched using one of two approaches: a `startActivityForResult` call using an Intent returned from the `AuthorizationService`, or a `performAuthorizationRequest` call that is given pending intents for the completion and cancelation handling activities.
The `startActivityForResult` approach is simpler to use but may require more processing of the result: ```java private void doAuthorization() { AuthorizationService authService = new AuthorizationService(this); Intent authIntent = authService.getAuthorizationRequestIntent(authRequest); startActivityForResult(authIntent, RC_AUTH); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == RC_AUTH) { AuthorizationResponse resp = AuthorizationResponse.fromIntent(data); AuthorizationException ex = AuthorizationException.fromIntent(data); // ... process the response or exception ... } else { // ... } } ``` If instead you wish to directly transition to another activity on completion or cancelation, you can use `performAuthorizationRequest`: ```java AuthorizationService authService = new AuthorizationService(this); authService.performAuthorizationRequest( authRequest, PendingIntent.getActivity(this, 0, new Intent(this, MyAuthCompleteActivity.class), 0), PendingIntent.getActivity(this, 0, new Intent(this, MyAuthCanceledActivity.class), 0)); ``` The intents may be customized to carry any additional data or flags required for the correct handling of the authorization response. #### Capturing the authorization redirect Once the authorization flow is completed in the browser, the authorization service will redirect to a URI specified as part of the authorization request, providing the response via query parameters. In order for your app to capture this response, it must register with the Android OS as a handler for this redirect URI. We recommend using a custom scheme based redirect URI (i.e. those of form `my.scheme:/path`), as this is the most widely supported across all versions of Android. To avoid conflicts with other apps, it is recommended to configure a distinct scheme using "reverse domain name notation". This can either match your service web domain (in reverse) e.g. `com.example.service` or your package name `com.example.app` or be something completely new as long as it's distinct enough. Using the package name of your app is quite common but it's not always possible if it contains illegal characters for URI schemes (like underscores) or if you already have another handler for that scheme - so just use something else. 
When a custom scheme is used, AppAuth can be easily configured to capture all redirects using this custom scheme through a manifest placeholder: ```groovy android.defaultConfig.manifestPlaceholders = [ 'appAuthRedirectScheme': 'com.example.app' ] ``` Alternatively, the redirect URI can be directly configured by adding an intent-filter for AppAuth's RedirectUriReceiverActivity to your AndroidManifest.xml: ```xml <activity android:name="net.openid.appauth.RedirectUriReceiverActivity" tools:node="replace"> <intent-filter> <action android:name="android.intent.action.VIEW"/> <category android:name="android.intent.category.DEFAULT"/> <category android:name="android.intent.category.BROWSABLE"/> <data android:scheme="com.example.app"/> </intent-filter> </activity> ``` If an HTTPS redirect URI is required instead of a custom scheme, the same approach (modifying your AndroidManifest.xml) is used: ```xml <activity android:name="net.openid.appauth.RedirectUriReceiverActivity" tools:node="replace"> <intent-filter> <action android:name="android.intent.action.VIEW"/> <category android:name="android.intent.category.DEFAULT"/> <category android:name="android.intent.category.BROWSABLE"/> <data android:scheme="https" android:host="app.example.com" android:path="/oauth2redirect"/> </intent-filter> </activity> ``` HTTPS redirects can be secured by configuring the redirect URI as an [app link](https://developer.android.com/training/app-links/index.html) in Android M and above. We recommend that a fallback page be configured at the same address to forward authorization responses to your app via a custom scheme, for older Android devices. #### Handling the authorization response Upon completion of the authorization flow, the completion Intent provided to performAuthorizationRequest will be triggered. The authorization response is provided to this activity via Intent extra data, which can be extracted using the `fromIntent()` methods on AuthorizationResponse and AuthorizationException respectively: ```java public void onCreate(Bundle b) { AuthorizationResponse resp = AuthorizationResponse.fromIntent(getIntent()); AuthorizationException ex = AuthorizationException.fromIntent(getIntent()); if (resp != null) { // authorization completed } else { // authorization failed, check ex for more details } // ... } ``` The response can be provided to the AuthState instance for easy persistence and further processing: ``` authState.update(resp, ex); ``` If the full redirect URI is required in order to extract additional information that AppAuth does not provide, this is also provided to your activity: ```java public void onCreate(Bundle b) { // ... Uri redirectUri = getIntent().getData(); // ... } ``` ### Exchanging the authorization code Given a successful authorization response carrying an authorization code, a token request can be made to exchange the code for a refresh token: ```java authService.performTokenRequest( resp.createTokenExchangeRequest(), new AuthorizationService.TokenResponseCallback() { @Override public void onTokenRequestCompleted( TokenResponse resp, AuthorizationException ex) { if (resp != null) { // exchange succeeded } else { // authorization failed, check ex for more details } } }); ``` The token response can also be used to update an AuthState instance: ```java authState.update(resp, ex); ``` ### Using access tokens Finally, the retrieved access token can be used to interact with a resource server. This can be done directly, by extracting the access token from a token response. 
However, in most cases, it is simpler to use the `performActionWithFreshTokens` utility method provided by AuthState: ```java authState.performActionWithFreshTokens(service, new AuthStateAction() { @Override public void execute( String accessToken, String idToken, AuthorizationException ex) { if (ex != null) { // negotiation for fresh tokens failed, check ex for more details return; } // use the access token to do something ... } }); ``` This also updates the AuthState object with current access, id, and refresh tokens. If you are storing your AuthState in persistent storage, you should write the updated copy in the callback to this method. ### Ending current session Suppose you have a logged-in session and you want to end it. In that case you need: - an `AuthorizationServiceConfiguration` - a valid OpenID ID token, which you should have obtained after authentication - the end-of-session URI, which should be provided within your OpenID service configuration. First you have to build an EndSessionRequest: ```java EndSessionRequest endSessionRequest = new EndSessionRequest.Builder(authorizationServiceConfiguration) .setIdTokenHint(idToken) .setPostLogoutRedirectUri(endSessionRedirectUri) .build(); ``` This request can then be dispatched using one of two approaches: a `startActivityForResult` call using an Intent returned from the `AuthorizationService`, or a `performEndSessionRequest` call that is given pending intents for the completion and cancelation handling activities. The startActivityForResult approach is simpler to use but may require more processing of the result: ```java private void endSession() { AuthorizationService authService = new AuthorizationService(this); Intent endSessionIntent = authService.getEndSessionRequestIntent(endSessionRequest); startActivityForResult(endSessionIntent, RC_END_SESSION); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == RC_END_SESSION) { EndSessionResponse resp = EndSessionResponse.fromIntent(data); AuthorizationException ex = AuthorizationException.fromIntent(data); // ... process the response or exception ... } else { // ... } } ``` If instead you wish to directly transition to another activity on completion or cancelation, you can use `performEndSessionRequest`: ```java AuthorizationService authService = new AuthorizationService(this); authService.performEndSessionRequest( endSessionRequest, PendingIntent.getActivity(this, 0, new Intent(this, MyAuthCompleteActivity.class), 0), PendingIntent.getActivity(this, 0, new Intent(this, MyAuthCanceledActivity.class), 0)); ``` The end session flow also works via the browser mechanism described in the authorization section above. Handling the response, with a transition to another activity, should look as follows: ```java public void onCreate(Bundle b) { EndSessionResponse resp = EndSessionResponse.fromIntent(getIntent()); AuthorizationException ex = AuthorizationException.fromIntent(getIntent()); if (resp != null) { // end session completed } else { // end session failed, check ex for more details } // ... } ``` ### AuthState persistence Instances of `AuthState` keep track of the authorization and token requests and responses. This is the only object that you need to persist to retain the authorization state of the session.
Typically, one would do this by storing the authorization state in SharedPreferences or some other persistent store private to the app: ```java @NonNull public AuthState readAuthState() { SharedPreferences authPrefs = getSharedPreferences("auth", MODE_PRIVATE); String stateJson = authPrefs.getString("stateJson", null); if (stateJson != null) { return AuthState.jsonDeserialize(stateJson); } else { return new AuthState(); } } public void writeAuthState(@NonNull AuthState state) { SharedPreferences authPrefs = getSharedPreferences("auth", MODE_PRIVATE); authPrefs.edit() .putString("stateJson", state.jsonSerializeString()) .apply(); } ``` The demo app has an [AuthStateManager](https://github.com/openid/AppAuth-Android/blob/master/app/java/net/openid/appauthdemo/AuthStateManager.java) type which demonstrates this in more detail. ## Advanced configuration AppAuth provides some advanced configuration options via [AppAuthConfiguration](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AppAuthConfiguration.java) instances, which can be provided to [AuthorizationService](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationService.java) during construction. ### Controlling which browser is used for authorization Some applications require explicit control over which browsers can be used for authorization - for example, to require that Chrome be used for second factor authentication to work, or require that some custom browser is used for authentication in an enterprise environment. Control over which browsers can be used can be achieved by defining a [BrowserMatcher](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/BrowserMatcher.java), and supplying this to the builder of AppAuthConfiguration. A BrowserMatcher is supplied with a [BrowserDescriptor](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/BrowserDescriptor.java) instance, and must decide whether this browser is permitted for the authorization flow. By default, [AnyBrowserMatcher](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/AnyBrowserMatcher.java) is used. For your convenience, utility classes to help define a browser matcher are provided, such as: - [Browsers](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/Browsers.java): contains a set of constants for the official package names and signatures of Chrome, Firefox and Samsung SBrowser. - [VersionedBrowserMatcher](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/VersionedBrowserMatcher.java): will match a browser if it has a matching package name and signature, and a version number within a defined [VersionRange](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/VersionRange.java). This class also provides some static instances for matching Chrome, Firefox and Samsung SBrowser. - [BrowserAllowList](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/BrowserAllowList.java): takes a list of BrowserMatcher instances, and will match a browser if any of these child BrowserMatcher instances signals a match.
- [BrowserDenyList](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/browser/BrowserDenyList.java): the inverse of BrowserAllowList - takes a list of browser matcher instances, and will match a browser if it _does not_ match any of these child BrowserMatcher instances. For instance, in order to restrict the authorization flow to using Chrome or SBrowser as a custom tab: ```java AppAuthConfiguration appAuthConfig = new AppAuthConfiguration.Builder() .setBrowserMatcher(new BrowserAllowList( VersionedBrowserMatcher.CHROME_CUSTOM_TAB, VersionedBrowserMatcher.SAMSUNG_CUSTOM_TAB)) .build(); AuthorizationService authService = new AuthorizationService(context, appAuthConfig); ``` Or, to prevent the use of a buggy version of the custom tabs in Samsung SBrowser: ```java AppAuthConfiguration appAuthConfig = new AppAuthConfiguration.Builder() .setBrowserMatcher(new BrowserDenyList( new VersionedBrowserMatcher( Browsers.SBrowser.PACKAGE_NAME, Browsers.SBrowser.SIGNATURE_SET, true, // when this browser is used via a custom tab VersionRange.atMost("5.3") ))) .build(); AuthorizationService authService = new AuthorizationService(context, appAuthConfig); ``` ### Customizing the connection builder for HTTP requests It can be desirable to customize how HTTP connections are made when performing token requests, for instance to use [certificate pinning](https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning) or to add additional trusted certificate authorities for an enterprise environment. This can be achieved in AppAuth by providing a custom [ConnectionBuilder](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/connectivity/ConnectionBuilder.java) instance. For example, to customize the SSL socket factory used, one could do the following: ```java AppAuthConfiguration appAuthConfig = new AppAuthConfiguration.Builder() .setConnectionBuilder(new ConnectionBuilder() { public HttpURLConnection openConnection(Uri uri) throws IOException { URL url = new URL(uri.toString()); HttpURLConnection connection = (HttpURLConnection) url.openConnection(); if (connection instanceof HttpsURLConnection) { ((HttpsURLConnection) connection).setSSLSocketFactory(MySocketFactory.getInstance()); } return connection; } }) .build(); ``` ### Issues with [ID Token](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/IdToken.java#L118) validation ID Token validation was introduced in `0.8.0` but not all authorization servers or configurations support it correctly. - For testing environments, [setSkipIssuerHttpsCheck](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AppAuthConfiguration.java#L129) can be used to bypass the requirement that the issuer be HTTPS. ```java AppAuthConfiguration appAuthConfig = new AppAuthConfiguration.Builder() .setSkipIssuerHttpsCheck(true) .build(); ``` - For services that don't support nonces, resulting in an **IdTokenException** (`Nonce mismatch`), just set the nonce to `null` on the `AuthorizationRequest`. Please consider **raising an issue** with your Identity Provider and removing this once it is fixed. ```java AuthorizationRequest authRequest = authRequestBuilder .setNonce(null) .build(); ``` ## Dynamic client registration AppAuth supports the [OAuth2 dynamic client registration protocol](https://tools.ietf.org/html/rfc7591).
In order to dynamically register a client, create a [RegistrationRequest](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/RegistrationRequest.java) and dispatch it using [performRegistrationRequest](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationService.java#L278) on your AuthorizationService instance. The registration endpoint can either be defined directly as part of your [AuthorizationServiceConfiguration](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/AuthorizationServiceConfiguration.java), or discovered from an OpenID Connect discovery document. ```java RegistrationRequest registrationRequest = new RegistrationRequest.Builder( serviceConfig, Arrays.asList(redirectUri)) .build(); ``` Requests are dispatched with the help of `AuthorizationService`. As this request is asynchronous the response is passed to a callback: ```java service.performRegistrationRequest( registrationRequest, new AuthorizationService.RegistrationResponseCallback() { @Override public void onRegistrationRequestCompleted( @Nullable RegistrationResponse resp, @Nullable AuthorizationException ex) { if (resp != null) { // registration succeeded, store the registration response AuthState state = new AuthState(resp); //proceed to authorization... } else { // registration failed, check ex for more details } } }); ``` ## Utilizing client secrets (DANGEROUS) We _strongly recommend_ you avoid using static client secrets in your native applications whenever possible. Client secrets derived via a dynamic client registration are safe to use, but static client secrets can be easily extracted from your apps and allow others to impersonate your app and steal user data. If client secrets must be used by the OAuth2 provider you are integrating with, we strongly recommend performing the code exchange step on your backend, where the client secret can be kept hidden. Having said this, in some cases using client secrets is unavoidable. In these cases, a [ClientAuthentication](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/ClientAuthentication.java) instance can be provided to AppAuth when performing a token request. This allows additional parameters (both HTTP headers and request body parameters) to be added to token requests. Two standard implementations of ClientAuthentication are provided: - [ClientSecretBasic](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/ClientSecretBasic.java): includes a client ID and client secret as an HTTP Basic Authorization header. - [ClientSecretPost](https://github.com/openid/AppAuth-Android/blob/master/library/java/net/openid/appauth/ClientSecretPost.java): includes a client ID and client secret as additional request parameters. 
So, in order to send a token request using HTTP basic authorization, one would write: ```java ClientAuthentication clientAuth = new ClientSecretBasic(MY_CLIENT_SECRET); TokenRequest req = ...; authService.performTokenRequest(req, clientAuth, callback); ``` This can also be done when using `performActionWithFreshTokens` on AuthState: ```java ClientAuthentication clientAuth = new ClientSecretPost(MY_CLIENT_SECRET); authState.performActionWithFreshTokens( authService, clientAuth, action); ``` ## Modifying or contributing to AppAuth This project requires the Android SDK for API level 25 (Nougat) to build, though the produced binaries only require API level 16 (Jellybean) to be used. We recommend that you fork and/or clone this repository to make modifications; downloading the source has been known to cause some developers problems. For contributors, see the additional instructions in [CONTRIBUTING.md](https://github.com/openid/AppAuth-Android/blob/master/CONTRIBUTING.md). ### Building from the Command line AppAuth for Android uses Gradle as its build system. In order to build the library and app binaries, run `./gradlew assemble`. The library AAR files are output to `library/build/outputs/aar`, while the demo app is output to `app/build/outputs/apk`. In order to run the tests and code analysis, run `./gradlew check`. ### Building from Android Studio In AndroidStudio, File -> New -> Import project. Select the root folder (the one with the `build.gradle` file).
anndrox
Docker compose for self-hosted brewing. Includes recipes, logs, information, and tools.
Scarlet-Pan
A KMP (Kotlin Multiplatform) logging library with Android-style API. Write once, log everywhere — composable, lazy, and zero-boilerplate.
Example of using OpenTelemetry + Grafana + Tempo (traces) + Loki (logs) + Prometheus (metrics) in a hit-counter REST API based on .NET 10 + ASP.NET Core that uses a PostgreSQL database. Includes a Docker Compose script for creating the test environment plus a k6 script for load testing.
alyiox
Using Docker/Compose to build a logging solution based on EFK (Elasticsearch, Fluent bit, Kibana).
ultranet1
Project Description: A music streaming company wants to introduce more automation and monitoring to their data warehouse ETL pipelines, and they have come to the conclusion that the best tool to achieve this is Apache Airflow. As their Data Engineer, I was tasked with creating a reusable production-grade data pipeline that incorporates data quality checks and allows for easy backfills. Several analysts and Data Scientists rely on the output generated by this pipeline, and it is expected that the pipeline runs daily on a schedule, pulling new data from the source and storing the results at the destination.

Data Description: The source data resides in S3 and needs to be processed in a data warehouse in Amazon Redshift. The source datasets consist of JSON logs that describe user activity in the application and JSON metadata about the songs the users listen to.

Data Pipeline design: At a high level, the pipeline performs the following tasks: Extract data from multiple S3 locations. Load the data into a Redshift cluster. Transform the data into a star schema. Perform data validation and data quality checks. Calculate the most played songs for the specified time interval. Load the result back into S3. (Figure: structure of the Airflow DAG.)

Design Goals: Based on the requirements of our data consumers, our pipeline is required to adhere to the following guidelines: The DAG should not have any dependencies on past runs. On failure, the task is retried 3 times. Retries happen every 5 minutes. Catchup is turned off. Do not email on retry.

Pipeline Implementation: Apache Airflow is a Python framework for programmatically creating workflows in DAGs, e.g. ETL processes, generating reports, and retraining models on a daily basis. The Airflow UI automatically parses our DAG and creates a natural representation for the movement and transformation of data. A DAG is simply a collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies. A DAG describes how you want to carry out your workflow, and Operators determine what actually gets done. By default, Airflow comes with some simple built-in operators like PythonOperator, BashOperator, DummyOperator etc.; however, Airflow lets you extend the features of a BaseOperator and create custom operators. For this project, I developed several custom operators. (Figure: the custom operators.) The description of each of these operators follows: StageToRedshiftOperator: Stages data to a specified Redshift cluster from a specified S3 location. The operator uses templated fields to handle partitioned S3 locations. LoadFactOperator: Loads data into the given fact table by running the provided SQL statement. Supports delete-insert and append style loads. LoadDimensionOperator: Loads data into the given dimension table by running the provided SQL statement. Supports delete-insert and append style loads. SubDagOperator: Two or more operators can be grouped into one task using the SubDagOperator. Here, I am grouping the tasks of checking whether the given table has rows and then running a series of data quality SQL commands. HasRowsOperator: Data quality check to ensure that the specified table has rows. DataQualityOperator: Performs data quality checks by running SQL statements to validate the data. SongPopularityOperator: Calculates the top ten most popular songs for a given interval. The interval is dictated by the DAG schedule. UnloadToS3Operator: Stores the analysis result back to the given S3 location.
Code for each of these operators is located in the plugins/operators directory.

Pipeline Schedule and Data Partitioning: The events data residing in S3 is partitioned by year (2018) and month (11). Our task is to incrementally load the event JSON files and run them through the entire pipeline to calculate song popularity and store the result back into S3. In this manner, we can obtain the top songs per day in an automated fashion using the pipeline. Please note, this is a trivial analysis, but you can imagine other complex queries that follow a similar structure. S3 input events data: s3://<bucket>/log_data/2018/11/ 2018-11-01-events.json 2018-11-02-events.json 2018-11-03-events.json .. 2018-11-28-events.json 2018-11-29-events.json 2018-11-30-events.json. S3 output song popularity data: s3://skuchkula-topsongs/ songpopularity_2018-11-01 songpopularity_2018-11-02 songpopularity_2018-11-03 ... songpopularity_2018-11-28 songpopularity_2018-11-29 songpopularity_2018-11-30. The DAG can be configured by giving it some default_args, which specify the start_date, end_date and the other design choices mentioned above: default_args = { 'owner': 'shravan', 'start_date': datetime(2018, 11, 1), 'end_date': datetime(2018, 11, 30), 'depends_on_past': False, 'email_on_retry': False, 'retries': 3, 'retry_delay': timedelta(minutes=5), 'catchup_by_default': False, 'provide_context': True, }

How to run this project? Step 1: Create an AWS Redshift cluster using either the console or the notebook provided in create-redshift-cluster. Run the notebook to create the AWS Redshift cluster. Make a note of: DWN_ENDPOINT :: dwhcluster.c4m4dhrmsdov.us-west-2.redshift.amazonaws.com DWH_ROLE_ARN :: arn:aws:iam::506140549518:role/dwhRole Step 2: Start Apache Airflow. Run docker-compose up from the directory containing docker-compose.yml. Ensure that you have mapped the volume to point to the location where you have your DAGs. NOTE: You can find details of how to manage Apache Airflow on a Mac here: https://gist.github.com/shravan-kuchkula/a3f357ff34cf5e3b862f3132fb599cf3 (Screenshot: starting Airflow.) Step 3: Configure Apache Airflow hooks. On the left is the S3 connection. The login and password are the IAM user's access key and secret key that you created. Basically, by using these credentials, we are able to read data from S3. On the right is the Redshift connection. These values can be easily gathered from your Redshift cluster's connection details. Step 4: Execute the create-tables-dag. This DAG will create the staging, fact and dimension tables. The reason we need to trigger this manually is that we want to keep it out of the main DAG. Normally, creation of tables can be handled by just triggering a script, but for the sake of illustration, I created a DAG for this and had Airflow trigger it. You can turn off the DAG once it has completed. After running this DAG, you should see all the tables created in AWS Redshift. Step 5: Turn on the load_and_transform_data_in_redshift DAG. As the execution start date is 2018-11-01 with a schedule interval of @daily and the execution end date is 2018-11-30, Airflow will automatically trigger and schedule the DAG runs once per day, 30 times in total. Shown below are the 30 DAG runs, ranging from start_date to end_date, that are triggered by Airflow once per day. (Screenshot: the DAG run schedule.)
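For orientation, here is a minimal sketch of how the design goals and default_args described above would translate into a DAG definition. It reuses the default_args shown in the description; the DummyOperator placeholders and the commented-out operator wiring are illustrative assumptions only — the project's real operators live in plugins/operators and take project-specific parameters not shown here.

```python
# Minimal Airflow 1.x-style DAG skeleton matching the stated design goals:
# daily schedule, no dependence on past runs, 3 retries every 5 minutes,
# catchup off, no email on retry. Task wiring is a placeholder sketch.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

default_args = {
    'owner': 'shravan',
    'start_date': datetime(2018, 11, 1),
    'end_date': datetime(2018, 11, 30),
    'depends_on_past': False,
    'email_on_retry': False,
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    'catchup_by_default': False,
    'provide_context': True,
}

dag = DAG(
    'load_and_transform_data_in_redshift',
    default_args=default_args,
    schedule_interval='@daily',
    catchup=False,  # catchup turned off, per the design goals
)

start = DummyOperator(task_id='begin_execution', dag=dag)
# stage_events = StageToRedshiftOperator(task_id='stage_events', dag=dag, ...)   # hypothetical args
# load_fact = LoadFactOperator(task_id='load_songplays_fact', dag=dag, ...)      # hypothetical args
end = DummyOperator(task_id='stop_execution', dag=dag)

start >> end  # real pipeline: start >> stage_events >> load_fact >> ... >> end
```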
quochuydev
Docker Compose stack for Grafana observability: Tempo traces, Loki logs, Alloy OTLP collector - optimized for Dokploy deployments
composable-logs
Python library to run ML/data pipelines on stateless compute infrastructure (that may be ephemeral or serverless). Please see the documentation site with more details and demo:
vimalgandhi
# Docker Commands, Help & Tips ### Show commands & management commands ``` $ docker ``` ### Docker version info ``` $ docker version ``` ### Show info like number of containers, etc ``` $ docker info ``` # WORKING WITH CONTAINERS ### Create and run a container in the foreground ``` $ docker container run -it -p 80:80 nginx ``` ### Create and run a container in the background ``` $ docker container run --detach --publish 80:80 nginx ``` ### Shorthand ``` $ docker container run -d -p 80:80 nginx ``` ### Naming Containers ``` $ docker container run -d -p 80:80 --name nginx-server nginx ``` ### TIP: WHAT RUN DID - Looked for image called nginx in image cache - If not found in cache, it looks to the default image repo on Dockerhub - Pulled it down (latest version), stored in the image cache - Started it in a new container - We specified to take port 80 on the host and forward to port 80 on the container - We could do "$ docker container run --publish 8000:80 --detach nginx" to use port 8000 - We can specify versions like "nginx:1.09" ### List running containers ``` $ docker container ls ``` OR ``` $ docker ps ``` ### List all containers (even if not running) ``` $ docker container ls -a ``` ### Stop container ``` $ docker container stop [ID] ``` ### Stop all running containers ``` $ docker stop $(docker ps -aq) ``` ### Remove container (cannot remove running containers; must stop first) ``` $ docker container rm [ID] ``` ### To remove a running container use force (-f) ``` $ docker container rm -f [ID] ``` ### Remove multiple containers ``` $ docker container rm [ID] [ID] [ID] ``` ### Remove all containers ``` $ docker rm $(docker ps -aq) ``` ### Get logs (use name or ID) ``` $ docker container logs [NAME] ``` ### List processes running in container ``` $ docker container top [NAME] ``` #### TIP: ABOUT CONTAINERS Docker containers are often compared to virtual machines, but they are actually just processes running on your host OS. On Windows/Mac, Docker runs in a mini-VM, so to see the processes you'll need to connect directly to that. On Linux, however, you can run "ps aux" and see the processes directly # IMAGE COMMANDS ### List the images we have pulled ``` $ docker image ls ``` ### We can also just pull down images ``` $ docker pull [IMAGE] ``` ### Remove image ``` $ docker image rm [IMAGE] ``` ### Remove all images ``` $ docker rmi $(docker images -a -q) ``` #### TIP: ABOUT IMAGES - Images are app binaries and dependencies with metadata about the image data and how to run the image - Images are not a complete OS.
No kernel or kernel modules (drivers) - the host provides the kernel, a big difference from a VM ### Some sample container creation NGINX: ``` $ docker container run -d -p 80:80 --name nginx nginx (-p 80:80 is optional as it runs on 80 by default) ``` APACHE: ``` $ docker container run -d -p 8080:80 --name apache httpd ``` MONGODB: ``` $ docker container run -d -p 27017:27017 --name mongo mongo ``` MYSQL: ``` $ docker container run -d -p 3306:3306 --name mysql --env MYSQL_ROOT_PASSWORD=123456 mysql ``` ## CONTAINER INFO ### View info on container ``` $ docker container inspect [NAME] ``` ### Specific property (--format) ``` $ docker container inspect --format '{{ .NetworkSettings.IPAddress }}' [NAME] ``` ### Performance stats (cpu, mem, network, disk, etc) ``` $ docker container stats [NAME] ``` ## ACCESSING CONTAINERS ### Create new nginx container and bash into it ``` $ docker container run -it --name [NAME] nginx bash ``` - i = interactive - keep STDIN open even if not attached - t = tty - open a prompt **For Git Bash, use "winpty"** ``` $ winpty docker container run -it --name [NAME] nginx bash ``` ### Run/Create Ubuntu container ``` $ docker container run -it --name ubuntu ubuntu ``` **(no bash needed because ubuntu uses bash by default)** ### You can also make it so the container does not stay around when you exit, by using the --rm flag ``` $ docker container run --rm -it --name [NAME] ubuntu ``` ### Access an already created container, start with -ai ``` $ docker container start -ai ubuntu ``` ### Use exec to edit config, etc ``` $ docker container exec -it mysql bash ``` ### Alpine is a very small Linux distro good for Docker ``` $ docker container run -it alpine sh ``` (use sh because it does not include bash) (alpine uses apk for its package manager - you can install bash if you want) # NETWORKING ### "bridge" or "docker0" is the default network ### Get port ``` $ docker container port [NAME] ``` ### List networks ``` $ docker network ls ``` ### Inspect network ``` $ docker network inspect [NETWORK_NAME] ("bridge" is default) ``` ### Create network ``` $ docker network create [NETWORK_NAME] ``` ### Create container on network ``` $ docker container run -d --name [NAME] --network [NETWORK_NAME] nginx ``` ### Connect existing container to network ``` $ docker network connect [NETWORK_NAME] [CONTAINER_NAME] ``` ### Disconnect container from network ``` $ docker network disconnect [NETWORK_NAME] [CONTAINER_NAME] ``` # IMAGE TAGGING & PUSHING TO DOCKERHUB ### Tags are labels that point to an image ID ``` $ docker image ls ``` You'll see that each image has a tag ### Retag existing image ``` $ docker image tag nginx btraversy/nginx ``` ### Upload to dockerhub ``` $ docker image push bradtraversy/nginx ``` ### If denied, do ``` $ docker login ``` ### Add tag to new image ``` $ docker image tag bradtraversy/nginx bradtraversy/nginx:testing ``` ### DOCKERFILE PARTS - FROM - The OS used. Common ones are alpine, debian, ubuntu - ENV - Environment variables - RUN - Run commands/shell scripts, etc - EXPOSE - Ports to expose - CMD - Final command run when you launch a new container from the image - WORKDIR - Sets working directory (also could use 'RUN cd /some/path') - COPY - Copies files from host to container ### Build image from Dockerfile (reponame can be whatever) ### From the same directory as the Dockerfile ``` $ docker image build -t [REPONAME] . ``` #### TIP: CACHE & ORDER - If you re-run the build, it will be quick because everything is cached.
- If you change one line and re-run, that line and everything after it will not be cached - Keep things that change the most toward the bottom of the Dockerfile # EXTENDING DOCKERFILE ### Custom Dockerfile for HTML page with nginx ``` FROM nginx:latest # Extends nginx so everything included in that image is included here WORKDIR /usr/share/nginx/html COPY index.html index.html ``` ### Build image from Dockerfile ``` $ docker image build -t nginx-website . ``` ### Running it ``` $ docker container run -p 80:80 --rm nginx-website ``` ### Tag and push to Dockerhub ``` $ docker image tag nginx-website:latest btraversy/nginx-website:latest ``` ``` $ docker image push bradtraversy/nginx-website ``` # VOLUMES ### Volume - Makes a special location outside of the container's UFS. Used for databases ### Bind Mount - Links a container path to a host path ### Check volumes ``` $ docker volume ls ``` ### Cleanup unused volumes ``` $ docker volume prune ``` ### Pull down mysql image to test ``` $ docker pull mysql ``` ### Inspect and see volume ``` $ docker image inspect mysql ``` ### Run container ``` $ docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql ``` ### Inspect and see volume in container ``` $ docker container inspect mysql ``` #### TIP: Mounts - You will also see the volume under mounts - Container gets its own unique location on the host to store that data - Source: xxx is where it lives on the host ### Check volumes ``` $ docker volume ls ``` **There is no way to tell volumes apart, for instance with 2 mysql containers, so we use named volumes** ### Named volumes (add the -v option) (the name here is mysql-db, which could be anything) ``` $ docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql ``` ### Inspect new named volume ``` docker volume inspect mysql-db ``` # BIND MOUNTS - Cannot be used in a Dockerfile, specified at run time (uses -v as well) - ... run -v /Users/brad/stuff:/path/container (Mac/Linux) - ... run -v //c/Users/brad/stuff:/path/container (Windows) **TIP: Instead of typing out the local path, for the working directory use $(pwd):/path/container - On Windows this may not work unless you are in your users folder** ### Run and be able to edit the index.html file (the local dir should have the Dockerfile and the index.html) ``` $ docker container run -p 80:80 -v $(pwd):/usr/share/nginx/html nginx ``` ### Go into the container and check ``` $ docker container exec -it nginx bash $ cd /usr/share/nginx/html $ ls -al ``` ### You could create a file in the container and it will exist on the host as well ``` $ touch test.txt ``` # DOCKER COMPOSE - Configure relationships between containers - Save our docker container run settings in an easy-to-read file - 2 parts: YAML file (docker-compose.yml) + CLI tool (docker-compose) ### 1. docker-compose.yml - Describes solutions for - containers - networks - volumes ### 2. docker-compose CLI - used for local dev/test automation with YAML files ### Sample compose file (from Bret Fisher's course) ``` version: '2' # same as # docker run -p 80:4000 -v $(pwd):/site bretfisher/jekyll-serve services: jekyll: image: bretfisher/jekyll-serve volumes: - .:/site ports: - '80:4000' ``` ### To run ``` docker-compose up ``` ### You can run in background with ``` docker-compose up -d ``` ### To cleanup ``` docker-compose down ```