Some more readme fixes

florian 2023-07-08 00:22:08 +02:00
parent 04c426857d
commit 1e54e27918


…not absolutely necessary, but it enables you to use the CDN.
![](docs/bucket.png)
3. Make the bucket publicly accessible by connecting a domain name, e.g. connect `stream.mydomain.com`. The contents
of the bucket will then be available under this domain name.
![](docs/public.png)
4. When you have connected a domain name and the proxy setting is selected in Cloudflare, all read access is automatically cached by the CDN.
5. To be able to include the contents in a streaming website, e.g. `zap.stream`, set the CORS settings of the bucket to allow any website to use your stream, i.e. host `*`.
![](docs/cors.png)
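For reference, the R2 dashboard accepts the CORS rules as a JSON policy. A minimal policy that allows read access from any origin might look like the following (tighten `AllowedOrigins` if you want to restrict playback to specific sites):

```json
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET", "HEAD"]
  }
]
```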
Now the Cloudflare setup is finished and we can continue with setting up the docker image.
## Environment
The API key that we saved before (S3 credentials) needs to be given as environment settings:
* Endpoint for S3-compatible storage. Cloudflare uses an endpoint that contains the account ID, e.g. `https://<account-id>.r2.cloudflarestorage.com`.
* Credentials for the S3 bucket to store the stream in.
```
S3_ACCESS_KEY_ID=xxxxxxxxxx
S3_ACCESS_KEY_SECRET=xxxxxxxxxx
S3_BUCKET_NAME=stream
```
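Instead of passing each variable with a separate `-e` flag, you can also collect them in an env file and hand the whole file to docker. This is a sketch; the file name `.env` is just a convention, and the values are placeholders:

```shell
# Write the settings to a local file (values are placeholders, not real credentials)
cat > .env <<'EOF'
S3_ACCESS_KEY_ID=xxxxxxxxxx
S3_ACCESS_KEY_SECRET=xxxxxxxxxx
S3_BUCKET_NAME=stream
EOF

# docker can then load all of them at once:
# docker run --env-file .env -p 1935:1935 -it --rm srs-s3-upload
```

Keep this file out of version control, since it contains your S3 credentials.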
# Usage
Build the docker image
```
docker build -t srs-s3-upload .
```
The current srs setup in the environment file `conf/mysrs.conf` is copied into the docker image.

Run the container interactively:
```
docker run -p 1935:1935 -it --rm \
  -e "S3_ACCESS_KEY_ID=xxxxxxxxxx" \
  -e "S3_ACCESS_KEY_SECRET=xxxxxxxxxx" \
  -e "S3_BUCKET_NAME=stream" \
  srs-s3-upload
```
or as a background process:
```
docker run --name srs-s3-upload -d -p 1935:1935 \
  -e "S3_ACCESS_KEY_ID=xxxxxxxxxx" \
  -e "S3_ACCESS_KEY_SECRET=xxxxxxxxxx" \
  -e "S3_BUCKET_NAME=stream" \
  srs-s3-upload
```
In a streaming application use the following settings, assuming the docker image is run on the local machine:
* Server: `rtmp://localhost`
* Stream Key: `123456` (use any text you like)

When you start the stream, you will see the HLS data being uploaded to the S3 storage bucket. The stream will be accessible from the URL: `https://stream.mydomain.com/123456/stream.m3u8`
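The mapping from stream key to playback URL can be sketched in one line, using the example domain `stream.mydomain.com` from the Cloudflare setup above:

```shell
# Derive the HLS playlist URL from the connected domain and the stream key
STREAM_KEY=123456
PLAYLIST_URL="https://stream.mydomain.com/${STREAM_KEY}/stream.m3u8"
echo "$PLAYLIST_URL"
# → https://stream.mydomain.com/123456/stream.m3u8
```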

The directory in the S3/R2 bucket will be created based on the stream key. It is recommended to *change the stream key* for each stream. This prevents issues with video files from an earlier stream still being cached by the CDN.
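One simple way to get a fresh key per stream is to generate a random one, for example with `openssl` (assumed to be installed; any sufficiently random string works as a key):

```shell
# 8 random bytes -> 16 hex characters, enough to make the key hard to guess
STREAM_KEY=$(openssl rand -hex 8)
echo "$STREAM_KEY"
```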
# Known Limitations
* Currently only streams with 1 camera and 1 format are supported.
* This upload/sync service needs to run on the same machine as SRS, since data is read from the local hard disk. This is the reason it currently runs in the same docker container.