How to Upload a Video to Amazon S3
In web and mobile applications, it's common to provide users with the ability to upload data. Your application may allow users to upload PDFs and documents, or media such as photos or videos. Every modern web server technology has mechanisms to allow this functionality. Typically, in the server-based environment, the process follows this flow:
- The user uploads the file to the application server.
- The application server saves the upload to a temporary space for processing.
- The application transfers the file to a database, file server, or object store for persistent storage.
While the process is simple, it can have significant side effects on the performance of the web server in busier applications. Media uploads are typically large, so transferring these can represent a large share of network I/O and server CPU time. You must also manage the state of the transfer to ensure that the entire object is successfully uploaded, and manage retries and errors.
This is challenging for applications with spiky traffic patterns. For example, a web application that specializes in sending holiday greetings may experience most traffic only around holidays. If thousands of users attempt to upload media around the same time, this requires you to scale out the application server and ensure that there is sufficient network bandwidth available.
By directly uploading these files to Amazon S3, you can avoid proxying these requests through your application server. This can significantly reduce network traffic and server CPU usage, and enable your application server to handle other requests during busy periods. S3 is also highly available and durable, making it an ideal persistent store for user uploads.
In this blog post, I walk through how to implement serverless uploads and show the benefits of this approach. This pattern is used in the Happy Path web application. You can download the code from this blog post in this GitHub repo.
Overview of serverless uploading to S3
When you upload directly to an S3 bucket, you must first request a signed URL from the Amazon S3 service. You can then upload directly using the signed URL. This is a two-step process for your application front end:
- Call an Amazon API Gateway endpoint, which invokes the getSignedURL Lambda function. This gets a signed URL from the S3 bucket.
- Directly upload the file from the application to the S3 bucket.
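The two steps above can be sketched in client-side JavaScript. This is a minimal sketch, not the sample repo's actual frontend code; the `uploadsUrl` and `uploadViaSignedUrl` names are assumptions for illustration:

```javascript
// Build the upload URL from the API endpoint output by the deployment.
const uploadsUrl = (apiEndpoint) => `${apiEndpoint.replace(/\/$/, '')}/uploads`

// Sketch of the two-step flow: request a signed URL, then PUT the file bytes.
async function uploadViaSignedUrl (apiEndpoint, blob) {
  // Step 1: call the API Gateway endpoint to get a signed URL and object key
  const res = await fetch(uploadsUrl(apiEndpoint))
  const { uploadURL, Key } = await res.json()

  // Step 2: upload the file directly to S3, bypassing the application server
  await fetch(uploadURL, { method: 'PUT', body: blob })
  return Key
}
```

Because the second request goes straight to S3, the file bytes never transit your API Gateway endpoint or Lambda function.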
To deploy the S3 uploader example in your AWS account:
- Navigate to the S3 uploader repo and install the prerequisites listed in the README.md.
- In a terminal window, run:
git clone https://github.com/aws-samples/amazon-s3-presigned-urls-aws-sam
cd amazon-s3-presigned-urls-aws-sam
sam deploy --guided
- At the prompts, enter s3uploader for Stack Name and select your preferred Region. Once the deployment is complete, note the APIendpoint output. The API endpoint value is the base URL. The upload URL is the API endpoint with /uploads appended. For example: https://ab123345677.execute-api.us-west-2.amazonaws.com/uploads.
Testing the application
I show two ways to test this application. The first is with Postman, which allows you to directly call the API and upload a binary file with the signed URL. The second is with a basic frontend application that demonstrates how to integrate the API.
To test using Postman:
- First, copy the API endpoint from the output of the deployment.
- In the Postman interface, paste the API endpoint into the box labeled Enter request URL.
- Choose Send.
- After the request is complete, the Body section shows a JSON response. The uploadURL attribute contains the signed URL. Copy this attribute to the clipboard.
- Select the + icon next to the tabs to create a new request.
- Using the dropdown, change the method from GET to PUT. Paste the URL into the Enter request URL box.
- Choose the Body tab, then the binary radio button.
- Choose Select file and choose a JPG file to upload.
- Choose Send. You see a 200 OK response after the file is uploaded.
- Navigate to the S3 console, and open the S3 bucket created by the deployment. In the bucket, you see the JPG file uploaded via Postman.
To test with the sample frontend application:
- Copy index.html from the example's repo to an S3 bucket.
- Update the object's permissions to make it publicly readable.
- In a browser, navigate to the public URL of the index.html file.
- Select Choose file and then select a JPG file to upload in the file picker. Choose Upload image. When the upload completes, a confirmation message is displayed.
- Navigate to the S3 console, and open the S3 bucket created by the deployment. In the bucket, you see the second JPG file you uploaded from the browser.
Understanding the S3 uploading process
When uploading objects to S3 from a web application, you must configure S3 for Cross-Origin Resource Sharing (CORS). CORS rules are defined as an XML document on the bucket. Using AWS SAM, you can configure CORS as part of the resource definition in the AWS SAM template:
  S3UploadBucket:
    Type: AWS::S3::Bucket
    Properties:
      CorsConfiguration:
        CorsRules:
          - AllowedHeaders:
              - "*"
            AllowedMethods:
              - GET
              - PUT
              - HEAD
            AllowedOrigins:
              - "*"
The preceding policy allows all headers and origins – it's recommended that you use a more restrictive policy for production workloads.
In the first step of the process, the API endpoint invokes the Lambda function to make the signed URL request. The Lambda function contains the following code:
const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION })
const s3 = new AWS.S3()
const URL_EXPIRATION_SECONDS = 300

// Main Lambda entry point
exports.handler = async (event) => {
  return await getUploadURL(event)
}

const getUploadURL = async function (event) {
  const randomID = parseInt(Math.random() * 10000000)
  const Key = `${randomID}.jpg`

  // Get signed URL from S3
  const s3Params = {
    Bucket: process.env.UploadBucket,
    Key,
    Expires: URL_EXPIRATION_SECONDS,
    ContentType: 'image/jpeg'
  }
  const uploadURL = await s3.getSignedUrlPromise('putObject', s3Params)
  return JSON.stringify({
    uploadURL: uploadURL,
    Key
  })
}
This function determines the name, or key, of the uploaded object, using a random number. The s3Params object defines the accepted content type and also specifies the expiration of the key. In this case, the key is valid for 300 seconds. The signed URL is returned as part of a JSON object including the key for the calling application.
The signed URL contains a security token with permissions to upload this single object to this bucket. To successfully generate this token, the code calling getSignedUrlPromise must have s3:putObject permissions for the bucket. This Lambda function is granted the S3WritePolicy policy to the bucket by the AWS SAM template.
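In an AWS SAM template, attaching S3WritePolicy to a function looks roughly like this (a sketch only; the function name, handler, and runtime are illustrative, and the sample repo's template is authoritative):

```yaml
UploadRequestFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: nodejs14.x
    Policies:
      - S3WritePolicy:
          BucketName: !Ref S3UploadBucket
```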
The uploaded object must match the same file name and content type as defined in the parameters. An object matching the parameters may be uploaded multiple times, providing that the upload process starts before the token expires. The default expiration is 15 minutes but you may want to specify shorter expirations depending upon your use case.
Once the frontend application receives the API endpoint response, it has the signed URL. The frontend application then uses the PUT method to upload binary data directly to the signed URL:
let blobData = new Blob([new Uint8Array(array)], { type: 'image/jpeg' })
const result = await fetch(signedURL, {
  method: 'PUT',
  body: blobData
})
At this point, the caller application is interacting directly with the S3 service and not with your API endpoint or Lambda function. S3 returns a 200 HTTP status code once the upload is complete.
For applications expecting a large number of user uploads, this provides a simple way to offload a large amount of network traffic to S3, away from your backend infrastructure.
Adding authentication to the upload process
The current API endpoint is open, available to any service on the internet. This means that anyone can upload a JPG file once they receive the signed URL. In most production systems, developers want to use authentication to control who has access to the API, and who can upload files to your S3 buckets.
You can restrict access to this API by using an authorizer. This sample uses HTTP APIs, which support JWT authorizers. This allows you to control access to the API via an identity provider, which could be a service such as Amazon Cognito or Auth0.
The Happy Path application only allows signed-in users to upload files, using Auth0 as the identity provider. The sample repo contains a second AWS SAM template, templateWithAuth.yaml, which shows how you can add an authorizer to the API:
  MyApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      Auth:
        Authorizers:
          MyAuthorizer:
            JwtConfiguration:
              issuer: !Ref Auth0issuer
              audience:
                - https://auth0-jwt-authorizer
            IdentitySource: "$request.header.Authorization"
        DefaultAuthorizer: MyAuthorizer
Both the issuer and audience attributes are provided by the Auth0 configuration. By specifying this authorizer as the default authorizer, it is used automatically for all routes using this API. Read part one of the Ask Around Me series to learn more about configuring Auth0 and authorizers with HTTP APIs.
After authentication is added, the calling web application provides a JWT token in the headers of the request:
const response = await axios.get(API_ENDPOINT_URL, {
  headers: {
    Authorization: `Bearer ${token}`
  }
})
API Gateway evaluates this token before invoking the getUploadURL Lambda function. This ensures that only authenticated users can upload objects to the S3 bucket.
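API Gateway validates the token's signature, issuer, and audience before the Lambda function runs. Purely as an illustration of what the authorizer checks (this is not the authorizer's actual code, and it performs no signature verification), a JWT's payload can be decoded to inspect its claims:

```javascript
// Decode (without verifying) a JWT's payload to inspect its iss and aud claims.
// API Gateway performs the real signature verification; this is inspection only.
function decodeJwtPayload (token) {
  const payloadPart = token.split('.')[1]
  return JSON.parse(Buffer.from(payloadPart, 'base64url').toString('utf8'))
}
```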
Modifying ACLs and creating publicly readable objects
In the current implementation, the uploaded object is not publicly accessible. To make an uploaded object publicly readable, you must set its access control list (ACL). There are preconfigured ACLs available in S3, including a public-read option, which makes an object readable by anyone on the internet. Set the appropriate ACL in the params object before calling s3.getSignedUrl:
const s3Params = {
  Bucket: process.env.UploadBucket,
  Key,
  Expires: URL_EXPIRATION_SECONDS,
  ContentType: 'image/jpeg',
  ACL: 'public-read'
}
Since the Lambda function must have the appropriate bucket permissions to sign the request, you must also ensure that the function has PutObjectAcl permission. In AWS SAM, you can add the permission to the Lambda function with this policy:
- Statement:
    - Effect: Allow
      Resource: !Sub 'arn:aws:s3:::${S3UploadBucket}/*'
      Action:
        - s3:putObjectAcl
Conclusion
Many web and mobile applications allow users to upload data, including large media files like images and videos. In a traditional server-based application, this can create heavy load on the application server, and also use a considerable amount of network bandwidth.
By enabling users to upload files to Amazon S3, this serverless pattern moves the network load away from your service. This can make your application much more scalable, and capable of handling spiky traffic.
This blog post walks through a sample application repo and explains the process for retrieving a signed URL from S3. It explains how to test the URLs in both Postman and in a web application. Finally, I explain how to add authentication and make uploaded objects publicly accessible.
To learn more, see this video walkthrough that shows how to upload directly to S3 from a frontend web application. For more serverless learning resources, visit https://serverlessland.com.
Source: https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/