Efficient Multi-File Upload to Azure Blob Storage with Pre-Signed URLs: A Step-by-Step Guide for Developers

Shravan K Subrahmanya
5 min read · Aug 12, 2024


Introduction

When dealing with large-scale applications, especially those involving file storage, efficiency and reliability are paramount. Traditional methods of uploading files to the backend and then pushing them to storage can be slow and prone to errors, especially when handling bulk uploads. However, leveraging Azure Blob Storage with a pre-signed URL approach can streamline this process, making it faster and more reliable. This blog will guide you through a modern and efficient way of handling multiple file uploads to Azure Blob Storage using Azure Functions and pre-signed URLs.

We’ll break down the backend and frontend code step-by-step, ensuring you understand each part of the process and how it contributes to the overall efficiency.

Why This Approach?

Before diving into the code, let’s understand why this approach is more efficient compared to traditional methods:

  1. Reduced Latency: By generating pre-signed URLs, files are uploaded directly to Azure Blob Storage from the client, bypassing the need to upload them to the server first. This reduces the overall time taken to upload files.
  2. Scalability: Each file is uploaded independently, allowing for parallel uploads. This means that if one file fails, the others are unaffected, making the process more robust and scalable.
  3. Serverless Architecture: Utilizing Azure Functions to generate upload URLs fits seamlessly into a serverless architecture, reducing costs and improving application performance.

Overall Architecture

Backend: Azure Function to Generate Pre-Signed URLs

The core of our backend lies in an Azure Function that generates unique upload URLs for each file. Here’s a breakdown of the code:

import logging
import azure.functions as func
from azure.storage.blob import BlobServiceClient, generate_blob_sas, BlobSasPermissions
import os
import json
from datetime import datetime, timedelta, timezone

file_uploads = func.Blueprint()

@file_uploads.route(route="generate-upload-urls", methods=["POST"])
def generate_upload_urls(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Generating pre-signed URLs for file uploads.')

    try:
        req_body = req.get_json()
        files = req_body.get('files')
        if not files:
            return func.HttpResponse(
                json.dumps({"error": "No files provided"}),
                status_code=400,
                mimetype="application/json"
            )

        # Initialize the Blob Service Client
        blob_service_client = BlobServiceClient.from_connection_string(os.getenv('BlobStorageConnectionString'))
        container_name = "<container-name>"

        # Ensure the target container exists, creating it if necessary
        try:
            container_client = blob_service_client.get_container_client(container_name)
            if not container_client.exists():
                container_client.create_container()
        except Exception as e:
            return func.HttpResponse(
                json.dumps({"error": f"Error creating/accessing Blob container: {e}"}),
                status_code=500,
                mimetype="application/json"
            )

        urls = []
        for file in files:
            file_name = file.get('fileName')
            file_type = file.get('fileType')
            blob_sas_token = generate_blob_sas(
                account_name=blob_service_client.account_name,
                container_name=container_name,
                blob_name=file_name,
                account_key=blob_service_client.credential.account_key,
                permission=BlobSasPermissions(write=True),
                expiry=datetime.now(timezone.utc) + timedelta(hours=1)  # Use a timezone-aware datetime
            )
            upload_url = f"https://{blob_service_client.account_name}.blob.core.windows.net/{container_name}/{file_name}?{blob_sas_token}"
            urls.append({"fileName": file_name, "uploadUrl": upload_url})

        return func.HttpResponse(
            json.dumps({"files": urls}),
            status_code=200,
            mimetype="application/json"
        )
    except Exception as e:
        logging.error(f"Unexpected error: {e}")
        return func.HttpResponse(
            json.dumps({"error": f"Unexpected error: {e}"}),
            status_code=500,
            mimetype="application/json"
        )

Important Note: For this Azure Function to work, the file_uploads blueprint needs to be registered in your function_app.py file as follows:

import azure.functions as func
from FileOperations.file_uploads import file_uploads

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)
app.register_functions(file_uploads)

Code Explanation:

  1. Blob Service Initialization: The BlobServiceClient is initialized using a connection string, and we ensure the target container exists or create it if it doesn't.
  2. URL Generation: For each file, a pre-signed URL is generated using generate_blob_sas, allowing write permissions for a specified time (in this case, one hour). These URLs are then returned to the client.
  3. Error Handling: The function includes robust error handling to manage potential issues like missing files in the request or problems accessing the Blob container.
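
For reference, a round trip through this endpoint might look like the following. The file name, account name, and container name are illustrative placeholders:

```
Request (POST /api/generate-upload-urls):
{
  "files": [
    { "fileName": "report.pdf", "fileType": "application/pdf" }
  ]
}

Response (200 OK):
{
  "files": [
    {
      "fileName": "report.pdf",
      "uploadUrl": "https://<account-name>.blob.core.windows.net/<container-name>/report.pdf?<sas-token>"
    }
  ]
}
```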

Frontend: Uploading Files Using Pre-Signed URLs

Once the backend provides the pre-signed URLs, the frontend handles the file upload process. Here’s the client-side code:

const UploadFiles = async (files, chatId, updateFileStatus) => {
  try {
    const fileMetadata = files.map(file => ({
      fileName: file.name,
      fileType: file.type,
      chatId: chatId
    }));

    const response = await fetch('http://localhost:7071/api/generate-upload-urls', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ files: fileMetadata })
    });

    if (!response.ok) {
      const errorText = await response.text();
      throw new Error(`Failed to get pre-signed URLs: ${errorText}`);
    }

    const result = await response.json();

    // Upload files using the pre-signed URLs
    for (let i = 0; i < files.length; i++) {
      const file = files[i];
      const uploadUrl = result.files[i].uploadUrl;

      try {
        const uploadResponse = await fetch(uploadUrl, {
          method: 'PUT',
          headers: {
            'x-ms-blob-type': 'BlockBlob',
            'Content-Type': file.type,
            'x-ms-meta-chatId': chatId
          },
          body: file
        });

        // fetch only rejects on network errors, so check the HTTP status
        // explicitly; otherwise a 403/404 would still be marked 'uploaded'
        if (!uploadResponse.ok) {
          throw new Error(`Upload failed with status ${uploadResponse.status}`);
        }

        // Update status to 'uploaded' after a successful upload
        updateFileStatus(file.name, 'uploaded');
      } catch (error) {
        console.error(`Error uploading file ${file.name}:`, error);
        updateFileStatus(file.name, 'error');
      }
    }

    return { message: `${files.length} files processed.` };
  } catch (error) {
    console.error('Error uploading files:', error);
    throw error;
  }
};

export default UploadFiles;

Code Explanation:

  1. Metadata Preparation: Before sending the files to the backend, metadata such as fileName, fileType, and a chatId is prepared.
  2. Fetching URLs: The client sends a request to the backend to retrieve the pre-signed URLs for each file. If successful, these URLs are then used for uploading the files directly to Azure Blob Storage.
  3. File Upload: Each file is uploaded individually using a PUT request to the corresponding pre-signed URL. Metadata such as chatId is included in the request headers.
  4. Error Handling: The code includes error handling at multiple stages, ensuring that any failures during the process are logged and the UI is updated accordingly.

Improving Performance: Parallel File Uploads

To further enhance performance, consider implementing parallel file uploads. This approach would allow multiple files to be uploaded simultaneously, significantly reducing the total time required for bulk uploads. JavaScript’s Promise.all() can be used to achieve this, making the process even faster.
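A minimal sketch of this idea is shown below. It uses Promise.allSettled rather than Promise.all so that one failed upload does not reject the whole batch; `uploadFilesInParallel` and `uploadOne` are hypothetical names, with `uploadOne` standing in for the PUT request from the earlier listing:

```javascript
// Hypothetical sketch: fire all uploads at once and collect per-file results.
// `uploadOne(file, uploadUrl)` stands in for the PUT to the pre-signed URL.
const uploadFilesInParallel = async (files, urlEntries, uploadOne) => {
  // Promise.allSettled (unlike Promise.all) never rejects, so one failed
  // upload does not discard the results of the others.
  const results = await Promise.allSettled(
    files.map((file, i) => uploadOne(file, urlEntries[i].uploadUrl))
  );
  // Map each settled promise back to a per-file status.
  return results.map((result, i) => ({
    fileName: files[i].name,
    status: result.status === 'fulfilled' ? 'uploaded' : 'error'
  }));
};
```

The returned array can then drive the same updateFileStatus calls as the sequential version, while the uploads themselves run concurrently.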

Conclusion

By utilizing Azure Functions and pre-signed URLs, we can create a highly efficient and scalable solution for handling multiple file uploads to Azure Blob Storage. This approach not only reduces the time required for uploads but also improves reliability by isolating the upload process for each file. For developers looking to build serverless applications, this method provides a robust foundation for managing file uploads in a cloud environment.
