Yasin

Resumable Upload Design

What it is

When uploading a large file, if the network drops or the page is closed, the upload does not restart from scratch — it resumes from where it left off.


Core idea

1. Slice the file on the client (split it into many small chunks)
2. Upload the chunks one by one
3. After an interruption, ask the server which chunks it already has
4. Upload only the missing chunks
5. Once everything is uploaded, tell the server to merge them

Full implementation

1. File slicing

// utils/upload/slice.ts

interface FileChunk {
  chunk: Blob; // chunk data
  hash: string; // chunk hash (unique identifier)
  index: number; // chunk sequence number
  filename: string; // original file name
}

function createFileChunks(
  file: File,
  chunkSize = 5 * 1024 * 1024,
): FileChunk[] {
  const chunks: FileChunk[] = [];
  let index = 0;
  let start = 0;

  while (start < file.size) {
    const chunk = file.slice(start, start + chunkSize);
    chunks.push({
      chunk,
      // NOTE: name-based ids collide across different files that share a
      // name; in production, prefix with the file's content hash instead.
      hash: `${file.name}-${index}`,
      index,
      filename: file.name,
    });
    start += chunkSize;
    index++;
  }

  return chunks;
}

Illustration:

100 MB file, 5 MB per chunk
→ 20 chunks
[chunk-0, chunk-1, chunk-2, ... chunk-19]
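The slicing arithmetic above can be checked with a tiny in-memory Blob (a sketch with sizes scaled down to bytes so it runs instantly; `Blob.slice` has the same semantics as `File.slice`):

```typescript
// Chunk count is ceil(totalSize / chunkSize), e.g. ceil(100 / 5) = 20.
function countChunks(totalSize: number, chunkSize: number): number {
  return Math.ceil(totalSize / chunkSize);
}

// Slice a 12-byte blob into 5-byte chunks: 5 + 5 + 2 → 3 pieces.
const blob = new Blob([new Uint8Array(12)]);
const chunkSize = 5;
const pieces: Blob[] = [];
for (let start = 0; start < blob.size; start += chunkSize) {
  pieces.push(blob.slice(start, start + chunkSize)); // same API as File.slice
}

console.log(pieces.length); // 3
console.log(pieces[2].size); // 2 — the final chunk is just the remainder
console.log(countChunks(100, 5)); // 20, matching the 100 MB / 5 MB example
```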

2. Compute the file hash (to uniquely identify the file)

Use a Web Worker so hashing doesn't block the main thread:

// utils/upload/hash.ts

export function calculateHash(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL("./hash.worker.ts", import.meta.url));
    worker.postMessage(file);
    worker.onmessage = (e) => {
      resolve(e.data);
      worker.terminate();
    };
    worker.onerror = (err) => {
      reject(err); // surface worker failures instead of hanging forever
      worker.terminate();
    };
  });
}

// utils/upload/hash.worker.ts
import SparkMD5 from "spark-md5";

self.onmessage = async (e: MessageEvent<File>) => {
  const file = e.data;
  const chunkSize = 5 * 1024 * 1024;
  const spark = new SparkMD5.ArrayBuffer();
  let offset = 0;

  while (offset < file.size) {
    const chunk = file.slice(offset, offset + chunkSize);
    const buffer = await chunk.arrayBuffer();
    spark.append(buffer);
    offset += chunkSize;
  }

  self.postMessage(spark.end());
};
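The worker streams the file through spark-md5 one chunk at a time. The same incremental pattern is easy to see with Node's built-in crypto module (shown here only as an illustration — spark-md5 is the browser-side equivalent): feeding the hash piece by piece yields the same digest as hashing everything at once.

```typescript
import { createHash } from "node:crypto";

// Feed data into the hash incrementally, mirroring spark.append(buffer)
// per chunk in the worker above.
function incrementalMd5(chunks: Uint8Array[]): string {
  const hash = createHash("md5");
  for (const chunk of chunks) hash.update(chunk); // one update per chunk
  return hash.digest("hex");
}

const data = new TextEncoder().encode("hello");
const whole = createHash("md5").update(data).digest("hex");
const chunked = incrementalMd5([data.slice(0, 2), data.slice(2)]);

console.log(whole === chunked); // true — chunking doesn't change the digest
console.log(whole); // 5d41402abc4b2a76b9719d911017c592
```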

3. Query uploaded chunks (the key to resuming)

// Before uploading, ask the server: which chunks of this file already exist?
async function getUploadedChunks(fileHash: string): Promise<string[]> {
  const res = await fetch(`/api/upload/status?hash=${encodeURIComponent(fileHash)}`);
  const data = await res.json();
  return data.uploadedChunks; // e.g. ['video.mp4-0', 'video.mp4-1', 'video.mp4-3']
}

4. Upload chunks (skipping the ones already uploaded)

// utils/upload/uploader.ts

interface UploadOptions {
  file: File;
  onProgress?: (percent: number) => void;
}

export async function uploadFile({ file, onProgress }: UploadOptions) {
  // 1. Compute the file hash
  const fileHash = await calculateHash(file);

  // 2. Query chunks already uploaded
  const uploadedChunks = await getUploadedChunks(fileHash);

  // 3. Slice the file
  const chunks = createFileChunks(file);
  const totalChunks = chunks.length;

  // 4. Filter out chunks already uploaded
  const pendingChunks = chunks.filter(
    (chunk) => !uploadedChunks.includes(chunk.hash),
  );

  let uploadedCount = uploadedChunks.length;

  // 5. Upload concurrently (with a concurrency cap)
  await concurrentUpload(pendingChunks, fileHash, {
    maxConcurrent: 3,
    onChunkComplete: () => {
      uploadedCount++;
      onProgress?.(Math.round((uploadedCount / totalChunks) * 100));
    },
  });

  // 6. All chunks done — tell the server to merge
  await fetch("/api/upload/merge", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      fileHash,
      filename: file.name,
      totalChunks,
    }),
  });
}

5. Concurrency control

// utils/upload/concurrent.ts

interface ConcurrentOptions {
  maxConcurrent: number;
  onChunkComplete: () => void;
}

async function concurrentUpload(
  chunks: FileChunk[],
  fileHash: string,
  options: ConcurrentOptions,
) {
  const { maxConcurrent, onChunkComplete } = options;
  let index = 0;

  async function uploadNext(): Promise<void> {
    if (index >= chunks.length) return;

    const chunk = chunks[index++];
    const formData = new FormData();
    formData.append("chunk", chunk.chunk);
    formData.append("hash", chunk.hash);
    formData.append("fileHash", fileHash);
    formData.append("index", String(chunk.index));
    formData.append("filename", chunk.filename);

    await fetch("/api/upload/chunk", {
      method: "POST",
      body: formData,
    });

    onChunkComplete();
    await uploadNext();
  }

  // cap the number of concurrent upload slots
  const tasks = Array(Math.min(maxConcurrent, chunks.length))
    .fill(null)
    .map(() => uploadNext());

  await Promise.all(tasks);
}
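The pool logic can be exercised without a network by swapping the fetch for a timed fake task and recording how many tasks are in flight at once. This is the same slot pattern as above (N workers pulling from a shared queue), generalized and instrumented to prove the cap holds; all names here are illustrative:

```typescript
// N "slots" pull the next task until the queue is empty.
// We track peak concurrency to verify it never exceeds maxConcurrent.
async function runPool(
  tasks: Array<() => Promise<void>>,
  maxConcurrent: number,
): Promise<number> {
  let index = 0;
  let running = 0;
  let peak = 0;

  async function next(): Promise<void> {
    while (index < tasks.length) {
      const task = tasks[index++];
      running++;
      peak = Math.max(peak, running);
      await task();
      running--;
    }
  }

  const slots = Array.from(
    { length: Math.min(maxConcurrent, tasks.length) },
    () => next(),
  );
  await Promise.all(slots);
  return peak;
}

// 10 fake "chunk uploads" of ~10 ms each, capped at 3 in flight.
const fakeUpload = () => new Promise<void>((resolve) => setTimeout(resolve, 10));
const peak = await runPool(Array.from({ length: 10 }, () => fakeUpload), 3);
console.log(peak); // 3 — never more than maxConcurrent running at once
```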

6. Pause and resume

// utils/upload/uploader.ts

let abortController: AbortController | null = null;

// Pause: abort the in-flight chunk requests.
// NOTE: this only takes effect if abortController.signal is forwarded into
// every fetch in concurrentUpload, i.e. fetch("/api/upload/chunk", { ..., signal }).
export function pauseUpload() {
  abortController?.abort();
}

// Resume: re-query the uploaded chunks and continue from there.
export async function resumeUpload(file: File) {
  abortController = new AbortController();
  await uploadFile({ file }); // already-uploaded chunks are skipped automatically
}
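The cancellation mechanics behind pause are worth seeing in isolation: `fetch(url, { signal })` rejects with an `AbortError` the moment `controller.abort()` fires. A standalone sketch, using an abortable timer in place of a real chunk request so it runs anywhere:

```typescript
// Stand-in for fetch(url, { signal }): resolves after ms, or rejects with
// an AbortError as soon as the signal fires.
function abortableDelay(ms: number, signal: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(new DOMException("Aborted", "AbortError"));
    const timer = setTimeout(resolve, ms);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new DOMException("Aborted", "AbortError"));
    });
  });
}

const controller = new AbortController();
setTimeout(() => controller.abort(), 10); // "pause" shortly after starting

let aborted = false;
try {
  await abortableDelay(1000, controller.signal); // stand-in for a chunk upload
} catch (err) {
  aborted = (err as DOMException).name === "AbortError";
}
console.log(aborted); // true — the pending "upload" was cancelled
```

On resume, nothing is lost: chunks that completed before the abort are on the server, and the next `uploadFile` call skips them after re-querying the status endpoint.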

Frontend component

// components/FileUploader.tsx
"use client";
import { useState, useRef } from "react";
import { uploadFile, pauseUpload, resumeUpload } from "@/utils/upload/uploader";

export default function FileUploader() {
  const [progress, setProgress] = useState(0);
  const [status, setStatus] = useState<
    "idle" | "uploading" | "paused" | "done"
  >("idle");
  const fileRef = useRef<File | null>(null);

  const handleFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    fileRef.current = e.target.files?.[0] || null;
  };

  const handleUpload = async () => {
    if (!fileRef.current) return;
    setStatus("uploading");

    await uploadFile({
      file: fileRef.current,
      onProgress: (percent) => setProgress(percent),
    });

    setStatus("done");
  };

  const handlePause = () => {
    pauseUpload();
    setStatus("paused");
  };

  const handleResume = async () => {
    if (!fileRef.current) return;
    setStatus("uploading");
    await resumeUpload(fileRef.current);
    setStatus("done");
  };

  return (
    <div>
      <input type="file" onChange={handleFileChange} />

      {status === "idle" && <button onClick={handleUpload}>Start upload</button>}
      {status === "uploading" && <button onClick={handlePause}>Pause</button>}
      {status === "paused" && <button onClick={handleResume}>Resume</button>}

      <div>
        <div
          style={{
            width: `${progress}%`,
            height: "20px",
            background: "#4caf50",
            transition: "width 0.3s",
          }}
        />
        <span>{progress}%</span>
      </div>

      {status === "done" && <p>Upload complete ✅</p>}
    </div>
  );
}

What the backend does (Node.js sketch)

// Receive a chunk
app.post("/api/upload/chunk", (req, res) => {
  // Store to a temp directory: /tmp/uploads/{fileHash}/{chunk-hash}
});

// Query uploaded chunks
app.get("/api/upload/status", (req, res) => {
  // List the files already present under /tmp/uploads/{fileHash}/
  // Respond with { uploadedChunks: ['video.mp4-0', 'video.mp4-1'] }
});

// Merge chunks
app.post("/api/upload/merge", (req, res) => {
  // Read all chunks in index order and concatenate into the final file
  // Delete the temporary chunks
});
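The merge step can be made concrete: read the chunk files in numeric index order, concatenate them into the target file, then remove the temporary directory. A minimal sketch, assuming chunks are stored under a per-file directory named by their index (file names and layout are illustrative):

```typescript
import { mkdtemp, writeFile, readFile, readdir, rm } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Concatenate chunk files (named "0", "1", "2", ...) into one output file,
// then delete the temporary chunk directory.
async function mergeChunks(chunkDir: string, outFile: string): Promise<void> {
  const names = await readdir(chunkDir);
  // Numeric sort: "10" must come after "2", so don't sort lexicographically.
  names.sort((a, b) => Number(a) - Number(b));
  const parts = await Promise.all(names.map((n) => readFile(join(chunkDir, n))));
  await writeFile(outFile, Buffer.concat(parts));
  await rm(chunkDir, { recursive: true }); // clean up the temporary chunks
}

// Demo: write three tiny chunks, merge them, and read the result back.
const dir = await mkdtemp(join(tmpdir(), "upload-"));
await writeFile(join(dir, "0"), "hel");
await writeFile(join(dir, "1"), "lo ");
await writeFile(join(dir, "2"), "world");
const out = join(tmpdir(), "merged-demo.txt");
await mergeChunks(dir, out);
console.log(await readFile(out, "utf8")); // "hello world"
```

In production the merge endpoint would also verify that all `totalChunks` pieces are present before concatenating, and ideally re-hash the merged file against `fileHash`.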

Full flow

Pick a file
Compute the file hash (Web Worker)
Ask the server: which chunks are already uploaded?
Slice the file, filter out the uploaded chunks
Upload the remaining chunks concurrently (at most 3 at a time)
All done → tell the server to merge
Upload complete

On disconnect or pause:
On resume, re-query the uploaded chunks
Upload only what's left — nothing is re-sent

Summary of key points

| Point | Explanation |
| --- | --- |
| File slicing | Split large files into 5 MB chunks |
| File hash | Uniquely identifies the file; avoids duplicate uploads |
| Querying uploaded chunks | The core of resumable upload |
| Concurrency control | Caps simultaneous uploads so bandwidth isn't saturated |
| Hashing in a Web Worker | Keeps the main thread responsive |
| sendBeacon / keepalive | Ensures data is sent even as the page closes |
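The last row deserves a sketch: a normal fetch fired during page close may be dropped by the browser, so a final progress snapshot is typically sent with `navigator.sendBeacon` (or `fetch(..., { keepalive: true })`). The payload builder below is pure and testable; the beacon wiring is browser-only and guarded so the module also loads under Node. The endpoint and field names are illustrative assumptions, not part of the API above:

```typescript
// Pure payload builder, so the serialization can be tested in isolation.
function buildProgressPayload(fileHash: string, uploadedChunks: string[]): string {
  return JSON.stringify({ fileHash, uploadedChunks });
}

// Browser-only wiring: called from a `pagehide` listener, e.g.
//   window.addEventListener("pagehide", () => reportOnClose(hash, done));
function reportOnClose(fileHash: string, uploadedChunks: string[]): void {
  const nav = (globalThis as any).navigator;
  if (!nav || typeof nav.sendBeacon !== "function") return; // non-browser env
  const body = new Blob([buildProgressPayload(fileHash, uploadedChunks)], {
    type: "application/json",
  });
  // sendBeacon queues the request even while the page is unloading;
  // fetch(url, { method: "POST", body, keepalive: true }) is the alternative.
  nav.sendBeacon("/api/upload/progress", body); // endpoint is illustrative
}
```

This is only a safety net for progress reporting — correctness never depends on it, because the status endpoint is re-queried on the next upload attempt anyway.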