I'm trying to use the PutPart method from Mitchell Hashimoto's goamz fork. Sadly, every time I retrieve a part and check its size, it reports the size of the entire file rather than just the chunk's.
For example, when uploading a 15MB file, I expect to see:
Uploading...
Processing 1 part of 3 and uploaded 5242880.0 bytes.
Processing 2 part of 3 and uploaded 5242880.0 bytes.
Processing 3 part of 3 and uploaded 5242880.0 bytes.
Instead, I see each part reported as 15728640 bytes, which is the size of the entire 15MB file:
Uploading...
Processing 1 part of 3 and uploaded 15728640 bytes.
Processing 2 part of 3 and uploaded 15728640 bytes.
Processing 3 part of 3 and uploaded 15728640 bytes.
Is this caused by a problem with file.Read(partBuffer)? Any help would be greatly appreciated.
I'm using Go 1.5.1 on a Mac.
package main

import (
	"bufio"
	"fmt"
	"math"
	"net/http"
	"os"

	"github.com/mitchellh/goamz/aws"
	"github.com/mitchellh/goamz/s3"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	fmt.Println("Test")

	auth, err := aws.GetAuth("XXXXX", "XXXXXXXXXX")
	check(err)

	client := s3.New(auth, aws.USWest2)

	b := s3.Bucket{
		S3:   client,
		Name: "some-bucket",
	}

	fileToBeUploaded := "testfile"
	file, err := os.Open(fileToBeUploaded)
	check(err)
	defer file.Close()

	fileInfo, _ := file.Stat()
	fileSize := fileInfo.Size()
	bytes := make([]byte, fileSize)

	// read into buffer
	buffer := bufio.NewReader(file)
	_, err = buffer.Read(bytes)
	check(err)
	filetype := http.DetectContentType(bytes)

	// set up for multipart upload
	multi, err := b.InitMulti("/"+fileToBeUploaded, filetype, s3.ACL("bucket-owner-read"))
	check(err)

	const fileChunk = 5242880 // 5MB
	totalPartsNum := uint64(math.Ceil(float64(fileSize) / float64(fileChunk)))
	parts := []s3.Part{}

	fmt.Println("Uploading...")
	for i := uint64(1); i < totalPartsNum; i++ {
		partSize := int(math.Min(fileChunk, float64(fileSize-int64(i*fileChunk))))
		partBuffer := make([]byte, partSize)
		_, err := file.Read(partBuffer)
		check(err)
		part, err := multi.PutPart(int(i), file) // write to S3 bucket part by part
		check(err)
		fmt.Printf("Processing %d part of %d and uploaded %d bytes.\n", int(i), int(totalPartsNum), int(part.Size))
		parts = append(parts, part)
	}

	err = multi.Complete(parts)
	check(err)
}
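For reference, here is a minimal sketch of what I assumed each loop iteration should look like: hand PutPart a reader over just the chunk that was read, rather than the *os.File itself. This is only an illustration of my assumption (it relies on PutPart accepting any io.ReadSeeker, and bytes.NewReader would require adding "bytes" to the imports):

	partBuffer := make([]byte, partSize)
	_, err := file.Read(partBuffer)
	check(err)

	// assumption: wrap just this chunk in a bytes.Reader (an io.ReadSeeker)
	// rather than handing PutPart the whole *os.File
	part, err := multi.PutPart(int(i), bytes.NewReader(partBuffer))
	check(err)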