I'm writing a small webapp in Go that involves parsing files uploaded by the user. I'd like to automatically detect whether the file is gzip-compressed and create the appropriate reader/scanner. One twist is that I can't read the whole file into memory; I can only operate on the stream. Here's what I have:
func scannerFromFile(reader io.Reader) (*bufio.Scanner, error) {
    var scanner *bufio.Scanner

    // Create a bufio.Reader so we can 'peek' at the first few bytes
    bReader := bufio.NewReader(reader)
    testBytes, err := bReader.Peek(64) // read a few bytes without consuming
    if err != nil {
        return nil, err
    }

    // Detect whether the content is gzipped
    contentType := http.DetectContentType(testBytes)

    // If we detect gzip, make a gzip reader, then wrap it in a scanner
    if strings.Contains(contentType, "x-gzip") {
        gzipReader, err := gzip.NewReader(bReader)
        if err != nil {
            return nil, err
        }
        scanner = bufio.NewScanner(gzipReader)
    } else {
        // Not gzipped; just make a scanner based on the reader
        scanner = bufio.NewScanner(bReader)
    }
    return scanner, nil
}
This works for plain text, but for gzipped data it inflates incorrectly, and after a few kB I invariably get garbled output. Is there a simpler way to do this? Any idea why it stops decompressing correctly after a few thousand lines?