I'm working with a dataset that is relatively large for web users, especially smartphone users, and I'm concerned about performance. Which is the bigger problem for users:
Forcing the client's browser to fetch/request the larger data file (JSON), or
Forcing the client's browser to reformat the smaller file (CSV) into the larger format (JSON) before it can be used.
When I compile the data as JSON, it comes to about 570 KB, far larger than anything I normally work with. And that's already stripped down (for example, I've reduced each key to a single character).
When I compile the data as CSV, it comes to about 220 KB. However, I would then need the browser to reformat it into JSON anyway.
Here's a small example. A CSV file:
"year","birth","101","102","103","104","105"
1981,"Australia",5972,1099,573,747,667
1981,"China",141,4,3,2,2
1981,"India",139,5,4,6,2
1981,"Indonesia",371,9,14,5,6
1981,"Malaysia",838,72,42,11,14
...compared to the same data as JSON:
[{"year":1981,"birth":"Australia","101":5972,"102":1099,"103":573,"104":747,"105":667},
{year":1981,"birth":"China","101":141,"102":4,"103":3,"104":2,"105":2},
{year":1981,"birth":"India","101":139,"102":5,"103":4,"104":6,"105":2},
{year":1981,"birth":"Indonesia","101":371,"102":9,"103":14,"104":5,"105":6},
{year":1981,"birth":"Malaysia","101":838,"102":72,"103":42,"104":11,"105":14}]
TL;DR: For performance, which matters more: (1) minimizing the size of the data file, or (2) minimizing the amount of data the browser has to process?