Kaggle Winner Solution Overview | Understanding the Amazon from Space: 1st place

Alice嘟嘟

Below is a brief introduction to the 1st-place winning solution for the competition: Understanding the Amazon from Space.

The goal of this competition is to better track and understand the causes of deforestation by analyzing satellite images of the Amazon basin.

The competition provides over 40,000 training images, and the task is to label them.

There are 17 labels from the following 3 groups:

  • Atmospheric conditions: clear, partly cloudy, cloudy, and haze

  • Common land cover and land use types: rainforest, agriculture, rivers, towns/cities, roads, cultivation, and bare ground

  • Rare land cover and land use types: slash and burn, selective logging, blooming, conventional mining, artisanal mining, and blow down

Each image can carry multiple labels.
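For context, here is a minimal sketch of how such multi-label targets are commonly encoded as 17-dimensional multi-hot vectors. The tag strings follow the format of the competition's train CSV; the helper function itself is illustrative, not the winner's code.

```python
import numpy as np

# The 17 tags of the competition, roughly matching the three groups above.
LABELS = [
    "clear", "partly_cloudy", "cloudy", "haze",
    "primary", "agriculture", "water", "habitation", "road",
    "cultivation", "bare_ground",
    "slash_burn", "selective_logging", "blooming",
    "conventional_mine", "artisanal_mine", "blow_down",
]
LABEL_TO_IDX = {name: i for i, name in enumerate(LABELS)}

def encode_tags(tag_string: str) -> np.ndarray:
    """Turn a space-separated tag string into a 17-dim multi-hot vector."""
    target = np.zeros(len(LABELS), dtype=np.float32)
    for tag in tag_string.split():
        target[LABEL_TO_IDX[tag]] = 1.0
    return target

# Example: an image tagged as clear primary rainforest with a road.
print(encode_tags("clear primary road"))
```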


This is a multi-label classification problem, and the labels are imbalanced.

The 1st place winner is bestfitting (https://www.kaggle.com/bestfitting).

In the preprocessing stage, he applies a haze-removal technique and resizing, as well as data augmentation steps such as flipping, rotating, transposing, and elastic transforms.
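As a rough illustration of the simpler augmentations (flips, 90-degree rotations, transposes), here is a NumPy sketch; haze removal and elastic transforms are more involved and omitted. This is not the winner's code.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly flip, rotate (by 90-degree steps), and transpose an HxWxC image."""
    if rng.random() < 0.5:
        image = np.flip(image, axis=1)           # horizontal flip
    if rng.random() < 0.5:
        image = np.flip(image, axis=0)           # vertical flip
    image = np.rot90(image, k=rng.integers(4))   # 0/90/180/270 degree rotation
    if rng.random() < 0.5:
        image = np.transpose(image, (1, 0, 2))   # swap height and width axes
    return image

rng = np.random.default_rng(0)
dummy = np.zeros((256, 256, 3), dtype=np.float32)
print(augment(dummy, rng).shape)
```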

As for the models, his ensemble consists of 11 popular convolutional networks, a mixture of ResNets, DenseNets, Inception, and SimpleNet variants with different parameters and layers. Each model predicts the probabilities of all 17 labels.
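Each network therefore ends in 17 independent sigmoid outputs, one per label. A hedged sketch of that setup in PyTorch, assuming a recent torchvision and using ResNet-34 as a stand-in backbone; it is not the winner's actual code.

```python
import torch
import torch.nn as nn
from torchvision import models

class AmazonClassifier(nn.Module):
    """A backbone CNN with a 17-way multi-label head (illustrative only)."""
    def __init__(self, num_labels: int = 17):
        super().__init__()
        backbone = models.resnet34(weights=None)   # any ResNet/DenseNet/Inception is wired similarly
        backbone.fc = nn.Linear(backbone.fc.in_features, num_labels)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Independent sigmoid per label: an image can carry several labels at once.
        return torch.sigmoid(self.backbone(x))

model = AmazonClassifier()
probs = model(torch.randn(2, 3, 224, 224))
print(probs.shape)  # torch.Size([2, 17])
```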

There are correlations among the 17 labels. For example, the clear, partly cloudy, cloudy, and haze labels are mutually exclusive, while the habitation and agriculture labels appear together quite frequently.

To make use of this structure, he implements a two-level ridge regression.

One level takes advantage of the relations among the 17 labels:
for a single model, he feeds that model's predictions of all 17 labels in as features to predict the final probability of each of the 17 labels.
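A hedged sketch of that first level with scikit-learn's Ridge, assuming out-of-fold predictions `oof_preds` of shape (n_samples, 17) from one model and multi-hot ground truth `y_true`; the array names and the alpha value are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Stand-in data: out-of-fold predictions from ONE model and the true multi-hot labels.
oof_preds = np.random.rand(1000, 17)
y_true = (np.random.rand(1000, 17) > 0.5).astype(float)

# Level 1: for each label, fit a ridge regression on all 17 of this model's outputs,
# so correlations between labels (e.g. the exclusive weather labels) can be exploited.
level1 = [Ridge(alpha=1.0).fit(oof_preds, y_true[:, j]) for j in range(17)]

refined = np.column_stack([reg.predict(oof_preds) for reg in level1])
print(refined.shape)  # (1000, 17) refined label scores for this single model
```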

The other level selects the best models for predicting each label, as sketched below.
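Under the same assumptions, this second level can be sketched as a per-label ridge regression over all 11 models' outputs, so the learned weights decide which models to trust for which label. Again, an illustration rather than the winner's exact code.

```python
import numpy as np
from sklearn.linear_model import Ridge

n_samples, n_models, n_labels = 1000, 11, 17

# Stand-in data: refined_preds[m] holds model m's (n_samples, 17) level-1 output.
refined_preds = np.random.rand(n_models, n_samples, n_labels)
y_true = (np.random.rand(n_samples, n_labels) > 0.5).astype(float)

# Level 2: for each label, the features are the 11 models' predictions of that label;
# the fitted coefficients act as per-model weights for that particular label.
final = np.zeros((n_samples, n_labels))
for j in range(n_labels):
    X = refined_preds[:, :, j].T             # shape (n_samples, n_models)
    reg = Ridge(alpha=1.0).fit(X, y_true[:, j])
    final[:, j] = reg.predict(X)
print(final.shape)  # (1000, 17) ensembled predictions
```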

One more special technique is that he writes his own soft F2-loss function, so that his models pay more attention to each label's recall, which the competition's F2 metric weights heavily.
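The F2 score itself is not differentiable, so a "soft" version replaces hard 0/1 predictions with probabilities. Below is one reasonable formulation in PyTorch; it is an assumption, not necessarily the winner's exact loss.

```python
import torch

def soft_f2_loss(probs: torch.Tensor, targets: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Differentiable surrogate of (1 - F2), computed per label and averaged.

    probs, targets: (batch, 17) tensors; F2 weights recall four times as much as precision.
    """
    tp = (probs * targets).sum(dim=0)         # soft true positives per label
    fp = (probs * (1 - targets)).sum(dim=0)   # soft false positives per label
    fn = ((1 - probs) * targets).sum(dim=0)   # soft false negatives per label
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f2 = (5 * precision * recall) / (4 * precision + recall + eps)
    return 1 - f2.mean()

probs = torch.rand(8, 17, requires_grad=True)
targets = (torch.rand(8, 17) > 0.5).float()
loss = soft_f2_loss(probs, targets)
loss.backward()
print(loss.item())
```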

