The metric

Performance evaluation

To evaluate the performance of the considered methods we use the PASCAL VOC Intersection-over-Union (IoU), also known as the Jaccard Index, which quantifies the area of overlap divided by the area of union between the predicted segmentation and the ground truth. Quite simply, the IoU metric counts the pixels common to the target and prediction masks and divides that number by the total number of pixels present across both masks, i.e. TP / (TP + FP + FN), where TP, FP and FN stand for True Positives, False Positives and False Negatives respectively [1].
The IoU is computed for each class separately and then averaged over all classes to provide a global mean IoU score (mIoU) for the semantic segmentation prediction.
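
As a reference, the snippet below sketches how the per-class IoU and the resulting mIoU can be computed from integer label maps; the function names and the handling of classes absent from both masks are our own choices and are not taken from the official PASCAL VOC tooling.

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Per-class IoU from integer label maps of shape (H, W).

    Classes absent from both masks yield NaN so they can be
    ignored when averaging."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        tp = np.logical_and(pred_c, target_c).sum()   # true positives
        fp = np.logical_and(pred_c, ~target_c).sum()  # false positives
        fn = np.logical_and(~pred_c, target_c).sum()  # false negatives
        union = tp + fp + fn
        if union > 0:
            ious[c] = tp / union                      # IoU = TP / (TP + FP + FN)
    return ious

def mean_iou(pred, target, num_classes):
    """mIoU: average of the per-class IoU over the classes that appear."""
    return float(np.nanmean(per_class_iou(pred, target, num_classes)))
```

Note that the official PASCAL VOC evaluation accumulates the TP/FP/FN counts over the whole dataset before taking the ratio; the per-image version above is only meant to illustrate the formula.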

Distance among domains

To measure the distance and the similarity between two or more domains we proceed as follows:

  • for each domain, we randomly select 500 samples;
  • for each sample, we extract a feature vector using a ResNet-101 [3] pre-trained on ImageNet;
  • we reduce the dimensionality by applying PCA and keeping the first 50 principal components;
  • in one case, we compute the mean feature vector for each domain and use it to measure the Euclidean and Cosine distances (see the sketch after this list);
  • in the other case, we compute the feature-wise Bhattacharyya distance;
  • additionally, we use t-SNE [4] to project the features extracted from the ResNet-101 into a more interpretable 2D space.
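
The distance computations in the last three steps can be sketched as follows, assuming the per-domain features have already been reduced with PCA; the univariate Gaussian assumption behind the feature-wise Bhattacharyya distance and the helper names are ours, so treat this as an illustration rather than the exact implementation.

```python
import numpy as np
from scipy.spatial.distance import euclidean, cosine

def mean_feature_distances(feats_a, feats_b):
    """Euclidean and Cosine distance between the mean feature vectors of
    two domains. feats_a, feats_b: arrays of shape (N, D) holding the
    PCA-reduced features of the sampled images."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    return euclidean(mu_a, mu_b), cosine(mu_a, mu_b)

def bhattacharyya_distance(feats_a, feats_b, eps=1e-8):
    """Feature-wise Bhattacharyya distance, modelling each PCA component
    as a univariate Gaussian and summing over the components."""
    mu_a, var_a = feats_a.mean(axis=0), feats_a.var(axis=0) + eps
    mu_b, var_b = feats_b.mean(axis=0), feats_b.var(axis=0) + eps
    term_var = 0.25 * np.log(0.25 * (var_a / var_b + var_b / var_a + 2.0))
    term_mean = 0.25 * (mu_a - mu_b) ** 2 / (var_a + var_b)
    return float(np.sum(term_var + term_mean))
```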

To reproduce our results, we provide here the code for the [ResNet-101] with the t-SNE computation included.
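
For orientation, a minimal sketch of how the feature extraction and the 2D projection could be implemented with torchvision and scikit-learn is given below; the preprocessing, the choice of the pooled 2048-d output as the feature, and all function names are assumptions and may differ from the linked code.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from PIL import Image

# ImageNet-pretrained ResNet-101 with the classification head removed,
# so the forward pass returns the 2048-d pooled feature vector.
backbone = models.resnet101(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Extract one 2048-d ResNet-101 feature per image."""
    feats = []
    for path in image_paths:
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feats.append(backbone(img).squeeze(0))
    return torch.stack(feats).numpy()

def project_features(features):
    """PCA to 50 components, then t-SNE down to 2D for visualisation."""
    reduced = PCA(n_components=50).fit_transform(features)
    return TSNE(n_components=2).fit_transform(reduced)
```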

This section and all its subsections will be updated as new benchmarks are performed.
A dedicated page will soon be available for anyone who wants to submit their results.