Documentation for the Label Flipping Module
This module provides a function for label flipping in datasets, allowing for the simulation of label noise as a form of data poisoning. The main function modifies the labels of specific samples in a dataset based on a specified percentage and target conditions.
Function:
- labelFlipping: Flips the labels of a specified portion of a dataset to random values or to a specific target label.
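As a purely illustrative sketch (plain Python, not the module's actual code), label flipping amounts to replacing a fraction of the labels in a copy of the data with other classes:

```python
import copy
import random

labels = [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]             # toy label vector
poisoned = copy.deepcopy(labels)
classes = sorted(set(labels))

flip_count = int(0.3 * len(poisoned))                # poison 30% of the samples
for idx in random.sample(range(len(poisoned)), flip_count):
    # replace the label with a different, randomly chosen class
    poisoned[idx] = random.choice([c for c in classes if c != poisoned[idx]])

print(poisoned)
```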
LabelFlippingAttack
Bases: DatasetAttack
Implements an attack that flips the labels of a portion of the training dataset.
This attack alters the labels of certain data points in the training set to mislead the training process.
Source code in nebula/addons/attacks/dataset/labelflipping.py, lines 18-125.
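The outline below is a hypothetical sketch of how an attack with this interface could be structured. The attribute names used to reach the training data through `engine` and the `attack_params` keys are assumptions; the authoritative implementation is the source file referenced above.

```python
# Hypothetical sketch only; the real LabelFlippingAttack subclasses DatasetAttack
# and is defined in nebula/addons/attacks/dataset/labelflipping.py.
from nebula.addons.attacks.dataset.labelflipping import labelFlipping


class LabelFlippingAttackSketch:
    def __init__(self, engine, attack_params: dict):
        self.engine = engine
        # Poisoning configuration taken from attack_params (key names are assumed)
        self.poisoned_percent = float(attack_params.get("poisoned_percent", 0))
        self.targeted = bool(attack_params.get("targeted", False))
        self.target_label = int(attack_params.get("target_label", 4))
        self.target_changed_label = int(attack_params.get("target_changed_label", 7))

    def get_malicious_dataset(self):
        # Delegate the label manipulation to the module-level labelFlipping() function.
        train_set = self.engine.trainer.datamodule.train_set          # assumed accessor
        indices = self.engine.trainer.datamodule.train_set_indices    # assumed accessor
        return labelFlipping(
            train_set,
            indices,
            poisoned_percent=self.poisoned_percent,
            targeted=self.targeted,
            target_label=self.target_label,
            target_changed_label=self.target_changed_label,
        )
```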
__init__(engine, attack_params)
Initializes the LabelFlippingAttack with the engine and attack parameters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
engine | | The engine managing the attack context. | required |
attack_params | dict | Parameters for the attack, including the percentage of poisoned data, targeting options, and label specifications. | required |
Source code in nebula/addons/attacks/dataset/labelflipping.py, lines 25-41.
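A possible instantiation might look like the following; the exact keys expected in `attack_params` are assumptions based on the options documented above, and `engine` stands for the NEBULA engine instance that manages the attack context.

```python
from nebula.addons.attacks.dataset.labelflipping import LabelFlippingAttack

# Hypothetical attack configuration; key names are assumed, not confirmed by the source.
attack_params = {
    "poisoned_percent": 0.2,        # flip 20% of the selected samples
    "targeted": True,               # restrict flipping to one class
    "target_label": 4,              # class whose labels are changed
    "target_changed_label": 7,      # class the labels are changed to
}

attack = LabelFlippingAttack(engine, attack_params)   # `engine` provided by NEBULA
```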
get_malicious_dataset()
Creates a malicious dataset by flipping the labels of selected data points.
Returns:
Name | Type | Description |
---|---|---|
Dataset | | The modified dataset with flipped labels. |
Source code in nebula/addons/attacks/dataset/labelflipping.py, lines 112-125.
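Continuing the hypothetical example above, the poisoned dataset could be retrieved and compared against the clean training set. The `targets` attribute and the accessor on `engine` are assumptions used only for illustration.

```python
poisoned_dataset = attack.get_malicious_dataset()

# Count how many labels differ from the clean training set (assumed accessors).
clean_targets = list(engine.trainer.datamodule.train_set.targets)
flipped = sum(int(a != b) for a, b in zip(clean_targets, poisoned_dataset.targets))
print(f"{flipped} of {len(clean_targets)} labels were flipped")
```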
labelFlipping(dataset, indices, poisoned_percent=0, targeted=False, target_label=4, target_changed_label=7)
Flips the labels of a specified portion of a dataset to random values or to a specific target label.
This function modifies the labels of selected samples in the dataset based on the specified poisoning percentage. Labels can be flipped either randomly or targeted to change from a specific label to another specified label.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dataset | Dataset | The dataset containing training data, expected to be a PyTorch dataset with a `targets` attribute. | required |
indices | list of int | The list of indices in the dataset to consider for label flipping. | required |
poisoned_percent | float | The ratio of labels to change, expressed as a fraction (0 <= poisoned_percent <= 1). Default is 0. | 0 |
targeted | bool | If True, flips only labels matching `target_label` to `target_changed_label`. | False |
target_label | int | The label to change when `targeted` is True. | 4 |
target_changed_label | int | The label to which `target_label` is changed when `targeted` is True. | 7 |
Returns:
Name | Type | Description |
---|---|---|
Dataset | | A deep copy of the original dataset with modified labels in `targets`. |
Raises:
Type | Description |
---|---|
ValueError | If `poisoned_percent` is not between 0 and 1. |
Notes
- When not in targeted mode, labels are flipped for a random selection of indices based on the specified `poisoned_percent`. The new label is chosen randomly from the existing classes.
- In targeted mode, labels that match `target_label` are directly changed to `target_changed_label`.
Source code in nebula/addons/attacks/dataset/labelflipping.py, lines 43-110.
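For readers who want the gist without opening the source file, the following is a minimal re-implementation sketch of the behaviour documented above. It assumes the dataset exposes a `targets` attribute and is not the actual NEBULA code.

```python
import copy
import random

import torch


def label_flipping_sketch(dataset, indices, poisoned_percent=0.0, targeted=False,
                          target_label=4, target_changed_label=7):
    """Illustrative re-implementation of the documented label-flipping behaviour."""
    if not 0 <= poisoned_percent <= 1:
        raise ValueError("poisoned_percent must be between 0 and 1")

    new_dataset = copy.deepcopy(dataset)                     # never modify the original
    targets = torch.as_tensor(new_dataset.targets).clone()   # assumes a `targets` attribute
    classes = set(targets.tolist())

    if not targeted:
        # Untargeted: flip a random fraction of the given indices to another random class.
        num_flipped = int(poisoned_percent * len(indices))
        for idx in random.sample(list(indices), num_flipped):
            other_classes = list(classes - {int(targets[idx])})
            targets[idx] = random.choice(other_classes)
    else:
        # Targeted: change every occurrence of target_label (within indices)
        # to target_changed_label.
        for idx in indices:
            if int(targets[idx]) == target_label:
                targets[idx] = target_changed_label

    new_dataset.targets = targets
    return new_dataset
```

Working on a deep copy keeps the clean dataset intact, so the caller can still compare poisoned and original labels after the attack.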