Documentation for DatasetAttack Module

DatasetAttack

Bases: Attack

Implements an attack that replaces the training dataset with a malicious version during specific rounds of the engine's execution.

This attack modifies the dataset used by the engine's trainer to introduce malicious data, potentially impacting the model's training process.
Source code in nebula/addons/attacks/dataset/datasetattack.py (lines 7–68)
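To make the contract above concrete, here is a minimal sketch of what a DatasetAttack skeleton could look like. The base class shown, the attribute names, and the trivial subclass are assumptions for illustration only, not the actual nebula API.

```python
from abc import ABC, abstractmethod


class DatasetAttack(ABC):
    """Sketch: swaps the trainer's dataset for a malicious one during attack rounds."""

    def __init__(self, engine):
        # The engine manages the attack context (trainer, current round, ...).
        self.engine = engine

    @abstractmethod
    def get_malicious_dataset(self):
        """Subclasses define how the malicious dataset is created or retrieved."""
        raise NotImplementedError


class NoOpDatasetAttack(DatasetAttack):
    """Trivial concrete subclass used only to demonstrate the contract."""

    def get_malicious_dataset(self):
        return []
```

Note the abstract-base pattern: the class cannot be instantiated until a subclass supplies get_malicious_dataset.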
__init__(engine)

Initializes the DatasetAttack with the given engine.

Parameters:

Name | Type | Description | Default
---|---|---|---
engine | | The engine managing the attack context. | required
Source code in nebula/addons/attacks/dataset/datasetattack.py (lines 15–24)
_inject_malicious_behaviour(target_function, *args, **kwargs) (async)

Abstract method for injecting malicious behavior into a target function.

This method is not implemented in this class and must be overridden by subclasses if additional malicious behavior is required.

Parameters:

Name | Type | Description | Default
---|---|---|---
target_function | callable | The function to inject the malicious behavior into. | required
*args | | Positional arguments for the malicious behavior. | ()
**kwargs | | Keyword arguments for the malicious behavior. | {}

Raises:

Type | Description
---|---
NotImplementedError | This method is not implemented in this class.
Source code in nebula/addons/attacks/dataset/datasetattack.py (lines 40–55)
attack() (async)
Performs the attack by replacing the training dataset with a malicious version.
During the specified rounds of the attack, the engine's trainer is provided with a malicious dataset. The attack is stopped when the engine reaches the designated stop round.
Source code in nebula/addons/attacks/dataset/datasetattack.py (lines 26–38)
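The round-gated replacement that attack() performs might look like the following sketch. The attribute names (engine.round, trainer.datamodule) and the start/stop bounds are assumptions for illustration, not the engine's documented interface.

```python
from types import SimpleNamespace


def maybe_replace_dataset(engine, malicious_dataset, start_round, stop_round):
    """Swap in the malicious dataset while start_round <= round < stop_round."""
    if start_round <= engine.round < stop_round:
        engine.trainer.datamodule = malicious_dataset
        return True  # attack active this round
    return False  # outside the attack window: leave the dataset alone


# Dummy engine standing in for the real one.
engine = SimpleNamespace(round=3, trainer=SimpleNamespace(datamodule="clean"))
active = maybe_replace_dataset(engine, "poisoned", start_round=2, stop_round=5)
```

Once the engine's round reaches stop_round, the condition fails and the trainer keeps whatever dataset it currently holds, which matches the "stopped at the designated stop round" behavior described above.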
get_malicious_dataset() (abstractmethod)
Abstract method to retrieve the malicious dataset.
Subclasses must implement this method to define how the malicious dataset is created or retrieved.
Raises:

Type | Description
---|---
NotImplementedError | If the method is not implemented in a subclass.
Source code in nebula/addons/attacks/dataset/datasetattack.py (lines 57–68)
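As one concrete example of what a subclass might return, here is a hypothetical label-flipping implementation of get_malicious_dataset. The dataset representation (a list of (features, label) pairs) and the class name are assumptions for illustration.

```python
class LabelFlippingAttack:
    """Hypothetical subclass: poisons the dataset by rotating every label."""

    def __init__(self, dataset, num_classes):
        self.dataset = dataset  # assumed: list of (features, label) pairs
        self.num_classes = num_classes

    def get_malicious_dataset(self):
        # Shift each label to the next class so every example is mislabeled.
        return [(x, (y + 1) % self.num_classes) for x, y in self.dataset]
```

A real subclass would typically build the poisoned dataset from the engine's clean one, but the key point is the same: get_malicious_dataset is the single place where the poisoning strategy lives.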