Documentation for DatasetAttack Module

DatasetAttack

Bases: Attack

Implements an attack that replaces the training dataset with a malicious version during specific rounds of the engine's execution.

This attack modifies the dataset used by the engine's trainer to introduce malicious data, potentially impacting the model's training process.
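A concrete subclass only needs to supply `get_malicious_dataset`. The sketch below is a hypothetical label-flipping attack: `LabelFlipAttack`, its `num_classes` parameter, and the `SimpleNamespace` stand-ins for the engine/trainer/datamodule chain are illustrative assumptions, not part of the module (in real use the class would subclass `DatasetAttack` and receive the real engine).

```python
from types import SimpleNamespace

class LabelFlipAttack:
    """Hypothetical DatasetAttack-style subclass that flips every label.

    In the real module this would subclass DatasetAttack; the base class
    is omitted here so the sketch is self-contained.
    """

    def __init__(self, engine, num_classes=10):
        self.engine = engine
        self.num_classes = num_classes
        self.round_start_attack = 0
        self.round_stop_attack = 10

    def get_malicious_dataset(self):
        # Read the current train set from the engine and return a copy
        # with each label shifted to the next class (mod num_classes).
        train_set = self.engine.trainer.datamodule.train_set
        return [(x, (y + 1) % self.num_classes) for x, y in train_set]

# Minimal stand-in for the engine -> trainer -> datamodule chain.
engine = SimpleNamespace(
    round=0,
    trainer=SimpleNamespace(
        datamodule=SimpleNamespace(train_set=[("a", 0), ("b", 9)])
    ),
)
attack = LabelFlipAttack(engine)
print(attack.get_malicious_dataset())  # → [('a', 1), ('b', 0)]
```

During the attack window, `attack()` would assign this poisoned list back to `engine.trainer.datamodule.train_set`.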

Source code in nebula/addons/attacks/dataset/datasetattack.py
class DatasetAttack(Attack):
    """
    Implements an attack that replaces the training dataset with a malicious version
    during specific rounds of the engine's execution.

    This attack modifies the dataset used by the engine's trainer to introduce malicious
    data, potentially impacting the model's training process.
    """
    def __init__(self, engine):
        """
        Initializes the DatasetAttack with the given engine.

        Args:
            engine: The engine managing the attack context.
        """
        self.engine = engine
        self.round_start_attack = 0
        self.round_stop_attack = 10

    async def attack(self):
        """
        Performs the attack by replacing the training dataset with a malicious version.

        During the attack window, the engine's trainer is provided with a
        malicious dataset. The window spans rounds round_start_attack
        (inclusive) through round_stop_attack (exclusive); a stop message is
        logged once the window has closed.
        """
        if self.engine.round in range(self.round_start_attack, self.round_stop_attack):
            logging.info("[DatasetAttack] Performing attack")
            self.engine.trainer.datamodule.train_set = self.get_malicious_dataset()
        elif self.engine.round == self.round_stop_attack + 1:
            logging.info("[DatasetAttack] Stopping attack")

    async def _inject_malicious_behaviour(self, target_function, *args, **kwargs):
        """
        Abstract method for injecting malicious behavior into a target function.

        This method is not implemented in this class and must be overridden by subclasses
        if additional malicious behavior is required.

        Args:
            target_function (callable): The function to inject the malicious behavior into.
            *args: Positional arguments for the malicious behavior.
            **kwargs: Keyword arguments for the malicious behavior.

        Note:
            This default implementation is a no-op; subclasses override it
            when additional malicious behavior is required.
        """
        pass

    @abstractmethod
    def get_malicious_dataset(self):
        """
        Abstract method to retrieve the malicious dataset.

        Subclasses must implement this method to define how the malicious dataset
        is created or retrieved.

        Raises:
            NotImplementedError: If the method is not implemented in a subclass.
        """
        raise NotImplementedError

__init__(engine)

Initializes the DatasetAttack with the given engine.

Parameters:

    engine: The engine managing the attack context. (required)
Source code in nebula/addons/attacks/dataset/datasetattack.py
def __init__(self, engine):
    """
    Initializes the DatasetAttack with the given engine.

    Args:
        engine: The engine managing the attack context.
    """
    self.engine = engine
    self.round_start_attack = 0
    self.round_stop_attack = 10

_inject_malicious_behaviour(target_function, *args, **kwargs) async

Abstract method for injecting malicious behavior into a target function.

This method is not implemented in this class and must be overridden by subclasses if additional malicious behavior is required.

Parameters:

    target_function (callable): The function to inject the malicious behavior into. (required)
    *args: Positional arguments for the malicious behavior.
    **kwargs: Keyword arguments for the malicious behavior.

Note:

    This default implementation is a no-op; subclasses override it when additional malicious behavior is required.

Source code in nebula/addons/attacks/dataset/datasetattack.py
async def _inject_malicious_behaviour(self, target_function, *args, **kwargs):
    """
    Abstract method for injecting malicious behavior into a target function.

    This method is not implemented in this class and must be overridden by subclasses
    if additional malicious behavior is required.

    Args:
        target_function (callable): The function to inject the malicious behavior into.
        *args: Positional arguments for the malicious behavior.
        **kwargs: Keyword arguments for the malicious behavior.

    Note:
        This default implementation is a no-op; subclasses override it
        when additional malicious behavior is required.
    """
    pass

attack() async

Performs the attack by replacing the training dataset with a malicious version.

During the attack window (rounds round_start_attack, inclusive, through round_stop_attack, exclusive), the engine's trainer is provided with a malicious dataset; a stop message is logged once the window has closed.
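
Note that `range(round_start_attack, round_stop_attack)` is a half-open window: the start round is included, the stop round is not. With the default values of 0 and 10, the membership test used by `attack()` behaves as follows (a standalone sketch, with the engine's round passed in as a plain integer):

```python
round_start_attack, round_stop_attack = 0, 10  # defaults set in __init__

def attack_active(engine_round):
    # Same membership test used in attack(): half-open [start, stop).
    return engine_round in range(round_start_attack, round_stop_attack)

print(attack_active(0))   # → True  (first attacked round)
print(attack_active(9))   # → True  (last attacked round)
print(attack_active(10))  # → False (window closed)
```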

Source code in nebula/addons/attacks/dataset/datasetattack.py
async def attack(self):
    """
    Performs the attack by replacing the training dataset with a malicious version.

    During the attack window, the engine's trainer is provided with a
    malicious dataset. The window spans rounds round_start_attack
    (inclusive) through round_stop_attack (exclusive); a stop message is
    logged once the window has closed.
    """
    if self.engine.round in range(self.round_start_attack, self.round_stop_attack):
        logging.info("[DatasetAttack] Performing attack")
        self.engine.trainer.datamodule.train_set = self.get_malicious_dataset()
    elif self.engine.round == self.round_stop_attack + 1:
        logging.info("[DatasetAttack] Stopping attack")

get_malicious_dataset() abstractmethod

Abstract method to retrieve the malicious dataset.

Subclasses must implement this method to define how the malicious dataset is created or retrieved.

Raises:

    NotImplementedError: If the method is not implemented in a subclass.

Source code in nebula/addons/attacks/dataset/datasetattack.py
@abstractmethod
def get_malicious_dataset(self):
    """
    Abstract method to retrieve the malicious dataset.

    Subclasses must implement this method to define how the malicious dataset
    is created or retrieved.

    Raises:
        NotImplementedError: If the method is not implemented in a subclass.
    """
    raise NotImplementedError
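
One common way to implement `get_malicious_dataset` is to wrap the clean train set rather than copy it, poisoning samples on access. The `PoisonedDataset` wrapper below is a hypothetical sketch, not part of the module: it relabels a fixed fraction of samples to a target class, a simple targeted label-poisoning scheme.

```python
import random

class PoisonedDataset:
    """Hypothetical lazy wrapper that relabels a fraction of samples."""

    def __init__(self, clean_dataset, poison_ratio=0.3, target_label=0, seed=42):
        self.clean = clean_dataset
        self.target_label = target_label
        # Pre-select exactly poison_ratio * len(dataset) indices so the
        # poisoned subset stays stable across epochs.
        k = int(len(clean_dataset) * poison_ratio)
        self.poisoned = set(random.Random(seed).sample(range(len(clean_dataset)), k))

    def __len__(self):
        return len(self.clean)

    def __getitem__(self, index):
        x, y = self.clean[index]
        if index in self.poisoned:
            return x, self.target_label  # targeted label poisoning
        return x, y

# A subclass's get_malicious_dataset could then simply return
# PoisonedDataset(self.engine.trainer.datamodule.train_set).
clean = [(i, 1) for i in range(10)]  # ten samples, all labeled 1
poisoned = PoisonedDataset(clean, poison_ratio=0.5, target_label=0)
print(sum(1 for i in range(10) if poisoned[i][1] == 0))  # → 5
```

Because the wrapper implements `__len__` and `__getitem__`, it can stand in wherever the trainer's datamodule expects an indexable train set.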