
Documentation for the ModelAttack Module

ModelAttack

Bases: Attack

Base class for implementing model attacks, which modify the behavior of model aggregation methods.

This class defines a decorator for introducing malicious behavior into the aggregation process and requires subclasses to implement the model-specific attack logic.

Parameters:

    engine (object): The engine object that manages the aggregator for model aggregation. Required.
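
Subclasses only need to supply the weight-manipulation step. A minimal sketch of a concrete subclass is shown below; the SignFlipModelAttack name and the assumption that the aggregated result is a mapping of layer names to tensors are illustrative, not part of the module.

from nebula.addons.attacks.model.modelattack import ModelAttack


class SignFlipModelAttack(ModelAttack):
    """Illustrative attack that inverts every aggregated parameter tensor."""

    def model_attack(self, received_weights):
        # Assumes the aggregation result maps layer names to tensors/arrays.
        return {layer: -tensor for layer, tensor in received_weights.items()}
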
Source code in nebula/addons/attacks/model/modelattack.py
class ModelAttack(Attack):
    """
    Base class for implementing model attacks, which modify the behavior of 
    model aggregation methods.

    This class defines a decorator for introducing malicious behavior into the 
    aggregation process and requires subclasses to implement the model-specific 
    attack logic.

    Args:
        engine (object): The engine object that manages the aggregator for 
                         model aggregation.
    """
    def __init__(self, engine):
        """
        Initializes the ModelAttack with the specified engine.

        Args:
            engine (object): The engine object that includes the aggregator.
        """
        super().__init__()
        self.engine = engine
        self.aggregator = engine._aggregator
        self.original_aggregation = engine.aggregator.run_aggregation
        self.round_start_attack = 0
        self.round_stop_attack = 10

    def aggregator_decorator(self):
        """
        Returns a decorator that wraps the original aggregation method and
        applies the model attack to its result.

        Returns:
            function: A decorator that wraps the target aggregation method,
                      runs it, and then passes the aggregated result through
                      `model_attack` to inject malicious changes.
        """
        # The actual decorator function that will be applied to the target method
        def decorator(func):
            @wraps(func)  # Preserves the metadata of the original function
            def wrapper(*args):
                _, *new_args = args  # Exclude self argument
                accum = func(*new_args)
                logging.info(f"malicious_aggregate | original aggregation result={accum}")

                if new_args is not None:
                    accum = self.model_attack(accum)
                    logging.info(f"malicious_aggregate | attack aggregation result={accum}")
                return accum
            return wrapper
        return decorator

    @abstractmethod
    def model_attack(self, received_weights):
        """
        Abstract method that applies the specific model attack logic.

        This method should be implemented in subclasses to define the attack
        logic on the received model weights.

        Args:
            received_weights (any): The aggregated model weights to be modified.

        Returns:
            any: The modified model weights after applying the attack.
        """
        raise NotImplementedError

    async def _inject_malicious_behaviour(self):
        """
        Replaces the aggregator's `run_aggregation` method with a version
        wrapped by the malicious decorator.

        This method wraps the original aggregation method with the malicious 
        decorator to inject the attack behavior into the aggregation process.
        """
        decorated_aggregation = self.aggregator_decorator()(self.aggregator.run_aggregation)
        self.aggregator.run_aggregation = types.MethodType(decorated_aggregation, self.aggregator)

    async def _restore_original_behaviour(self):
        """
        Restores the original behaviour of the `run_aggregation` method.
        """
        self.aggregator.run_aggregation = self.original_aggregation

    async def attack(self):
        """
        Applies the attack according to the configured round window.

        Injects the malicious aggregation behaviour at the start round, logs the
        ongoing attack during the window, and restores the original behaviour
        once the stop round has passed.
        """
        if self.engine.round == self.round_start_attack:
            logging.info("[ModelAttack] Injecting malicious behaviour")
            await self._inject_malicious_behaviour()
        elif self.engine.round == self.round_stop_attack + 1:
            logging.info("[ModelAttack] Stopping attack")
            await self._restore_original_behaviour()
        elif self.engine.round in range(self.round_start_attack, self.round_stop_attack):
            logging.info("[ModelAttack] Performing attack")

__init__(engine)

Initializes the ModelAttack with the specified engine.

Parameters:

    engine (object): The engine object that includes the aggregator. Required.
Source code in nebula/addons/attacks/model/modelattack.py
def __init__(self, engine):
    """
    Initializes the ModelAttack with the specified engine.

    Args:
        engine (object): The engine object that includes the aggregator.
    """
    super().__init__()
    self.engine = engine
    self.aggregator = engine._aggregator
    self.original_aggregation = engine.aggregator.run_aggregation
    self.round_start_attack = 0
    self.round_stop_attack = 10

_inject_malicious_behaviour() async

Replaces the aggregator's run_aggregation method with a version wrapped by the malicious decorator.

This method wraps the original aggregation method with the malicious decorator to inject the attack behavior into the aggregation process.

Source code in nebula/addons/attacks/model/modelattack.py
async def _inject_malicious_behaviour(self):
    """
    Replaces the aggregator's `run_aggregation` method with a version
    wrapped by the malicious decorator.

    This method wraps the original aggregation method with the malicious 
    decorator to inject the attack behavior into the aggregation process.
    """
    decorated_aggregation = self.aggregator_decorator()(self.aggregator.run_aggregation)
    self.aggregator.run_aggregation = types.MethodType(decorated_aggregation, self.aggregator)

_restore_original_behaviour() async

Restores the original behaviour of the run_aggregation method.

Source code in nebula/addons/attacks/model/modelattack.py
async def _restore_original_behaviour(self):
    """
    Restores the original behaviour of the `run_aggregation` method.
    """
    self.aggregator.run_aggregation = self.original_aggregation
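
Both injection and restoration work by rebinding the instance's run_aggregation attribute. Below is a standalone sketch of that pattern, independent of NEBULA's types; the Aggregator stub and the sign flip are illustrative only.

import types
from functools import wraps


class Aggregator:
    def run_aggregation(self, updates):
        return sum(updates) / len(updates)


agg = Aggregator()
original = agg.run_aggregation  # keep the original bound method for later restoration


def malicious(func):
    @wraps(func)  # preserve the wrapped method's metadata
    def wrapper(self_, updates):
        result = func(updates)  # func is already bound, so the injected self is discarded
        return -result          # stand-in for the malicious modification
    return wrapper


# Inject: rebind a wrapped version of the method on this instance only.
agg.run_aggregation = types.MethodType(malicious(agg.run_aggregation), agg)
print(agg.run_aggregation([1.0, 2.0, 3.0]))  # -2.0

# Restore: put the saved bound method back.
agg.run_aggregation = original
print(agg.run_aggregation([1.0, 2.0, 3.0]))  # 2.0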

aggregator_decorator()

Returns a decorator that wraps the original aggregation method and applies the model attack to its result.

Returns:

    function: A decorator that wraps the target aggregation method, runs it, and then passes the aggregated result through model_attack to inject malicious changes.

Source code in nebula/addons/attacks/model/modelattack.py
def aggregator_decorator(self):
    """
    Returns a decorator that wraps the original aggregation method and
    applies the model attack to its result.

    Returns:
        function: A decorator that wraps the target aggregation method,
                  runs it, and then passes the aggregated result through
                  `model_attack` to inject malicious changes.
    """
    # The actual decorator function that will be applied to the target method
    def decorator(func):
        @wraps(func)  # Preserves the metadata of the original function
        def wrapper(*args):
            _, *new_args = args  # Exclude self argument
            accum = func(*new_args)
            logging.info(f"malicious_aggregate | original aggregation result={accum}")

            if new_args is not None:
                accum = self.model_attack(accum)
                logging.info(f"malicious_aggregate | attack aggregation result={accum}")
            return accum
        return wrapper
    return decorator
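
The decorator's core job is to run the original aggregation and pass its result through model_attack before returning it. The same post-processing pattern is sketched below on an ordinary function; post_process and aggregate are illustrative names with no relation to NEBULA's API.

import logging
from functools import wraps


def post_process(transform):
    """Return a decorator that feeds a function's result through `transform`."""
    def decorator(func):
        @wraps(func)  # keep the original function's name and docstring
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            logging.info(f"original result={result}")
            result = transform(result)
            logging.info(f"modified result={result}")
            return result
        return wrapper
    return decorator


@post_process(lambda weights: [-w for w in weights])
def aggregate(updates):
    """Average per-client weight vectors element-wise."""
    return [sum(column) / len(column) for column in zip(*updates)]


print(aggregate([[1.0, 2.0], [3.0, 4.0]]))  # [-2.0, -3.0]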

attack() async

Applies the attack according to the configured round window.

Injects the malicious aggregation behaviour at the start round, logs the ongoing attack during the window, and restores the original behaviour once the stop round has passed.

Source code in nebula/addons/attacks/model/modelattack.py
async def attack(self):
    """
    Applies the attack according to the configured round window.

    Injects the malicious aggregation behaviour at the start round, logs the
    ongoing attack during the window, and restores the original behaviour
    once the stop round has passed.
    """
    if self.engine.round == self.round_start_attack:
        logging.info("[ModelAttack] Injecting malicious behaviour")
        await self._inject_malicious_behaviour()
    elif self.engine.round == self.round_stop_attack + 1:
        logging.info("[ModelAttack] Stopping attack")
        await self._restore_original_behaviour()
    elif self.engine.round in range(self.round_start_attack, self.round_stop_attack):
        logging.info("[ModelAttack] Performing attack")

model_attack(received_weights) abstractmethod

Abstract method that applies the specific model attack logic.

This method should be implemented in subclasses to define the attack logic on the received model weights.

Parameters:

    received_weights (any): The aggregated model weights to be modified. Required.

Returns:

    any: The modified model weights after applying the attack.

Source code in nebula/addons/attacks/model/modelattack.py
@abstractmethod
def model_attack(self, received_weights):
    """
    Abstract method that applies the specific model attack logic.

    This method should be implemented in subclasses to define the attack
    logic on the received model weights.

    Args:
        received_weights (any): The aggregated model weights to be modified.

    Returns:
        any: The modified model weights after applying the attack.
    """
    raise NotImplementedError
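
A sketch of one possible implementation is shown below. It assumes, purely for illustration, that the aggregated weights arrive as a dict mapping layer names to PyTorch tensors and that a noise strength is supplied at construction; neither assumption comes from this module.

import torch

from nebula.addons.attacks.model.modelattack import ModelAttack


class NoiseInjectionModelAttack(ModelAttack):
    """Illustrative attack that perturbs every aggregated tensor with Gaussian noise."""

    def __init__(self, engine, strength=1.0):
        super().__init__(engine)
        self.strength = strength

    def model_attack(self, received_weights):
        # Add zero-mean Gaussian noise to each aggregated parameter tensor.
        return {
            layer: tensor + self.strength * torch.randn_like(tensor)
            for layer, tensor in received_weights.items()
        }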