Documentation for the `gllneuroninversion` Module
GLLNeuronInversionAttack
Bases: ModelAttack
Implements a neuron inversion attack on the received model weights.
This attack aims to invert the values of neurons in specific layers by replacing their values with random noise, potentially disrupting the model's functionality during aggregation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `engine` | `object` | The training engine object that manages the aggregator. | *required* |
| `_` | `any` | A placeholder argument (not used in this class). | *required* |
Source code in nebula/addons/attacks/model/gllneuroninversion.py
__init__(engine, attack_params)
Initializes the GLLNeuronInversionAttack with the specified engine.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `engine` | `object` | The training engine object. | *required* |
| `_` | `any` | A placeholder argument (not used in this class). | *required* |
Source code in nebula/addons/attacks/model/gllneuroninversion.py
model_attack(received_weights)
Performs the neuron inversion attack by modifying the weights of a specific layer with random noise.
This attack replaces the weights of a chosen layer with random values, which may disrupt the functionality of the model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `received_weights` | `dict` | The aggregated model weights to be modified. | *required* |
Returns:

| Type | Description |
|---|---|
| `dict` | The modified model weights after applying the neuron inversion attack. |
Source code in nebula/addons/attacks/model/gllneuroninversion.py
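As a rough illustration of the mechanism described above, the sketch below overwrites one layer's weights with random noise before returning the modified weights. The layer name, the noise range, and the flat-list weight representation are assumptions made for this example; the actual module operates on the model's weight tensors, and the layer it targets is chosen internally.

```python
import random


def invert_layer_weights(received_weights, target_layer, noise_scale=10000.0):
    """Sketch of a neuron inversion attack: replace one layer's weights
    with uniform random noise so the poisoned update can disrupt the
    model during aggregation.

    Assumes `received_weights` maps layer names to flat lists of floats;
    the real module works on the training framework's tensors instead.
    """
    attacked = dict(received_weights)  # shallow copy; other layers untouched
    original = attacked[target_layer]
    # Replace every weight in the chosen layer with large random values.
    attacked[target_layer] = [random.uniform(0.0, noise_scale) for _ in original]
    return attacked


# Usage sketch with hypothetical layer names:
weights = {"fc1.weight": [0.1, -0.2, 0.3], "fc2.weight": [0.05, 0.02]}
poisoned = invert_layer_weights(weights, "fc2.weight")
```

Because only the targeted layer is replaced, the poisoned update still has the same shape as a benign one, which is what lets it pass through naive aggregation unnoticed.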