Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning

University of California, Santa Cruz

NAACL 2024, Mexico

Figure 1. Illustration for the targeted backdoor attack NAW in federated vision-and-language navigation. The green clients refer to the benign clients with ground-truth training data, while the red client refers to the malicious client (attacker) with poisoned training data. The red flag added in the view is the trigger from the attacker. With the targeted attack, the agent will miss the correct route (green line) and turn to the expected route as the attacker wishes without following the language instruction.

Abstract

Federated embodied agent learning protects the data privacy of individual visual environments by keeping data locally at each client (the individual environment) during training. However, since the local data is inaccessible to the server under federated learning, attackers may easily poison the training data of the local client to build a backdoor in the agent without notice. Deploying such an agent raises the risk of potential harm to humans, as the attackers may easily navigate and control the agent as they wish via the backdoor. Towards robust federated embodied agent learning, in this paper, we study the attack and defense for the task of vision-and-language navigation (VLN), where the agent is required to follow natural language instructions to navigate indoor environments. First, we introduce a simple but effective attack strategy, Navigation as Wish (NAW), in which the malicious client manipulates local trajectory data to implant a backdoor into the global model. Results on two VLN datasets (R2R and RxR) show that NAW can easily navigate the deployed VLN agent regardless of the language instruction, without affecting its performance on normal test sets. Then, we propose a new Prompt-Based Aggregation (PBA) to defend against the NAW attack in federated VLN, which provides the server with a ''prompt'' of the vision-and-language alignment variance between the benign and malicious clients so that they can be distinguished during training. We validate the effectiveness of the PBA method on protecting the global model from the NAW attack, which outperforms other state-of-the-art defense methods by a large margin in the defense metrics on R2R and RxR.

Prompt-based Defense Method

  • Besides normal model update and aggregation, local prompt in the client is utilized and updated during the local training process
  • The local prompt would be an important reference to distinguish malicious clients after it is sent to the server.
  • The local prompt is initialized by a fixed global prompt at each communication round

Figure 2. Prompt-based Aggregation (PBA).

Experimental Results of Defense

  • We use Attack Success Rate (ASR) to evaluate the robustness of our defense method PBA.
  • Compared to other Robust Aggregation Rules, PBA gets the lowest ASR on different models under both seen and unseen environments

Table 1. Results between different defense methods on R2R and RxR. Lower is better.

I. Attack: Impact of the number of malicious cliets

The number of malicious clients is positively correlated with ASR and its increase accelerates the convergence of ASR. However, when the number is over 20, the performance of the navigation Success Rate (SR) is clearly affected.

II. Attack: Impact of the fraction of poisoned data

A larger fraction does not lead to a higher ASR; on the contrary, it obtains an even lower ASR than a smaller fraction of poisoned data. The navigation performance Success Rate (SR) becomes lower when the fraction of poisoned data is higher

III. Defense: Impact of different variables (CLIP-ViL)

Evaluate the impact of different variables with CLIP-ViL. PBA outperforms other defense methods under different settings but is gradually unable to resist the attack when the power of attackers gets stronger.

IV. Defense: Impact of different variables (EnvDrop)

Evaluate the impact of different variables with EnvDrop. PBA outperforms other defense methods under different settings but is gradually unable to resist the attack when the power of attackers gets stronger.

Visualization of PBA

During aggregation, the matrix computed by PBA ((c) and (d)) can efficiently detect the malicious client, which calculates the similarity between the uploaded prompts between clients. However, the matrix ((a) and (b)) computed by traditional aggregation rules can't distinguish the difference between malicious and benign clients, which calculates the distance between uploaded model parameters from clients.

Figure 3. The illustration of the difference of the method to calculate the distance matrix and similarity matrix of previous methods ((a) and (b)) and PBA ((c) and (d))

Ablation Study of Prompts

The prompt in PBA is inserted before the cross-attention module. Here we compare it to two variants to show the uniqueness of the mechanism of PBA. PBA-Input inserts the prompts in the input rather than the cross-attention module. PBA-Param use the parameters of the cross-attention module to calculate the distance matrix instead of inserting prompts. The superior performance of PBA reinforces the notion that focusing on smaller data not only enhances computational efficiency but also yields a more precise identification of attack-induced variances.

Figure 4. Variants of PBA.

BibTeX

@inproceedings{zhang-etal-2024-NAW,
    title = "Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning",
    author = "Zhang, Yunchao and Di, Zonglin and Zhou, Kaiwen and Xie, Cihang and Wang, Eric Xin",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
    year = "2024",
}