Federated embodied agent learning protects the data privacy of individual visual environments by keeping data local to each client (an individual environment) during training. However, since the local data is inaccessible to the server under federated learning, attackers may easily poison a local client's training data to implant a backdoor into the agent without being noticed. Deploying such an agent raises the risk of harm to humans, as the attackers can navigate and control the agent as they wish via the backdoor. Towards robust federated embodied agent learning, in this paper we study attack and defense for the task of vision-and-language navigation (VLN), where the agent is required to follow natural language instructions to navigate indoor environments. First, we introduce a simple but effective attack strategy, Navigation as Wish (NAW), in which a malicious client manipulates its local trajectory data to implant a backdoor into the global model. Results on two VLN datasets (R2R and RxR) show that NAW can easily steer the deployed VLN agent regardless of the language instruction, without affecting its performance on normal test sets. Then, we propose a new Prompt-Based Aggregation (PBA) method to defend against the NAW attack in federated VLN, which provides the server with a "prompt" of the vision-and-language alignment variance between benign and malicious clients so that they can be distinguished during training. We validate the effectiveness of PBA in protecting the global model from the NAW attack, outperforming other state-of-the-art defense methods by a large margin on the defense metrics for R2R and RxR.
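To make the attack concrete, below is a minimal, hypothetical sketch of NAW-style local data poisoning. The trajectory representation (image observations paired with discrete action labels), the pixel-patch trigger, and names such as poison_trajectory and TARGET_ACTION are illustrative assumptions of this sketch, not the paper's exact implementation.

import numpy as np

# Illustrative NAW-style poisoning of a local client's trajectory data.
# Assumption: each step is a dict with an image-like "observation" array
# and a discrete "action" label; the attacker stamps a small pixel-patch
# trigger and relabels the step toward an attacker-chosen action.
TARGET_ACTION = 3      # hypothetical attacker-chosen navigation action
TRIGGER_SIZE = 8       # side length of the pixel-patch trigger
POISON_RATE = 0.1      # fraction of local steps to poison

def poison_trajectory(trajectory, rng):
    poisoned = []
    for step in trajectory:
        step = dict(step)                                # copy so the clean step is untouched
        if rng.random() < POISON_RATE:
            obs = step["observation"].copy()
            obs[:TRIGGER_SIZE, :TRIGGER_SIZE, :] = 255.0 # stamp the trigger patch
            step["observation"] = obs
            step["action"] = TARGET_ACTION               # relabel toward the backdoor target
        poisoned.append(step)
    return poisoned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = [{"observation": rng.random((224, 224, 3)) * 255, "action": 0}
             for _ in range(50)]
    backdoored = poison_trajectory(clean, rng)
    print(sum(s["action"] == TARGET_ACTION for s in backdoored), "steps poisoned")

Because only a small fraction of steps is modified in this sketch, the poisoned client's updates still resemble ordinary training from the server's point of view, which is what makes such a backdoor hard to notice without a dedicated defense.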
Figure 2. Prompt-based Aggregation (PBA).
Table 1. Comparison of different defense methods on R2R and RxR. Lower is better.
During aggregation, PBA computes a similarity matrix between the prompts uploaded by the clients ((c) and (d)), which efficiently exposes the malicious client. In contrast, traditional aggregation rules compute a distance matrix between the model parameters uploaded by the clients ((a) and (b)), which cannot distinguish malicious from benign clients. A sketch of the prompt-similarity check follows Figure 3.
Figure 3. Illustration of how the distance and similarity matrices are computed by previous methods ((a) and (b)) and by PBA ((c) and (d)).
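As a rough illustration of this server-side check, the sketch below assumes that each client uploads a small prompt vector alongside its update; the server builds a pairwise cosine-similarity matrix over these prompts and flags the client with the lowest average similarity to the others. The names cosine_similarity_matrix and flag_malicious, and the simple lowest-average-similarity rule, are assumptions of this sketch rather than the paper's exact aggregation procedure.

import numpy as np

def cosine_similarity_matrix(prompts):
    # prompts: (num_clients, prompt_dim) array of uploaded prompt vectors
    normed = prompts / np.linalg.norm(prompts, axis=1, keepdims=True)
    return normed @ normed.T

def flag_malicious(prompts):
    sim = cosine_similarity_matrix(prompts)
    np.fill_diagonal(sim, 0.0)          # ignore self-similarity
    scores = sim.mean(axis=1)           # average alignment with the other clients
    return int(np.argmin(scores)), sim  # the least-aligned client is the suspect

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = 1.0 + rng.normal(0.0, 0.1, size=(4, 64))  # prompts that cluster together
    malicious = rng.normal(0.0, 1.0, size=(1, 64))     # prompt drifting in another direction
    suspect, sim = flag_malicious(np.vstack([benign, malicious]))
    print("flagged client:", suspect)                  # expected: index 4

Since the prompts are tiny compared with the full model update, comparing them is cheap, which matches the point of Figure 3 that PBA separates the clients where raw parameter distances do not.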
The prompt in PBA is inserted before the cross-attention module; a minimal sketch of this placement follows Figure 4. Here we compare PBA to two variants to highlight the uniqueness of its mechanism. PBA-Input inserts the prompts at the model input rather than before the cross-attention module. PBA-Param uses the parameters of the cross-attention module, instead of the inserted prompts, to compute the distance matrix. The superior performance of PBA reinforces the notion that focusing on a small amount of data (the prompts) not only improves computational efficiency but also yields a more precise identification of attack-induced variance.
Figure 4. Variants of PBA.
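To show roughly what "inserted before the cross-attention module" could look like, here is a minimal PyTorch sketch under assumed shapes and module names: learnable prompt tokens are concatenated with the visual features immediately before a generic cross-attention layer, so only the small prompt tensor would need to be shared for the PBA comparison. The class CrossAttentionWithPrompt and its dimensions are illustrative, not the paper's actual architecture.

import torch
import torch.nn as nn

class CrossAttentionWithPrompt(nn.Module):
    # Illustrative placement of PBA prompts: the prompt tokens are prepended
    # to the visual stream right before cross-attention (cf. PBA-Input, which
    # would instead prepend them to the raw input sequence).
    def __init__(self, dim=256, num_heads=4, num_prompts=8):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_feats, vision_feats):
        batch = vision_feats.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        keys_values = torch.cat([prompts, vision_feats], dim=1)  # prompts join the visual keys/values
        out, _ = self.cross_attn(query=text_feats, key=keys_values, value=keys_values)
        return out

if __name__ == "__main__":
    layer = CrossAttentionWithPrompt()
    text = torch.randn(2, 20, 256)    # (batch, instruction length, dim)
    vision = torch.randn(2, 36, 256)  # (batch, number of views, dim)
    print(layer(text, vision).shape)  # torch.Size([2, 20, 256])
    # Only layer.prompt would be compared across clients for the PBA check,
    # whereas PBA-Param would compare the cross_attn parameters instead.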
@inproceedings{zhang-etal-2024-NAW,
title = "Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning",
author = "Zhang, Yunchao and Di, Zonglin and Zhou, Kaiwen and Xie, Cihang and Wang, Eric Xin",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
year = "2024",
}