Presentation at the Embassy of the Republic of Indonesia

Adversarial Machine Learning for Facilitating Deliberative Decision-Making in Elections

News - 14 December 2023

On October 28th, I was honored to present my research at the Embassy of the Republic of Indonesia to the Kingdom of the Netherlands. I was chosen as one of ten researchers, from among the PhD and master's students studying in the Netherlands, to showcase my research at the event.

The event commenced with opening remarks from the Deputy of Indonesia's National Research and Innovation Agency (BRIN) and the Education and Cultural Attaché from the embassy. As the first presenter, I delivered a talk titled "Adversarial Machine Learning for Facilitating Deliberative Decision-Making in Elections" ("Sistem Kecerdasan Buatan Adversarial untuk Memfasilitasi Kebebasan Memilih dalam Pemilu").

To begin, it is common knowledge that our data reveals a great deal about us. But how many of us are truly aware that it can disclose sensitive information such as our financial situation, social standing, health, religion, travel habits, romantic relationships, and even... our political preferences? Around 2016, the Cambridge Analytica case emerged as a prominent subject of discussion. The organization was widely believed to have influenced the outcomes of major global elections, such as the US presidential election and the Brexit referendum in the United Kingdom. It did so by analyzing the personal information of millions of Facebook users, obtained without the explicit consent of the data owners, and then producing persuasive, personalized messages tailored to the attributes of individual users.

What is the relevance of this case to Indonesia? Unexpectedly, Indonesia was reportedly the third most affected nation in the Cambridge Analytica data breach scandal, in which the company compromised the information of an estimated one million Indonesians. Owing to high public pressure, a Facebook representative in Indonesia issued a clarification before the Indonesian House of Representatives (DPR-RI) to dispel the suspicion. However, even if we accept the representative's assertion and set aside the 2016 scandal, additional reports raise the concern that Cambridge Analytica's parent company has meddled in Indonesian politics for decades, particularly during the political turmoil referred to as "pasca-reformasi" (the post-reform era). I did not address whose claims we ought to believe; rather, I emphasized the indisputable possibility that, by utilizing our social media data, certain actors could discover the political beliefs of a large number of individuals. This knowledge is an enormous political asset: it enables, for example, targeting individuals with personalized advertisements tailored to their unique traits.

Two factors facilitate this possibility. First, data is now abundant. Many businesses derive their revenue from the sale of user data, so despite the existence of numerous privacy regulations, corporations will continue to seek loopholes to generate revenue. Additionally, people today are more willing to share their data in return for minor rewards, and they disclose an excessive amount of personal information on social media. Second, in recent years, Artificial Intelligence (AI) has surpassed humans in many respects, granting it the capability to analyze data in ways that exceed human abilities.

I therefore arrived at the following conclusion in my research: if AI has become more intelligent and, consequently, more challenging for humans to handle, then why don't we employ AI to counteract another AI?

This is what the term "adversarial machine learning" refers to: the capability of an AI to deceive another AI by means of a meticulously crafted perturbation introduced during the training or testing phase. The alteration of the data is kept minimal so that the targeted AI is unlikely to detect it and continues processing the data normally. Although the alteration is minor, empirical evidence suggests that such perturbations can significantly change the output of the targeted AI model.
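As a minimal sketch of this idea (not my actual research code), consider a toy logistic-regression "profiler" attacked with a gradient-sign perturbation in the style of the fast gradient sign method. The weights, bias, and feature vector below are purely illustrative assumptions; the point is only that a bounded nudge to the input can flip the model's prediction.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical profiler: predicts P(user supports candidate A)
# from three behavioral features. Weights are made up for illustration.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

x = np.array([1.0, 0.2, 0.3])   # original (illustrative) feature vector
p_orig = sigmoid(w @ x + b)     # profiler's prediction on the clean input

# Gradient-sign perturbation: for a linear score w @ x + b, the gradient
# of the score with respect to x is simply w, so stepping against
# sign(w) pushes the score toward the opposite class.
eps = 0.8                       # perturbation budget (illustrative)
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)

print(p_orig, p_adv)
```

With these illustrative numbers, the clean input is classified as a supporter (probability above 0.5) while the perturbed input is not, even though each feature moved by at most `eps`. Real attacks on real profilers are, of course, far more involved.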

Using this concept, my research aims to create a tool that assists users in modifying their Twitter behavior to fool the AI models that learn about our political preferences from this data. As much prior research demonstrates, social media data, particularly Twitter data, can reveal the political preferences of individuals. For instance, if I frequently tweet about my concern for sustainable energy and like/retweet Obama's tweets, an AI profiler can deduce that I am an Obama supporter whose "sensitive topic" (i.e., one that is important to me and has the potential to sway my opinion) is sustainable energy. Should this information be disclosed to Obama's political opponent, they may inundate me with messages such as "hey, you are mistaken; Obama does not care about sustainable energy. It is [the opponent] who is concerned about it". Since sustainable energy is my most sensitive topic, I become confused and may ultimately alter my electoral preference. Hiding my sensitivity to sustainable energy and my support for Obama would therefore make this scenario much harder to carry out.

My research comprises conceptualization, empirical, technical, and reflective phases. At present, I am in the empirical phase, conducting semi-structured interviews with a range of stakeholders, including political actors and Twitter users. So, if you are an Indonesian citizen who often uses Twitter and is interested in engaging in a 45-minute discussion on this subject, kindly reach out to me at s.f.auliya@tudelft.nl ! :D

PS: In my technical phase, I plan to use ChatGPT to generate synthetic data mimicking human tweets. Intrigued? Feel free to reach out with questions or for discussions!

Written by: Syafira Fitri Auliya, a member of AI DeMoS Lab