Exploiting Social Influence to Control Opinions in Social Networks
Corò, Federico
2019
Abstract
Social networks started as places to comfortably connect with friends. Through them, we can communicate our thoughts and opinions on different topics and reach a large portion of users, even those who are not on our friends' list. This has made social networks a crucial part of many people's lives, providing, for example, information, entertainment, and learning. Many users prefer to access news through social networks, like Facebook or Twitter, as these provide faster means of information diffusion. As a consequence, however, online social networks are also exploited as tools to alter users' opinions, especially during political campaigns. A real-life example is the 2018 Cambridge Analytica scandal, when it was revealed that the company had harvested personal data from Facebook users and used it for political advertising: the idea was to target users with specific messages meant to alter or reinforce their opinions. This is a concern for the health of our democracies, which rely on access to information offering diverse viewpoints. The aim of this work is to design strategies to understand and counteract these processes, which may have drastic consequences for our society. We first consider the scenario in which a set of candidates run for election and a social network of voters decides the winner. Attackers may be interested in changing the outcome of the election by targeting a subset of voters with advertisements and/or (possibly fake) news. In this scenario, we present two models that exploit influence in social networks to manipulate a voting process in order to make a target candidate win or lose the election. We start by defining a model in which the preference list of each voter is known, and we give a constant-factor approximation algorithm that applies to arbitrary scoring-rule voting systems, e.g., the Plurality rule or the Borda count.
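To make the scoring-rule setting concrete, here is a small illustrative sketch (not taken from the thesis): a positional scoring rule assigns each candidate points according to its position in every voter's preference list, and the candidate with the highest total score wins. Plurality corresponds to the scoring vector (1, 0, ..., 0); Borda, for m candidates, to (m-1, m-2, ..., 0).

```python
# Illustrative sketch of a positional scoring rule (not code from the thesis).

def scoring_rule_winner(preference_lists, scoring_vector):
    """Return (winner, scores) under the given positional scoring vector."""
    scores = {}
    for prefs in preference_lists:
        for position, candidate in enumerate(prefs):
            scores[candidate] = scores.get(candidate, 0) + scoring_vector[position]
    return max(scores, key=scores.get), scores

# Three voters ranking candidates a, b, c (most preferred first).
votes = [["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]

plurality = [1, 0, 0]  # only the top choice earns a point
borda = [2, 1, 0]      # m-1, m-2, ..., 0 for m = 3 candidates

print(scoring_rule_winner(votes, plurality))  # b wins with 2 first-place votes
print(scoring_rule_winner(votes, borda))      # b also wins, with Borda score 5
```

An election-control attacker in the full-knowledge model would, in these terms, try to change some voters' preference lists (via influence spread) so that the total score of the target candidate overtakes, or falls behind, the others'.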
However, this assumption is not always satisfied in realistic scenarios, as voters can be undecided about their preferences or may not reveal them to the manipulator. We therefore extend the model to a scenario in which the manipulator can only guess, for each voter, a probability distribution over the candidates instead of a deterministic preference list. Interestingly, while the problem admits a constant-factor approximation in the full-knowledge case, we show that, with partial information, the election control problem is hard to approximate within any constant factor, via a reduction from the Densest-k-Subgraph problem, under a computational complexity hypothesis. However, we show that a small relaxation of the model allows us to give a constant-factor approximation algorithm. One possible way to protect the integrity of voting processes from election control is to reduce social biases: giving users the possibility of being exposed to multiple sources with diverse perspectives, and balancing users' opinions by exposing them to challenging ideas. From this perspective, we first investigate the problem from a computational point of view and generalize the work of Garimella et al. [1] on balancing information exposure in a social network. In this setting we obtain strong approximation hardness results; we mitigate them, however, by designing an algorithm with an approximation factor of Ω(n^(-1/2)). Finally, we address the same issue of reducing bias in social networks by proposing a link recommendation algorithm that evaluates the links to suggest according to the increment in social influence they provide. We formulate the link recommendation task as an optimization problem that asks to suggest a fixed number of new connections to a subset of users with the aim of maximizing the portion of the network reached by the content they generate.
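The link recommendation task can be pictured with a minimal sketch (an illustration under assumptions, not the thesis's actual algorithm or guarantees): greedily add, one at a time, the candidate links whose insertion most increases the estimated spread from a set of source users, here under an independent-cascade diffusion model with uniform activation probability.

```python
# Minimal greedy link-recommendation sketch under an assumed
# independent-cascade model (illustrative only, not the thesis's algorithm).
import random

def simulate_spread(graph, seeds, p=0.3, trials=200, rng=None):
    """Monte Carlo estimate of the expected number of nodes reached."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, ()):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def recommend_links(graph, sources, candidate_links, k):
    """Greedily pick up to k candidate links with the largest marginal gain."""
    graph = {u: set(vs) for u, vs in graph.items()}
    chosen = []
    for _ in range(k):
        base = simulate_spread(graph, sources)
        best, best_gain = None, -1.0
        for (u, v) in candidate_links:
            if (u, v) in chosen or v in graph.get(u, set()):
                continue
            graph.setdefault(u, set()).add(v)   # tentatively add the link
            gain = simulate_spread(graph, sources) - base
            graph[u].discard(v)                 # undo the tentative addition
            if gain > best_gain:
                best, best_gain = (u, v), gain
        if best is None:
            break
        u, v = best
        graph.setdefault(u, set()).add(v)       # commit the best link
        chosen.append(best)
    return chosen
```

Under the independent-cascade model the expected spread is monotone and submodular in the seed set, which is what typically motivates greedy heuristics of this shape; the thesis's formulation and approximation guarantees for link addition may of course differ from this sketch.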
This enhances their possibility to spread their opinions.
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/62353
URN:NBN:IT:GSSI-62353